CN108334804A - Image processing apparatus and method and image processing system - Google Patents

Image processing apparatus and method and image processing system

Info

Publication number
CN108334804A
CN108334804A (application CN201710051462.XA; granted publication CN108334804B)
Authority
CN
China
Prior art keywords
region
feature
face
image
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710051462.XA
Other languages
Chinese (zh)
Other versions
CN108334804B (en)
Inventor
李荣军
黄耀海
谭诚
那森
松下昌弘
清水智之
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to CN201710051462.XA priority Critical patent/CN108334804B/en
Publication of CN108334804A publication Critical patent/CN108334804A/en
Application granted granted Critical
Publication of CN108334804B publication Critical patent/CN108334804B/en
Legal status: Active (granted)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an image processing apparatus and method and an image processing system. One aspect of the present invention discloses an image processing apparatus comprising: a unit configured to obtain a face image; a unit configured to determine at least one region pair from the face image, wherein the regions of a region pair are mutually symmetric about one of the symmetry lines in the face; and a unit configured to determine a feature from one region of the region pair based on a first direction and to determine a feature from the other region of the region pair based on a second direction, wherein the first direction and the second direction are mutually symmetric about the symmetry line of the region pair. According to the present invention, the features used for human image retrieval (HIR) processing or face identification (FID) processing will be more accurate, which improves the precision of HIR or FID processing.

Description

Image processing apparatus and method and image processing system
Technical field
The present invention relates to an image processing apparatus and method, and to an image processing system.
Background art
During video surveillance, in order to obtain information about a particular person, such as that person's behavior at a specific location (for example, an airport or a supermarket) during a specific time period, Human Image Retrieval (HIR) techniques are commonly used to retrieve, from captured video frames, images related to that person (for example, face images), so that an operator can analyze the person's behavior from the retrieved images. During identification processing, in order to identify whether a particular person is one of the people registered in a particular system (for example, a gate control system or a payment system), Face Identification (FID) techniques are usually used to identify whether the person's face image matches the registered face image of one of the registered people.
In general, the precision of HIR processing (for example, face image retrieval) or FID processing depends on the precision of the features extracted from the input face image (that is, the query image) and/or from the recorded/registered face images (for example, captured video frames or registered images). To obtain more accurate features and thereby improve the precision of face image retrieval and FID processing, Chinese patent CN102136062A discloses an example technique comprising: extracting multi-resolution Local Binary Patterns (LBP) features (that is, feature vectors) from the input face image; generating an index based on the multi-resolution LBP features extracted from the recorded/registered face images; and retrieving recorded/registered face images similar to the input face image based on the extracted multi-resolution LBP features and the generated index.
However, during video surveillance or identification processing, the faces in both the recorded/registered face images and the input face image are often accompanied by various occlusions, for example, occlusions caused by overlap between several people (as shown in Fig. 1A) or occlusions caused by attachments such as hair (as shown in Fig. 1B). As shown in Figs. 1A and 1B, these occlusions make part of the face invisible, which makes the features extracted from that part unreliable. In other words, the features extracted from that part directly affect the precision of the features of the whole face. Therefore, during video surveillance or identification processing, when the face in a recorded/registered face image or in the input face image is occluded, the features used for HIR or FID processing will be inaccurate, and the precision of HIR or FID processing will suffer.
Summary of the invention
Therefore, in view of the above description in the Background art section, the present disclosure aims to solve the problems described above.
According to one aspect of the present invention, an image processing apparatus is provided, comprising: an image acquisition unit configured to obtain a face image; a region pair determination unit configured to determine at least one region pair from the face image, wherein the regions of a region pair are mutually symmetric about one of the symmetry lines in the face; and a feature determination unit configured to determine a feature from one region of the region pair based on a first direction and to determine a feature from the other region of the region pair based on a second direction, wherein the first direction and the second direction are mutually symmetric about the symmetry line of the region pair. The symmetry lines in the face include at least the symmetry line of the face or the symmetry line of a component of the face. For example, the first direction is clockwise and the second direction is counterclockwise.
With the present invention, when the face in a recorded/registered face image or in the input face image is occluded, the features used for HIR or FID processing will be more accurate, which improves the precision of HIR or FID processing.
Further characteristic features and advantages of the present invention will become apparent from the following description with reference to the accompanying drawings.
Description of the drawings
The accompanying drawings, which are incorporated in and form part of the specification, illustrate embodiments of the present invention and, together with the description, serve to explain the principles of the present invention.
Figs. 1A and 1B schematically show example faces with occlusions in face images.
Fig. 2 schematically shows the symmetry line of an example face and the symmetry lines of components of the example face.
Figs. 3A and 3B schematically show example regions of interest (ROIs) of an example face in a face image.
Fig. 4 is a block diagram schematically showing a hardware configuration in which the technique according to an embodiment of the present invention can be implemented.
Fig. 5 is a block diagram showing the structure of an image processing apparatus according to a first embodiment of the present invention.
Fig. 6 schematically shows a flowchart of image processing according to the first embodiment of the present invention.
Fig. 7 schematically shows a flowchart of the feature determination step S630 shown in Fig. 6.
Figs. 8A to 8F schematically show the manner in which the feature determination unit 530 shown in Fig. 5 determines orientation-independent features.
Figs. 9A to 9F schematically show the manner in which the feature determination unit 530 shown in Fig. 5 determines orientation-dependent features.
Fig. 10 schematically shows another flowchart of the feature determination step S630 shown in Fig. 6.
Fig. 11 schematically shows another flowchart of the feature determination step S630 shown in Fig. 6.
Fig. 12 is a block diagram showing the structure of an image processing apparatus according to a second embodiment of the present invention.
Fig. 13 shows the arrangement of an example image processing system according to the present invention.
Fig. 14 shows the arrangement of another example image processing system according to the present invention.
Detailed description of the embodiments
Exemplary embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be noted that the following description is merely illustrative and exemplary in nature and is in no way intended to limit the present invention or its applications or uses. The relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in the embodiments do not limit the scope of the present invention unless otherwise specified. In addition, techniques, methods, and devices known to those skilled in the art may not be discussed in detail, but are intended to be part of this specification where appropriate.
Note that similar reference numerals and letters refer to similar items in the drawings; therefore, once an item is defined in one drawing, it need not be discussed again for subsequent drawings.
As described above, when the face in a face image is accompanied by certain occlusions (for example, occlusions caused by overlap between several people or by attachments), the precision of the features of the whole face will be affected. That is, the precision of the features used for HIR or FID processing will be affected.
Statistics indicate that faces have symmetry. That is, a face is almost bilaterally symmetric. For example, the dotted line 210 schematically shown in Fig. 2 is the symmetry line of the face. In addition, a face generally comprises several components, for example, a facial contour component, eyebrow components, eye components, a nose component, and a mouth component. Statistics also indicate that each component of the face has symmetry. That is, each component of the face is also almost left-right and/or top-bottom symmetric. For example, the dotted lines 220 and 230 schematically shown in Fig. 2 are the symmetry lines of the eye components of the face, and the dotted line 240 schematically shown in Fig. 2 is the symmetry line of the mouth component of the face.
In addition, the face in a face image generally comprises certain regions, for example, eye regions, a nose region, a mouth region, or regions determined based on the feature points of the components of the face. Hereinafter, in the present invention, these regions are regarded as regions of interest (ROIs). As shown in Fig. 3A, for example, the eye regions, the nose region, and the mouth region of the face are used as corresponding ROIs. As shown in Fig. 3B, for example, regions determined based on the feature points of the components of the face are used as corresponding ROIs. The features (for example, feature vectors) used for HIR or FID processing are usually determined from the corresponding ROIs of the face.
The inventors have found that, in one aspect, when the symmetry of the face and/or the symmetry of the components of the face is considered in determining the corresponding ROIs, similar features can be determined from the corresponding ROIs. In other words, when an ROI pair (that is, a region pair) as described in the present invention can be determined from a face image, the feature determined from one ROI of the pair will be similar to the feature determined from the other ROI of the pair, where the ROIs of the pair are mutually symmetric about the symmetry line of the face or the symmetry line of a component of the face. For example, as shown in Fig. 3A, when the symmetry of the face is considered, region 301 and region 302 can be regarded as one ROI pair. As shown in Fig. 3B, when the symmetry of the face is considered, region 310 and region 340 can be regarded as one ROI pair, and region 320 and region 330 can be regarded as another ROI pair. When the symmetry of the eyes is considered, region 310 and region 320 can be regarded as one ROI pair, and region 330 and region 340 can be regarded as another ROI pair.
In another aspect, for any ROI pair, when the symmetry of the pair is also considered in determining the corresponding features from the ROIs, the features determined from the corresponding ROIs of the pair will be almost identical. In other words, for any ROI pair, when the direction used to determine the feature from one ROI and the direction used to determine the feature from the other ROI are mutually symmetric about the symmetry line of the pair, the features determined based on these symmetric directions mirror each other. Therefore, for any ROI pair, the features determined from the corresponding ROIs can be exchanged with each other.
Therefore, for any ROI pair, when one of the ROIs is occluded (that is, is an occlusion region as described in the present invention), instead of determining the feature directly from the occluded ROI, the feature determined from the other ROI of the pair can be used directly as the corresponding feature of the occluded ROI. In other words, even if the face in the face image is accompanied by certain occlusions, the precision of the features of the whole face is barely affected. That is, the precision of the features used for HIR or FID processing is barely affected.
Therefore, according to the present invention, during HIR or FID processing, when the face in a recorded/registered face image or in the input face image is occluded, the precision of the features used for HIR or FID processing can be improved. In other words, when the face in a recorded/registered face image or in the input face image is occluded, the features used for HIR or FID processing will be more accurate, which improves the precision of HIR or FID processing.
(hardware configuration)
First, a hardware configuration in which the techniques described hereinafter can be implemented will be described with reference to Fig. 4.
The hardware configuration 400 includes, for example, a central processing unit (CPU) 410, a random access memory (RAM) 420, a read-only memory (ROM) 430, a hard disk 440, an input device 450, an output device 460, a network interface 470, and a system bus 480. The hardware configuration 400 can be implemented by, for example, a camera, a personal digital assistant (PDA), a mobile phone, a tablet computer, a laptop computer, a desktop computer, or another suitable electronic device.
In a first implementation, the image processing according to the present invention is constituted by hardware or firmware and serves as a module or component of the hardware configuration 400. For example, the image processing apparatus 500 described in detail below with reference to Fig. 5 and the image processing apparatus 1200 described in detail below with reference to Fig. 12 serve as modules or components of the hardware configuration 400. In a second implementation, the image processing according to the present invention is constituted by software stored in the ROM 430 or the hard disk 440 and executed by the CPU 410. For example, the image processing procedure 600 described in detail below with reference to Fig. 6 serves as a program stored in the ROM 430 or the hard disk 440.
The CPU 410 is any suitable programmable control device (for example, a processor) and can execute the various functions described hereinafter by executing various application programs stored in the ROM 430 or the hard disk 440. The RAM 420 temporarily stores programs or data loaded from the ROM 430 or the hard disk 440, and is also used as the space in which the CPU 410 executes various procedures, such as implementing the techniques described in detail below with reference to Fig. 6, as well as other available functions. The hard disk 440 stores various kinds of information, such as an operating system (OS), various applications, control programs, and data pre-stored or pre-defined by the manufacturer.
In one implementation, the input device 450 allows the user to interact with the hardware configuration 400. In one example, the user can input images/videos/data through the input device 450. In another example, the user can trigger the corresponding image processing of the present invention through the input device 450. The input device 450 can take various forms, such as buttons, a keyboard, or a touch screen. In another implementation, the input device 450 receives images/videos output from special electronic devices such as a digital camera, a video camera, and/or a network camera.
In one implementation, the output device 460 displays processing results to the user (for example, face images similar to the input face image). The output device 460 can take various forms, such as a cathode ray tube (CRT) or a liquid crystal display. In another implementation, the output device 460 outputs processing results (for example, the determined features) to subsequent operations such as HIR processing, FID processing, or Human Attribute Recognition (HAR) processing.
The network interface 470 provides an interface for connecting the hardware configuration 400 to a network. For example, the hardware configuration 400 can perform data communication with other electronic devices connected via the network through the network interface 470. Optionally, a wireless interface can be provided for the hardware configuration 400 to perform wireless data communication. The system bus 480 can provide data transmission paths for mutually transmitting data among the CPU 410, the RAM 420, the ROM 430, the hard disk 440, the input device 450, the output device 460, the network interface 470, and so on. Although it is referred to as a bus, the system bus 480 is not limited to any specific data transmission technology.
The above hardware configuration 400 is merely illustrative and is in no way intended to limit the invention, its applications, or uses. Moreover, for the sake of simplicity, only one hardware configuration is shown in Fig. 4; however, multiple hardware configurations can also be used as needed.
(Image processing)
Next, the image processing according to the present invention will be described with reference to Figs. 5 to 14.
Fig. 5 is a block diagram showing the structure of the image processing apparatus 500 according to the first embodiment of the present invention. Some or all of the blocks shown in Fig. 5 can be implemented by dedicated hardware. As shown in Fig. 5, the image processing apparatus 500 includes an image acquisition unit 510, a region pair determination unit 520, and a feature determination unit 530.
First, the input device 450 shown in Fig. 4 receives a face image output from a special electronic device (for example, a camera) or input by the user (for example, the human face image shown in Fig. 1B). Then, the input device 450 transfers the received face image to the image acquisition unit 510 via the system bus 480.
Next, as shown in Fig. 5, the image acquisition unit 510 obtains the face image from the input device 450 through the system bus 480.
The region pair determination unit 520 determines at least one region pair from the face image, where the regions of a region pair are mutually symmetric about one of the symmetry lines in the face, and the regions of a region pair are, for example, ROIs of the face. As described for Fig. 2, the symmetry lines in the face include at least the symmetry line of the face (for example, symmetry line 210) or the symmetry lines of the components of the face (for example, symmetry lines 220/230/240). As described for Figs. 3A and 3B, a region pair is an ROI pair, such as the pair of ROI 310 and ROI 340, the pair of ROI 320 and ROI 330, the pair of ROI 310 and ROI 320, or the pair of ROI 330 and ROI 340.
For any region pair (for example, the pair of ROI 310 and ROI 340 shown in Fig. 3B), the feature determination unit 530 determines a feature from one region of the pair (for example, ROI 310) based on a first direction, and determines a feature from the other region of the pair (for example, ROI 340) based on a second direction.
Here, the first direction and the second direction are mutually symmetric about the symmetry line of the region pair (for example, the symmetry line of the face). In one implementation, the second direction is obtained by reversing the first direction about the symmetry line of the region pair; when the first direction is clockwise, the second direction is counterclockwise. In one implementation, the determined features are orientation-dependent features, for example, Scale Invariant Feature Transform (SIFT) features or Speeded-Up Robust Features (SURF). In another implementation, the determined features are orientation-independent features, for example, Local Binary Patterns (LBP) features.
The flowchart 600 shown in Fig. 6 is the procedure corresponding to the image processing apparatus 500 shown in Fig. 5.
As shown in Fig. 6, in the image acquisition step S610, the image acquisition unit 510 obtains the face image from the input device 450 through the system bus 480.
In the region pair determination step S620, the region pair determination unit 520 determines at least one region pair from the face image. Taking the pair of region 310 and region 340 shown in Fig. 3B as an example, in one implementation, the region pair determination unit 520 determines the corresponding region pair through the following processing.
First, the region pair determination unit 520 detects feature points from the components of the face by using an existing feature point detection method, such as the supervised descent method. For example, the points shown in Fig. 3B indicate the detected feature points.
Second, the region pair determination unit 520 determines a pair of feature points (that is, a feature point pair) that are mutually symmetric about the symmetry line of the region pair. For example, the symmetry line for the pair of region 310 and region 340 is the symmetry line of the face; thus, as shown in Fig. 3B, the pair of feature point 311 and feature point 341 can be determined.
Then, the region pair determination unit 520 determines the corresponding region pair based on the determined feature point pair. In one implementation, one region of the pair is determined by centering it on feature point 311, and the other region of the pair is determined by centering it on feature point 341. Alternatively, the regions of the pair can also be determined based on the feature point pair by using other methods, as long as each region is determined in the same manner so that the regions have the same size and are mutually symmetric.
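As an illustration of this step, the following minimal Python sketch builds a mirrored region pair from a symmetric feature point pair; the coordinates, the square region shape, and the half_size parameter are assumptions for illustration, not taken from the patent:

```python
def region_pair_from_landmarks(p_left, p_right, half_size=16):
    """Build a region pair centered on two feature points that are
    mutually symmetric about the face symmetry line (e.g. points 311
    and 341). Both regions are axis-aligned squares of identical size,
    so the pair is mutually symmetric as step S620 requires."""
    def box(center):
        cx, cy = center
        return (cx - half_size, cy - half_size, cx + half_size, cy + half_size)
    return box(p_left), box(p_right)

# e.g. with feature points from a supervised-descent-method fitter:
left_box, right_box = region_pair_from_landmarks((120, 150), (200, 150))
```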
Returning to Fig. 6, in the feature determination step S630, for any region pair, the feature determination unit 530 determines a feature from one region of the pair based on the first direction, and determines a feature from the other region of the pair based on the second direction. Taking one region pair (for example, the pair of region 310 and region 340 shown in Fig. 3B) as an example, in one implementation, the feature determination unit 530 determines the features with reference to Fig. 7.
As shown in Fig. 7, in step S710, for each pixel in each region, the feature determination unit 530 determines differences based on the gray level of that pixel and the gray levels of its neighboring pixels. In one implementation, the gray level of a pixel is the intensity value of the pixel.
In one implementation, in order to obtain orientation-independent features (for example, LBP features) from a region pair, taking the pair of region 310 and region 340 shown in Fig. 3B as an example, the feature determination unit 530 determines the corresponding differences with reference to Figs. 8A to 8F. For a pixel in region 310 (for example, feature point 311), first, the feature determination unit 530 obtains the gray level of that pixel and the gray levels of its neighboring pixels (for example, 8 neighboring pixels). As shown in Fig. 8A, for example, the value "83" is the gray level of pixel 311, and the other values around "83" are the gray levels of the neighboring pixels of pixel 311. Then, the feature determination unit 530 determines the corresponding differences by comparing the obtained gray levels; in this implementation, each determined difference is a binary value of pixel 311. As shown in Fig. 8A, assuming 8 neighboring pixels are used, when the gray level of pixel 311 (that is, "83") is greater than or equal to the gray level of a neighboring pixel (for example, "43"), the feature determination unit 530 determines the corresponding difference as, for example, the binary value "0"; otherwise, when the gray level of pixel 311 is less than the gray level of a neighboring pixel (for example, "204"), the feature determination unit 530 determines the corresponding difference as, for example, the binary value "1". Fig. 8C shows the corresponding differences determined by the feature determination unit 530 for pixel 311 in region 310. It is apparent to those skilled in the art that the number of neighboring pixels used to determine the differences is not limited to 8. For a pixel in region 340 (for example, feature point 341), in a manner similar to that described with reference to Figs. 8A and 8C, Fig. 8B shows the corresponding gray levels and Fig. 8D shows the corresponding differences determined by the feature determination unit 530.
In another implementation, in order to obtain orientation-dependent features (for example, SIFT features) from a region pair, taking the pair of region 310 and region 340 shown in Fig. 3B as an example, the feature determination unit 530 determines the corresponding differences with reference to Figs. 9A to 9F. For a pixel in region 310 (for example, feature point 311), first, the feature determination unit 530 obtains the gray level of that pixel and the gray levels of its neighboring pixels (for example, 8 neighboring pixels). As shown in Fig. 9A, for example, the value "A" is the gray level of pixel 311, and the other values around "A" are the gray levels of the neighboring pixels of pixel 311. Then, the feature determination unit 530 determines the corresponding differences by comparing the obtained gray levels; in this implementation, the determined differences are a value in the horizontal direction of pixel 311 and a value in the vertical direction of pixel 311, respectively. As shown in Fig. 9A, assuming the neighboring pixels with gray levels ②, ④, ⑤, and ⑦ are used, the feature determination unit 530 determines one difference (for example, the value in the horizontal direction of pixel 311) by, for example, subtracting the value ④ from the value ⑤, and determines the other difference (for example, the value in the vertical direction of pixel 311) by, for example, subtracting the value ② from the value ⑦. For pixel 311 in region 310, Fig. 9C shows the corresponding differences determined by the feature determination unit 530. It is apparent to those skilled in the art that the manner of determining the differences is not limited to the above implementation and can be realized by using other techniques. For a pixel in region 340 (for example, feature point 341), in a manner similar to that described with reference to Figs. 9A and 9C, Fig. 9B shows the corresponding gray levels and Fig. 9D shows the corresponding differences determined by the feature determination unit 530.
Returning to Fig. 7, in step S720, the feature determination unit 530 determines, based on the first direction and the corresponding differences, a first value for each pixel in one region of the region pair, and determines, based on the second direction and the corresponding differences, a second value for each pixel in the other region of the region pair.
In one implementation, in order to obtain orientation-independent features from a region pair, taking pixel 311 in region 310 shown in Fig. 3B as an example, the corresponding differences are shown in Fig. 8C. The feature determination unit 530 determines the corresponding first value of pixel 311 based on the first direction and the corresponding differences shown in Fig. 8C. As shown in Fig. 8E, the curve with the arrow indicates, for example, the first direction (for example, clockwise). Therefore, following the clockwise direction, the feature determination unit 530 determines the first value of pixel 311 based on the binary value sequence obtained by arranging the differences from the upper left to the lower left. As shown in Fig. 8E, the obtained binary value sequence is "00010011", and the decimal value of this sequence (that is, "19") is regarded as the first value of pixel 311. Taking pixel 341 in region 340 shown in Fig. 3B as an example, the corresponding differences are shown in Fig. 8D as described above, and as shown in Fig. 8F, the curve with the arrow indicates, for example, the second direction (for example, counterclockwise). Therefore, in a manner similar to that described with reference to Fig. 8E, the binary value sequence "00010011" is obtained by arranging the differences from the upper right to the lower right, and the decimal value of this sequence ("19") is regarded as the second value of pixel 341. In other words, for orientation-independent features, the first value and the second value are decimal values. In addition, as described above, the gray level is an intensity value; therefore, the first value and the second value are intensity values.
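A hedged Python sketch of this direction-aware LBP computation follows; the exact neighbor layout is an assumed reading of Figs. 8A to 8F (clockwise from the top-left neighbor in one region, counterclockwise from the top-right neighbor in the other), chosen so that mirrored patches yield identical codes:

```python
import numpy as np

def lbp_code(img, cx, cy, clockwise=True):
    """Direction-aware LBP code of the pixel at (cy, cx).

    A neighbor contributes bit "1" when it is brighter than the center
    pixel, else "0" (the difference convention of Fig. 8A). The
    clockwise order reads the 8 neighbors top-left -> top -> ... ->
    left; the counterclockwise (mirrored) order reads top-right ->
    top -> ... -> right, so a horizontally mirrored patch yields the
    same decimal code.
    """
    c = img[cy, cx]
    cw = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    ccw = [(-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1), (0, 1)]
    code = 0
    for dy, dx in (cw if clockwise else ccw):
        code = (code << 1) | int(img[cy + dy, cx + dx] > c)
    return code

patch = np.array([[43, 52, 204], [61, 83, 90], [12, 83, 201]])
# mirrored patch read with the mirrored direction gives the same code
assert lbp_code(patch, 1, 1, True) == lbp_code(patch[:, ::-1], 1, 1, False)
```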
In another implementation, in order to obtain orientation-dependent features from a region pair, taking pixel 311 in region 310 shown in Fig. 3B as an example, the corresponding differences are shown in Fig. 9C. The feature determination unit 530 determines the corresponding first value of pixel 311 based on the first direction and the corresponding differences shown in Fig. 9C. As shown in Fig. 9E, the curve with the arrow indicates, for example, the first direction (for example, clockwise). Therefore, following the clockwise direction, based on the value in the horizontal direction of pixel 311 and the value in the vertical direction of pixel 311, the feature determination unit 530 determines the first value of pixel 311 by using, for example, the following formula:

θ1(x, y) = arctan( Vv(x, y) / Vh(x, y) )

where (x, y) indicates the coordinates of pixel 311, Vh(x, y) is the value in the horizontal direction of pixel 311, and Vv(x, y) is the value in the vertical direction of pixel 311. As shown in Fig. 9E, the first value of pixel 311 is, for example, 45 degrees.
Taking pixel 341 in region 340 shown in Fig. 3B as an example, the corresponding differences are shown in Fig. 9D, and, as shown in Fig. 9F, the curve with the arrow indicates, for example, the second direction (for example, counterclockwise). Therefore, in a manner similar to that described with reference to Fig. 9E, the feature determination unit 530 determines the second value of pixel 341 by using, for example, the following formula:

θ2(x, y) = arctan( Vv(x, y) / Vh′(x, y) )

where Vh′(x, y) is the value in the horizontal direction of pixel 341 taken along the second (mirrored) direction. As shown in Fig. 9F, the second value of pixel 341 is, for example, 45 degrees. In other words, for orientation-dependent features, the first value and the second value are angle values.
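Under the same assumptions, the following sketch computes the first/second value; atan2 replaces the plain arctangent for numerical robustness (an implementation choice, not from the patent), and the neighbor layout is assumed from Fig. 9A:

```python
import math

def orientation_value(img, cx, cy, mirrored=False):
    """First/second value of a pixel: the arctangent of the vertical
    difference over the horizontal difference. For the second
    (mirrored) direction the horizontal difference is negated, so
    mirrored patches yield equal angle values."""
    v_h = float(img[cy][cx + 1]) - float(img[cy][cx - 1])  # right - left
    v_v = float(img[cy + 1][cx]) - float(img[cy - 1][cx])  # below - above
    if mirrored:
        v_h = -v_h
    return math.degrees(math.atan2(v_v, v_h))
```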
Returning to Fig. 7, in step S730, the feature determination unit 530 determines the feature from one region of the region pair based on the occurrence frequencies of the first values, and determines the feature from the other region of the region pair based on the occurrence frequencies of the second values. As described above, the feature determination unit 530 determines the corresponding first value for each pixel in one region of the region pair, and determines the corresponding second value for each pixel in the other region of the region pair.
In one implementation, for obtaining orientation-independent features from a region pair, the first values and second values are decimal values as described above. Therefore, for one region of the region pair, a numerical histogram is determined based on the occurrence frequency of each first value in that region, and this numerical histogram is regarded as the corresponding feature determined from that region. In a similar manner, another numerical histogram determined based on the occurrence frequency of each second value is regarded as the corresponding feature determined from the other region of the region pair.
In another implementation, for obtaining orientation-dependent features from a region pair, the first values and second values are angle values as described above. Therefore, for one region of the region pair, an angular histogram is determined based on the occurrence frequency of each first value in that region, and this angular histogram is regarded as the corresponding feature determined from that region. In a similar manner, another angular histogram determined based on the occurrence frequency of each second value is regarded as the corresponding feature determined from the other region of the region pair.
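A sketch of step S730 for the orientation-independent case, reusing lbp_code() from the sketch above; the 256-bin layout and the normalization are assumptions (the orientation-dependent case would histogram the angle values instead):

```python
import numpy as np

def region_feature(region, clockwise=True, bins=256):
    """Numerical histogram of LBP codes over the interior pixels of a
    region; the occurrence frequencies form the region's feature."""
    h, w = region.shape
    codes = [lbp_code(region, x, y, clockwise)
             for y in range(1, h - 1) for x in range(1, w - 1)]
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / max(len(codes), 1)  # normalized occurrence frequencies
```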
In the processing described with reference to Fig. 7, for a region pair, the features are determined based on every pixel of the regions. Since the size of a region directly affects the precision of the feature, in order to obtain more accurate features from a region pair, for at least one region pair, in another implementation, the feature determination unit 530 determines the features with reference to Fig. 10.
Taking one region pair (for example, the pair of region 301 and region 302 shown in Fig. 3A) as an example, as shown in Fig. 10, in step S1010, the feature determination unit 530 obtains at least one sub-region pair from the region pair by dividing the regions of the pair into sub-regions. The sub-regions of a sub-region pair are mutually symmetric about the symmetry line of the region pair; in other words, the number and size of the sub-regions in each region of the pair are identical. As shown in Fig. 3A, region 301 and region 302 are each divided into four sub-regions, and sub-region 303 and sub-region 304 are regarded as one sub-region pair.
In step S1020, the feature determination unit 530 determines features from the sub-region pairs. For each sub-region pair (for example, the pair of sub-region 303 and sub-region 304) in the region pair (for example, the pair of region 301 and region 302), the feature determination unit 530 determines the corresponding features from the sub-region pair in a manner similar to that described with reference to Figs. 7 to 9.
Then, in step S1030, for each region of the region pair, the feature determination unit 530 links together the corresponding features determined from the sub-regions of that region, so as to construct the corresponding feature of the region.
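A sketch of steps S1010 to S1030, reusing region_feature() above; the 2x2 grid is the Fig. 3A example, and reversing the column order for the mirrored region (so the linked sub-region features stay aligned) is an assumption about the linking order:

```python
import numpy as np

def gridded_feature(region, grid=2, mirrored=False):
    """Divide a region into grid x grid sub-regions, determine one
    feature per sub-region, and link them into the region's feature."""
    h, w = region.shape
    cols = range(grid - 1, -1, -1) if mirrored else range(grid)
    parts = []
    for gy in range(grid):
        for gx in cols:
            sub = region[gy * h // grid:(gy + 1) * h // grid,
                         gx * w // grid:(gx + 1) * w // grid]
            parts.append(region_feature(sub, clockwise=not mirrored))
    return np.concatenate(parts)
```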
In addition, for example, as shown in Fig. 1A, face 110 is occluded by face 120, region 111 and region 112 are a region pair, and region 111 is an occlusion region. As described above, in the present invention, even if the face in the face image is accompanied by certain occlusions, the precision of the features of the whole face is barely affected. Therefore, in other implementations, for at least one region pair, the feature determination unit 530 determines the features with reference to Fig. 11. Preferably, in this implementation, the regions of the region pair are mutually symmetric about the symmetry line of the face.
Taking one region pair (for example, the pair of region 111 and region 112 shown in Fig. 1A) as an example, as shown in Fig. 11, in step S1110, the feature determination unit 530 determines whether one region of the region pair is an occlusion region. When one of the regions of the pair is an occlusion region, the processing proceeds to step S1130; otherwise, the processing proceeds to step S1120.
In one implementation, for each region of the region pair (for example, region 111), the feature determination unit 530 makes the corresponding judgment based on the black pixel density of the region. For example, first, the feature determination unit 530 binarizes the image corresponding to the region (for example, the image in region 111 shown in Fig. 1A) by using an existing binarization algorithm such as the OTSU algorithm or an adaptive thresholding algorithm. Then, the feature determination unit 530 calculates the black pixel density of the region by using the following formula:

α = (number of black pixels in the region) / (total number of pixels in the region)

where α indicates the black pixel density. Finally, the feature determination unit 530 judges whether the black pixel density of the region is greater than a predetermined threshold (for example, TH1). When the black pixel density is greater than TH1, the region is judged to be an occlusion region; otherwise, the region is not an occlusion region (that is, it is a non-occlusion region).
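A sketch of this occlusion test with Otsu binarization follows; the TH1 value is illustrative, and the patent equally allows an adaptive thresholding algorithm:

```python
import numpy as np

def is_occluded(region, th1=0.5):
    """Binarize the region with Otsu's threshold, then compare the
    black pixel density alpha = n_black / n_total against TH1."""
    hist, _ = np.histogram(region, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0        # mean of dark class
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1   # mean of bright class
        var = w0 * w1 * (m0 - m1) ** 2                # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    alpha = (region < best_t).mean()                  # black pixel density
    return alpha > th1
```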
In another implementation, for each region of the region pair (for example, region 111), the feature determination unit 530 makes the corresponding judgment based on an existing tracking method, such as a head tracking method or an Ω-shape tracking method. Taking face 110 and face 120 shown in Fig. 1A as an example, first, the feature determination unit 530 detects the track of face 110 (for example, track 1) and the track of face 120 (for example, track 2) by the head tracking method. Then, the feature determination unit 530 determines the crossover position between track 1 and track 2. Finally, the feature determination unit 530 predicts the region located at the crossover position to be an occlusion region. For the pair of region 111 and region 112 shown in Fig. 1A, region 111 will be determined to be an occlusion region.
Returning to Fig. 11, in step S1120, the feature determination unit 530 determines the features from the region pair (for example, the pair of region 111 and region 112 shown in Fig. 1A) in a manner similar to that described with reference to Figs. 7 to 9. For example, the corresponding feature is determined from region 111 based on the first direction (for example, clockwise), and the corresponding feature is determined from region 112 based on the second direction (for example, counterclockwise).
In step S1130, when region 111 is the occlusion region, the feature determination unit 530 determines the feature from the other region of the pair (that is, region 112) based on the second direction (for example, counterclockwise). Conversely, when region 111 is not the occlusion region (that is, region 112 is occluded), the feature determination unit 530 correspondingly determines the feature from region 111 based on the first direction.
Then, in step S1140, the feature determination unit 530 regards the feature determined from region 112 as the corresponding feature of the occlusion region (that is, region 111). For example, a copy of the feature determined from region 112 is regarded as the corresponding feature of the occlusion region.
In addition, as an optional solution, taking one region pair as an example, after the corresponding features (for example, feature vector 1 and feature vector 2) are determined from the regions of the pair, the feature determination unit 530 further links feature vector 1 and feature vector 2 in the following manner to obtain the corresponding feature of the region pair. In one example, feature vector 1 and feature vector 2 are linked in a serial manner; that is, the length of the linked feature is the sum of the length of feature vector 1 and the length of feature vector 2. In another example, in order to reduce the length of the linked feature, feature vector 1 and feature vector 2 are linked in a parallel manner; that is, the length of the linked feature is equal to the length of either of feature vector 1 and feature vector 2. In another example, in order to reduce the size of the linked feature, feature vector 1 and feature vector 2 are linked by using an existing machine learning method such as principal component analysis (PCA).
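A sketch of the first two linking options; the element-wise mean is one plausible reading of "parallel" linking, which the patent does not pin down, and a learned projection such as PCA could shrink the result further:

```python
import numpy as np

def link_features(f1, f2, mode="serial"):
    """Link the two features of a region pair. 'serial' concatenates
    (length = len(f1) + len(f2)); 'parallel' keeps the length of
    either input."""
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    if mode == "serial":
        return np.concatenate([f1, f2])
    if mode == "parallel":
        return (f1 + f2) / 2.0  # assumed element-wise combination
    raise ValueError("unknown mode: " + mode)
```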
As described above, in the present invention, in one aspect, the symmetry of the face and/or the symmetry of the components of the face is considered when determining the corresponding region pairs; in other words, region pairs are determined symmetrically. In another aspect, the symmetry of the region pair is also considered when determining the features from the corresponding region pair; in other words, features are determined symmetrically from the region pairs. Therefore, when the face in a recorded/registered face image or in the input face image is occluded, the features determined according to the present invention will be more accurate, which improves the precision of HIR or FID processing.
As described above, features are determined symmetrically from symmetrically determined region pairs. Therefore, for any region pair, the feature determined from one region of the pair and the feature determined from the other region of the pair are almost identical. Accordingly, as an exemplary application of the image processing described above with reference to Figs. 5 to 11, another image processing apparatus 1200 will next be described with reference to Fig. 12.
Fig. 12 is a block diagram showing the structure of the image processing apparatus 1200 according to the second embodiment of the present invention. Some or all of the blocks shown in Fig. 12 can be implemented by dedicated hardware. As shown in Fig. 12, the image processing apparatus 1200 includes the image processing apparatus 500 and an occlusion judgment unit 1210.
First, for the input face image, the image processing apparatus 500 determines the corresponding features from each region pair with reference to Figs. 5 to 11.
Then, for at least one region pair, the occlusion judgment unit 1210 judges whether there is occlusion between the regions of the corresponding region pair. More specifically, taking one region pair as an example, the occlusion judgment unit 1210 first calculates a similarity measure between the features determined from the regions of the pair. The similarity measure is calculated as, for example, the cosine distance, the Euclidean distance, or the Mahalanobis distance. Then, when the similarity measure is greater than or equal to a predefined threshold (for example, TH2), the occlusion judgment unit 1210 judges that there is no occlusion between the regions; otherwise, the occlusion judgment unit 1210 judges that there is occlusion between the regions.
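A sketch of this decision using cosine similarity, one of the measures the patent lists; the TH2 value is illustrative:

```python
import numpy as np

def pair_has_occlusion(feat_a, feat_b, th2=0.8):
    """Symmetrically determined features of an unoccluded region pair
    should nearly match, so low similarity signals occlusion."""
    a, b = np.asarray(feat_a, float), np.asarray(feat_b, float)
    cos = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return cos < th2
```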
In other words, the features determined according to the image processing described with reference to Figs. 5 to 11 can be applied to judge whether the face in a face image is accompanied by certain occlusions. As shown in Fig. 1A, for the pair of region 121 and region 122 in face 120, according to the image processing described with reference to Figs. 5 to 11, the feature determined from region 121 and the feature determined from region 122 are almost identical; that is, the similarity measure between the determined features is greater than or equal to TH2. Therefore, it will be judged that there is no occlusion between region 121 and region 122, and when it is judged that there is no occlusion between any of the region pairs in face 120, it will be judged that face 120 is not accompanied by any occlusion. Also, as shown in Fig. 1A, for the pair of region 111 and region 112 in face 110, according to the image processing described with reference to Figs. 5 to 11, the feature determined from region 111 and the feature determined from region 112 are not similar to each other; that is, the similarity measure between the determined features is less than TH2. Therefore, it will be judged that there is occlusion between region 111 and region 112, and hence that face 110 is accompanied by certain occlusions.
As described above, during video surveillance, HIR techniques are commonly used to retrieve face images related to the input face image. As an exemplary application of the image processing described above with reference to Figs. 5 to 11, an exemplary image processing system 1300 (for example, an HIR system) will next be described with reference to Fig. 13.
As shown in Fig. 13, the image processing system 1300 includes a retrieval client 1310 and a retrieval server 1320. The user can interact with the image processing system 1300 through the retrieval client 1310. The retrieval server 1320 includes an index server 1321 and a processor 1322, and a predetermined index is stored in the index server 1321. Optionally, the index server 1321 can be replaced by an external server; that is, the predetermined index can also be stored in an external server instead of in an internal server of the retrieval server 1320.
In one implementation, the retrieval client 1310 and the retrieval server 1320 are connected to each other via a system bus. In another implementation, the retrieval client 1310 and the retrieval server 1320 are connected to each other via a network. The retrieval client 1310 and the retrieval server 1320 can be realized via the same electronic device (for example, a computer, a PDA, a mobile phone, or a camera). Optionally, the retrieval client 1310 and the retrieval server 1320 can also be realized via different electronic devices.
As shown in Fig. 13, first, the retrieval client 1310 obtains a face image input by the user (that is, the input face image). Next, the retrieval client 1310 determines the corresponding features from each region pair in the input face image with reference to Figs. 5 to 11. Then, the retrieval client 1310 obtains the feature of the input face image based on the features determined from the region pairs; for example, the feature of the input face image is obtained by linking the features determined from the region pairs. Optionally, the feature of the input face image is obtained by the retrieval server 1320 instead of by the retrieval client 1310; in that case, the retrieval client 1310 only receives the face image input by the user and transfers the input face image to the retrieval server 1320.
Then, the retrieval server 1320 (in particular, the processor 1322) obtains feature candidates from the predetermined index stored in the index server 1321. A feature candidate is determined based on the features determined from one sample face image (for example, a registered face image) with reference to Figs. 5 to 11; for example, a feature candidate is determined by linking the features determined from the corresponding sample face image. In one implementation, the predetermined index includes the feature candidates and hyperlinks, where a hyperlink corresponds to the sample face image from which the corresponding feature candidate was determined. In another implementation, the predetermined index includes the feature candidates and the corresponding sample face images.
After obtaining the corresponding feature candidates, for each sample face image corresponding to an obtained feature candidate, the retrieval server 1320 (in particular, the processor 1322) determines whether the sample face image is similar to the input face image. In one implementation, first, the retrieval server 1320 calculates a similarity measure between the obtained feature of the input face image and the obtained feature candidate corresponding to the sample face image. The similarity measure is calculated as, for example, the cosine distance, the Euclidean distance, or the Mahalanobis distance. Then, when the similarity measure is greater than or equal to a predefined threshold (for example, TH3), the retrieval server 1320 determines that the sample face image is similar to the input face image; otherwise, the retrieval server 1320 determines that the sample face image is not similar to the input face image. Optionally, the similarity judgment is performed by the retrieval client 1310 instead of by the retrieval server 1320; in that case, the retrieval server 1320 only obtains the feature candidates and transfers the corresponding feature candidates to the retrieval client 1310.
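A sketch of this retrieval loop; the cosine similarity, the in-memory dict standing in for the index, and all names are illustrative assumptions:

```python
import numpy as np

def retrieve_top_n(query_feat, index, n=5, th3=0.8):
    """Rank feature candidates by similarity to the query feature and
    return the ids of the top N sample face images above TH3."""
    q = np.asarray(query_feat, float)
    q = q / (np.linalg.norm(q) + 1e-12)
    scored = []
    for image_id, feat in index.items():
        f = np.asarray(feat, float)
        sim = float(q.dot(f / (np.linalg.norm(f) + 1e-12)))
        if sim >= th3:
            scored.append((sim, image_id))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [image_id for _, image_id in scored[:n]]
```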
In addition, as an exemplary application, the top N sample face images determined to be similar to the input face image are output to the user, where N is an integer greater than or equal to 1.
As described above, during identification processing, FID techniques are usually used to identify whether a person's face image matches the registered face image of one of the registered people. As an exemplary application of the image processing described above with reference to Figs. 5 to 11, example image processing systems (for example, a gate control system and a payment system) will next be described with reference to Fig. 14.
As shown in Fig. 14, the image processing system 1400 includes a payment client 1410 (or a gate control client 1420) and an identification server 1430. The identification server 1430 includes a server 1431 and a processor 1432. When the image processing system 1400 is used as a payment system, the server 1431 stores the face images of the registered people; for example, each registered person corresponds to one identification (ID) number. When the image processing system 1400 is used as a gate control system, the server 1431 stores a predetermined index, which is similar to the predetermined index stored in the index server 1321. Optionally, the server 1431 can also be replaced by an external server; that is, the face images of the registered people or the predetermined index can also be stored in an external server instead of in an internal server of the identification server 1430.
In one implementation, the payment client 1410, the gate control client 1420, and the identification server 1430 are connected to each other via a system bus. In another implementation, they are connected to each other via a network. The payment client 1410, the gate control client 1420, and the identification server 1430 can be realized via the same electronic device (for example, a computer, a PDA, a mobile phone, or a camera). Optionally, they can also be realized via different electronic devices.
As shown in Fig. 14, when the image processing system 1400 is used as a payment system, in one aspect, the payment client 1410 obtains the face image input by a registered person (that is, the input face image). Then, the payment client 1410 obtains the feature of the input face image by linking the features determined, with reference to Figs. 5 to 11, from the region pairs in the input face image. Optionally, the feature of the input image is obtained by the identification server 1430 instead of by the payment client 1410; in that case, the payment client 1410 only receives the face image input by the registered person and transfers the input face image to the identification server 1430. In another aspect, the payment client 1410 obtains the ID number input by the registered person and transfers the ID number to the identification server 1430.
After obtaining the ID number, the identification server 1430 (in particular, the processor 1432) obtains the registered face image of the registered person from the image processing system 1400 (in particular, from the server 1431). For example, the identification server 1430 obtains the corresponding registered face image from the server 1431 by comparing the obtained ID number with the ID numbers corresponding to the registered face images. Then, the identification server 1430 (in particular, the processor 1432) obtains the feature of the registered face image of the registered person by linking the features determined, with reference to Figs. 5 to 11, from the region pairs in that registered face image.
After obtaining the feature of the input face image and the feature of the registered face image of the registered person, the identification server 1430 (in particular, the processor 1432) identifies whether the face in the input face image belongs to the registered person's face in the registered face image. In one implementation, first, the identification server 1430 calculates a similarity measure between the feature of the input face image and the feature of the registered face image of the registered person. The similarity measure is calculated as, for example, the cosine distance, the Euclidean distance, or the Mahalanobis distance. Then, when the similarity measure is greater than or equal to a predefined threshold (for example, TH4), the identification server 1430 determines that the face in the input face image belongs to the registered person's face in the registered face image; otherwise, the identification server 1430 determines that it does not. Optionally, the similarity judgment is performed by the payment client 1410 instead of by the identification server 1430; in that case, the identification server 1430 only obtains the feature of the registered face image of the registered person and transfers the corresponding feature to the payment client 1410.
In addition, as an exemplary application, in a case where the face in the input face image is determined to belong to the registered person in the registered face image, the registered person can perform a payment activity via the image processing system 1400.
In addition, as shown in Fig. 14, in a case where the image processing system 1400 is used as a gate control system, the processing executed by the image processing system 1400 is similar to the processing executed by the image processing system 1300, so the detailed description is not repeated here. The only difference is that the gate is opened for the user only when the single sample face image most similar to the face image captured by the gate control client 1420 (that is, the input face image) can be determined by the image processing system 1400.
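A minimal sketch of this gate-control decision, under the assumption that "can be determined" means the best-matching sample passes a similarity threshold; the gallery structure, the threshold value, and the helper below are illustrative, not from the patent:

```python
import numpy as np

TH4 = 0.8  # hypothetical threshold, as in the previous sketch

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def open_gate(input_feature: np.ndarray,
              gallery: dict[str, np.ndarray]) -> bool:
    """Open the gate only if the most similar sample face image in the
    gallery can be determined, i.e. its similarity reaches the threshold."""
    if not gallery:
        return False
    best_id = max(gallery,
                  key=lambda pid: cosine_similarity(input_feature, gallery[pid]))
    return cosine_similarity(input_feature, gallery[best_id]) >= TH4
```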
As described above, the features determined according to the present invention are more accurate. Therefore, the precision of the HIR processing and the FID processing will also be improved.
In addition, as another application of the image processing described above with reference to Fig. 5 to Fig. 11, the features determined with reference to Fig. 5 to Fig. 11 can be used for HAR processing. More specifically, first, the corresponding features are determined from the input face image with reference to Fig. 5 to Fig. 11. Then, the corresponding attributes of the person in the input face image are determined based on a classifier and the determined features. Here, the attributes of a person include, for example, the person's age group (for example, elderly, adult, child), the person's race (for example, Caucasian, Asian, African), and the person's gender (for example, male, female).
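The patent only states that a classifier maps the determined features to attributes; as an illustrative sketch, assuming a linear SVM and placeholder training data (neither is from the patent), gender prediction might look like this:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical training set: region-pair feature vectors with gender labels.
train_features = np.random.rand(100, 512)                 # placeholder features
train_labels = np.random.choice(["male", "female"], size=100)

# Fit a linear SVM on the (feature, attribute) pairs.
gender_classifier = LinearSVC().fit(train_features, train_labels)

def predict_gender(feature: np.ndarray) -> str:
    """Predict the gender attribute from one determined feature vector."""
    return str(gender_classifier.predict(feature.reshape(1, -1))[0])
```

Analogous classifiers could be trained for the age-group and race attributes on the same feature vectors.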
All of the units described above are exemplary and/or preferred modules for implementing the processing described in this disclosure. These units can be hardware units (for example, field-programmable gate arrays (FPGAs), digital signal processors, application-specific integrated circuits, etc.) and/or software modules (for example, computer-readable programs). The units for implementing each step have not been described exhaustively above. However, where there is a step that performs certain processing, there may be a corresponding functional module or unit (implemented by hardware and/or software) for implementing the same processing. Technical solutions formed by all combinations of the described steps and the units corresponding to those steps are included in the disclosure of this application, as long as the technical solutions they constitute are complete and applicable.
The methods and apparatus of the present invention can be implemented in many ways. For example, they can be implemented by software, hardware, firmware, or any combination thereof. The order of the steps of the method described above is intended to be illustrative only, and the steps of the method of the present invention are not limited to the order specifically described above unless otherwise expressly stated. Furthermore, in some embodiments, the present invention can also be implemented as a program recorded in a recording medium, comprising machine-readable instructions for implementing the method according to the present invention. Accordingly, the present invention also covers a recording medium storing a program for implementing the method according to the present invention.
Although some specific embodiments of the present invention have been shown in detail by way of example, those skilled in the art should understand that the above examples are intended to be illustrative only and do not limit the scope of the invention. Those skilled in the art should understand that the above embodiments can be modified without departing from the scope and spirit of the present invention. The scope of the present invention is defined by the appended claims.

Claims (15)

1. An image processing apparatus, comprising:
an image obtaining unit configured to obtain a face image;
a region determining unit configured to determine at least one region pair from the face image, wherein the regions of a region pair are symmetrical to each other about one of the symmetry lines in the face; and
a feature determining unit configured to determine a feature from one of the regions of the region pair based on a first direction and to determine a feature from the other of the regions of the region pair based on a second direction, wherein the first direction and the second direction are symmetrical to each other about the symmetry line of the region pair.
2. The apparatus according to claim 1, wherein the symmetry lines in the face include at least one of the following: the symmetry line of the face, and the symmetry lines of the facial components.
3. The apparatus according to claim 2, wherein the region determining unit determines the region pair based on feature point pairs determined from the components of the face, wherein the feature points of a feature point pair are symmetrical to each other about the symmetry line of the region pair.
4. The apparatus according to claim 1, wherein, for any one of the region pairs, the feature determining unit determines the feature based on the occurrence frequencies of first values of the pixels and the occurrence frequencies of second values of the pixels;
wherein, for any one of the pixels in one of the regions, the corresponding first value is determined based on the first direction and differences, the differences being determined based on the gray level of that pixel and the gray levels of its neighbouring pixels;
wherein, for any one of the pixels in the other of the regions, the corresponding second value is determined based on the second direction and differences, the differences being determined based on the gray level of that pixel and the gray levels of its neighbouring pixels.
5. The apparatus according to claim 4, wherein the first values and the second values are intensity values of the pixels or angle values of the pixels.
6. The apparatus according to claim 4, wherein the feature determining unit further obtains at least one sub-region pair from at least one of the region pairs, wherein the sub-regions of a sub-region pair are symmetrical to each other about the symmetry line of the corresponding region pair.
7. The apparatus according to claim 1, wherein, for at least one of the region pairs, in a case where an occluded region is determined from the region pair, the feature determining unit determines a feature from the other region of the region pair based on the first direction or the second direction, and regards the determined feature as the feature corresponding to the occluded region.
8. The apparatus according to claim 1, wherein the first direction is clockwise and the second direction is counterclockwise.
9. The apparatus according to claim 1, wherein the apparatus further comprises:
an occlusion judging unit configured to, for at least one of the region pairs, judge whether there is occlusion between the regions based on a similarity measure between the features determined from the regions.
10. An image processing method, comprising:
an image obtaining step of obtaining a face image;
a region determining step of determining at least one region pair from the face image, wherein the regions of a region pair are symmetrical to each other about one of the symmetry lines in the face; and
a feature determining step of determining a feature from one of the regions of the region pair based on a first direction and determining a feature from the other of the regions of the region pair based on a second direction, wherein the first direction and the second direction are symmetrical to each other about the symmetry line of the region pair.
11. The method according to claim 10, wherein, for any one of the region pairs, in the feature determining step, the feature is determined based on the occurrence frequencies of first values of the pixels and the occurrence frequencies of second values of the pixels;
wherein, for any one of the pixels in one of the regions, the corresponding first value is determined based on the first direction and differences, the differences being determined based on the gray level of that pixel and the gray levels of its neighbouring pixels;
wherein, for any one of the pixels in the other of the regions, the corresponding second value is determined based on the second direction and differences, the differences being determined based on the gray level of that pixel and the gray levels of its neighbouring pixels.
12. The method according to claim 10, wherein, for at least one of the region pairs, in a case where an occluded region is determined from the region pair, in the feature determining step, a feature is determined from the other region of the region pair based on the first direction or the second direction, and the feature is regarded as the feature corresponding to the occluded region.
13. The method according to claim 10, wherein the method further comprises:
an occlusion judging step of, for at least one of the region pairs, judging whether there is occlusion between the regions based on a similarity measure between the features determined from the regions.
14. An image processing system, comprising:
a first image processing apparatus configured to obtain a feature, the feature being determined from an input face image according to any one of claims 1 to 8;
a second image processing apparatus configured to obtain at least one feature candidate from a predetermined index including at least feature candidates, wherein each of the feature candidates is determined from one of the sample face images according to any one of claims 1 to 8; and
a third image processing apparatus configured to, for at least one of the sample face images, determine whether the sample face image is similar to the input face image based on a similarity measure between the obtained feature and the obtained feature candidate corresponding to the sample face image.
15. An image processing system, comprising:
a first image processing apparatus configured to obtain a face image of a person from the image processing system;
a second image processing apparatus configured to obtain a first feature, the first feature being determined from the obtained face image according to any one of claims 1 to 8, and to obtain a second feature, the second feature being determined from an input face image according to any one of claims 1 to 8; and
a third image processing apparatus configured to identify whether the face in the input face image belongs to the person in the obtained face image based on a similarity measure between the first feature and the second feature.
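The occlusion judgement recited in claims 9 and 13 compares the features determined from the two regions of a pair. As a minimal sketch, assuming cosine similarity as the measure, a hypothetical threshold, and the interpretation that a low similarity between the mirrored-direction features signals occlusion:

```python
import numpy as np

TH_OCCLUSION = 0.6  # hypothetical threshold; not specified in the claims

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_occluded(feature_left: np.ndarray, feature_right: np.ndarray) -> bool:
    # For an unoccluded symmetric pair the mirrored-direction features should
    # be close; a low similarity suggests that one region is occluded.
    return cosine_similarity(feature_left, feature_right) < TH_OCCLUSION
```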
CN201710051462.XA 2017-01-20 2017-01-20 Image processing apparatus and method, and image processing system Active CN108334804B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710051462.XA CN108334804B (en) 2017-01-20 2017-01-20 Image processing apparatus and method, and image processing system

Publications (2)

Publication Number Publication Date
CN108334804A true CN108334804A (en) 2018-07-27
CN108334804B CN108334804B (en) 2023-10-31

Family

ID=62922350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710051462.XA Active CN108334804B (en) 2017-01-20 2017-01-20 Image processing apparatus and method, and image processing system

Country Status (1)

Country Link
CN (1) CN108334804B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110150349A1 (en) * 2009-12-17 2011-06-23 Sanyo Electric Co., Ltd. Image processing apparatus and image sensing apparatus
CN103678315A (en) * 2012-08-31 2014-03-26 富士通株式会社 Image processing device, image processing method and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xia Liang et al.: "A face detection method based on the gray-level distribution of the face core region" (in Chinese), 《计算机时代》 (Computer Era) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109961436A (en) * 2019-04-04 2019-07-02 北京大学口腔医学院 A kind of median plane construction method based on artificial nerve network model
CN109961436B (en) * 2019-04-04 2021-05-18 北京大学口腔医学院 Median sagittal plane construction method based on artificial neural network model

Also Published As

Publication number Publication date
CN108334804B (en) 2023-10-31

Similar Documents

Publication Publication Date Title
US10726244B2 (en) Method and apparatus detecting a target
US20220044040A1 (en) Liveness test method and apparatus
JP7132387B2 (en) Image processing device, image processing method and program
US10318797B2 (en) Image processing apparatus and image processing method
US8750573B2 (en) Hand gesture detection
US8792722B2 (en) Hand gesture detection
US9286537B2 (en) System and method for classifying a skin infection
CN112052186B (en) Target detection method, device, equipment and storage medium
US20200279124A1 (en) Detection Apparatus and Method and Image Processing Apparatus and System
Galdi et al. FIRE: Fast Iris REcognition on mobile phones by combining colour and texture features
US20200012887A1 (en) Attribute recognition apparatus and method, and storage medium
CN103632379A (en) Object detection apparatus and control method thereof
WO2021120961A1 (en) Brain addiction structure map evaluation method and apparatus
CN106471440A (en) Eye tracking based on efficient forest sensing
US9292752B2 (en) Image processing device and image processing method
KR20190018274A (en) Method and apparatus for recognizing a subject existed in an image based on temporal movement or spatial movement of a feature point of the image
CN110633723B (en) Image processing apparatus and method, and storage medium
CN108334804A (en) Image processing apparatus and method and image processing system
US20200167587A1 (en) Detection apparatus and method and image processing apparatus and system, and storage medium
Ng et al. Development of vision based multiview gait recognition system with MMUGait database
Luna et al. People re-identification using depth and intensity information from an overhead camera
Mursalin et al. EpNet: A deep neural network for ear detection in 3D point clouds
CN110390234B (en) Image processing apparatus and method, and storage medium
CN108133221B (en) Object shape detection device, image processing device, object shape detection method, and monitoring system
Sehgal Palm recognition using LBP and SVM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant