CN105303150A - Method and system for implementing image processing - Google Patents

Method and system for implementing image processing

Info

Publication number
CN105303150A
CN105303150A · Application CN201410299852.5A · Granted as CN105303150B
Authority
CN
China
Prior art keywords
target image
image
human face
face region
effective
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410299852.5A
Other languages
Chinese (zh)
Other versions
CN105303150B (en)
Inventor
李季檩
陈志博
邬瑞奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Tencent Cloud Computing Beijing Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201410299852.5A priority Critical patent/CN105303150B/en
Publication of CN105303150A publication Critical patent/CN105303150A/en
Application granted granted Critical
Publication of CN105303150B publication Critical patent/CN105303150B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention provides a method and a system for implementing image processing. The method comprises: acquiring an effective target image having a face region; extracting a face feature from the effective target image; matching the face feature of the effective target image against a preset face feature of a reference image; and associating the matched effective target image with the image information of the reference image. The system comprises: an effective image acquisition apparatus for acquiring the effective target image having the face region; an extraction apparatus for extracting the face feature of the effective target image; a matching apparatus for matching the face feature of the effective target image against the preset face feature of the reference image; and an association apparatus for associating the matched effective target image with the image information of the reference image. With this method and system, the corresponding image information can be associated with an image quickly and accurately as soon as the image arrives, so that image processing no longer needs to rely on manual operations.

Description

Method and system for implementing image processing
Technical field
The present invention relates to the field of computer technology, and in particular to a method and a system for implementing image processing.
Background technology
With the development of computing, more and more applications need to obtain large numbers of images containing face regions, associate each such image with certain image information, and build various Internet applications on top of these associations.
In a traditional image processing flow, however, each image is recognized by eye and the image information corresponding to it is selected manually before the two are associated. A traditional flow therefore cannot associate a newly acquired face image with its image information quickly and accurately the moment the image arrives, and image processing remains limited by its dependence on manual operation.
Summary of the invention
Accordingly, it is necessary to provide a method for implementing image processing that can quickly and accurately associate the corresponding image information with an image as soon as it arrives, so that image processing does not need to depend on manual operation.
In addition, it is necessary to provide a system for implementing image processing with the same capability: quickly and accurately associating the corresponding image information with an image as soon as it arrives, without depending on manual operation.
A method for implementing image processing comprises the following steps:
acquiring an effective target image having a face region;
extracting a face feature from the effective target image;
matching the face feature of the effective target image against a preset face feature of a reference image;
associating the matched effective target image with the image information of the reference image.
A system for implementing image processing comprises:
an effective image acquisition apparatus for acquiring an effective target image having a face region;
an extraction apparatus for extracting a face feature from the effective target image;
a matching apparatus for matching the face feature of the effective target image against a preset face feature of a reference image;
an association apparatus for associating the matched effective target image with the image information of the reference image.
The above method and system acquire an effective target image having a face region, extract a face feature from it, match that feature against the preset face feature of a reference image, and associate the matched effective target image with the image information of the reference image. The association between an effective target image and its image information is thus established without human involvement: by automatically matching the face in the effective target image, image processing is carried out quickly and accurately the moment the image arrives, with no dependence on manual operation.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware running environment involved in an embodiment of the present invention;
Fig. 2 is a flowchart of a method for implementing image processing in an embodiment;
Fig. 3 is a flowchart of acquiring an effective target image having a face region in Fig. 2;
Fig. 4 is a flowchart of performing face detection on a target image to obtain a target image having a face region in Fig. 3;
Fig. 5 is a schematic diagram of a training image in an embodiment;
Fig. 6 is a schematic diagram of the facial contour annotation criterion in Fig. 4;
Fig. 7 is a schematic diagram of the left eyebrow annotation criterion in Fig. 4;
Fig. 8 is a schematic diagram of the right eyebrow annotation criterion in Fig. 4;
Fig. 9 is a schematic diagram of the left eye annotation criterion in Fig. 4;
Fig. 10 is a schematic diagram of the right eye annotation criterion in Fig. 4;
Fig. 11 is a schematic diagram of the nose annotation criterion in Fig. 4;
Fig. 12 is a schematic diagram of the mouth annotation criterion in Fig. 4;
Fig. 13 is a flowchart of matching the face feature of the effective target image against the preset face feature of the reference image in Fig. 2;
Fig. 14 is a flowchart of clustering the unmatched effective target images in Fig. 13 to obtain a cluster result set;
Fig. 15 is a flowchart of recommending effective target images according to the associated image information in an embodiment;
Fig. 16 is a schematic structural diagram of the computer system on which the method for implementing image processing runs in an embodiment;
Fig. 17 is a schematic structural diagram of a system for implementing image processing in an embodiment;
Fig. 18 is a schematic structural diagram of the effective image acquisition apparatus in Fig. 17;
Fig. 19 is a schematic structural diagram of the face detection module in Fig. 18;
Fig. 20 is a schematic structural diagram of a system for implementing image processing in another embodiment;
Fig. 21 is a schematic structural diagram of the cluster computation apparatus in Fig. 20;
Fig. 22 is a schematic structural diagram of the recommendation apparatus in an embodiment.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and are not intended to limit it.
As shown in Fig. 1, Fig. 1 is a schematic diagram of a server architecture provided by an embodiment of the present invention. The server 100 may vary considerably with configuration and performance, and may comprise one or more central processing units (CPUs) 122 (for example, one or more processors), memory 132, and one or more storage media 130 (for example, one or more mass storage devices) storing application programs 142 or data 144. The memory 132 and the storage media 130 may provide transient or persistent storage. A program stored on the storage medium 130 may comprise one or more modules (not shown in the figure), each of which may comprise a series of instruction operations for the server. Further, the central processing unit 122 may be configured to communicate with the storage medium 130, so that the server 100 executes the series of instruction operations on the storage medium 130. The server 100 may also comprise one or more power supplies 126, one or more wired or wireless network interfaces 150, one or more input/output interfaces 158, and/or one or more operating systems 141 such as Windows Server™, Mac OS X™, Unix™, Linux™ or FreeBSD™.
A person of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be carried out by hardware, or by a program instructing the relevant hardware; the program may be stored on a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disc or the like.
In one embodiment, as shown in Fig. 2, a method for implementing image processing comprises the following steps:
Step 210: acquire an effective target image having a face region.
In this embodiment, an effective target image is an image that has a face region and sufficiently high image quality, for example an image that has a face region and is free of problems such as blurring.
Step 230: extract the face feature of the effective target image.
In this embodiment, feature extraction is performed on the face region contained in the effective target image to obtain the feature corresponding to that region. The extracted face feature may be a Gabor feature, or a feature of some other form; no limitation is imposed here.
Further, when the face feature extracted from the effective target image is a Gabor feature, the effective target image is first normalized in scale and illumination, and the normalized image is then convolved with a bank of Gabor filters at multiple scales and orientations to obtain feature coefficients.
The obtained feature coefficients are then reduced in dimensionality; for example, principal component analysis (PCA) may be used to reduce the high-dimensional coefficients to 6400 dimensions, yielding the face feature corresponding to the effective target image.
In actual operation, the effective target image is resized to a uniform width and height of 80 × 80, its pixel mean is normalized to 0 and its pixel variance to 1, and the normalized image is convolved with Gabor filters at 5 scales and 8 orientations, producing 80 × 80 × 5 × 8 feature coefficients.
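As a rough illustration, the normalization and filter-bank convolution described above can be sketched as follows. The kernel size, wavelengths and the FFT-based (circular) convolution are illustrative assumptions of this sketch, not values given in the patent:

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam):
    # real part of a Gabor filter: Gaussian envelope times a cosine carrier
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def normalize_face(img, size=80):
    # crude nearest-neighbour resize to size x size, then zero-mean / unit-variance
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    out = img[np.ix_(rows, cols)].astype(float)
    out -= out.mean()
    std = out.std()
    return out / std if std > 0 else out

def gabor_features(img, scales=(4, 6, 8, 10, 12), n_orient=8):
    # convolve the normalized 80x80 face with 5 scales x 8 orientations,
    # giving 80*80*5*8 coefficients as in the text above
    face = normalize_face(img)
    F = np.fft.fft2(face)
    feats = []
    for lam in scales:
        for k in range(n_orient):
            kern = gabor_kernel(21, lam / 2, np.pi * k / n_orient, lam)
            K = np.fft.fft2(kern, s=face.shape)
            feats.append(np.real(np.fft.ifft2(F * K)).ravel())
    return np.concatenate(feats)
```

The resulting 256,000-dimensional vector would then be reduced by PCA as described.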
Step 250: match the face feature of the effective target image against the preset face feature of a reference image.
In this embodiment, reference images annotated with their respective image information are stored in advance. The annotation may have been produced manually, or obtained beforehand in the manner of the present invention; it provides a known face for the face in the effective target image. The image information includes data such as a user ID and a user nickname, so the user to whom a reference image belongs can be determined from its annotation.
The annotated reference images are retrieved and matched against the effective target image, judging from the face features whether a reference image is similar to the effective target image; if no reference image is similar, the effective target image fails to obtain corresponding image information.
Specifically, the known faces in the retrieved reference images are matched against the face in the effective target image to find the known face that is similar to it.
Step 270: associate the matched effective target image with the image information of the reference image.
In this embodiment, the image information is obtained from the reference image that matches the effective target image, and the effective target image is then associated with that image information.
In this way, batches of effective target images are accurately associated with their corresponding image information through feature extraction and matching, which greatly improves processing efficiency and achieves fast, accurate batch image processing.
As shown in Fig. 3, in one embodiment, step 210 comprises:
Step 211: perform face detection on a target image to obtain a target image having a face region.
In this embodiment, the target image is input in some manner; for example, it may be uploaded by a user through a page to a back-end server, which then acquires the uploaded target image.
Specifically, several servers may be built into a distributed computing cluster so that multiple target images are dispatched to different servers and processed simultaneously, improving response speed. In one actual deployment the cluster comprises 100 servers and processes hundreds of target images at the same time.
Since a target image may contain one or more faces, or possibly none, face detection is performed on it to determine whether it is a target image having a face region.
Step 213: filter the target images having face regions to obtain effective target images.
In this embodiment, target images of poor quality suffer from problems such as blurring, so the target images with face regions produced by face detection need to be filtered; those that pass the filter are the effective target images.
In one embodiment, step 213 proceeds as follows: obtain the parameters corresponding to the face region in a target image having a face region, and judge from those parameters whether the face region is blurred; if so, reject the target image; if not, take it as an effective target image.
The target images that remain among the target images having face regions are the effective face images.
In this embodiment, the parameters used to judge whether a face region is blurred may be sharpness and/or face size. Specifically, sharpness can be computed from the face region in the target image as the ratio between the mean absolute gradient of the face region and its tonal range. Face size is the area of the face region produced by face detection, i.e. its total number of pixels.
For the sharpness of a face region, it is judged whether the sharpness is below a set sharpness threshold; if so, the face region is too blurred and the image is filtered out; if not, the image is judged to be an effective target image. Face size is used in the same way to judge whether the face region is usable.
As the target images are filtered, the target images that remain among those having face regions are the effective target images.
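The filtering step can be sketched as follows, using the mean-absolute-gradient-to-tonal-range ratio described above; the threshold values here are illustrative assumptions, since the patent does not specify them:

```python
import numpy as np

def face_sharpness(region):
    # ratio of the mean absolute gradient to the tonal range of the face region
    region = region.astype(float)
    gy, gx = np.gradient(region)
    grad = (np.abs(gx).mean() + np.abs(gy).mean()) / 2
    tonal = region.max() - region.min()
    return grad / tonal if tonal > 0 else 0.0

def filter_valid(faces, blur_thresh=0.02, min_pixels=40 * 40):
    """Keep face regions that are both sharp enough and large enough."""
    return [f for f in faces
            if f.size >= min_pixels and face_sharpness(f) >= blur_thresh]
```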
As shown in Fig. 4, in one embodiment, step 211 comprises:
Step 2111: extract classification features from the target image and feed them into a cascade of strong classifiers to obtain a target image having a face region together with its face parameters.
In this embodiment, the classification features extracted from the target image may be Haar (rectangular) features, and face detection may then be performed using the extracted Haar features together with adaptive boosting classification, yielding the target images having face regions among the input images as well as the face parameters of each such image, where the face parameters include the face position and face size within the target image.
In a preferred embodiment, for convenience of subsequent processing, the target images having face regions are sorted by face size in descending order.
Step 2113: obtain a preset sample image, match the target image against the face shape in the sample image according to the face parameters, and obtain an initial face region and a target image having that initial face region.
In this embodiment, a sample image is prepared in advance, in which several key points of the face have been annotated; the distribution of the key points characterizes the distribution of the facial features in the sample image.
Because the face shape in the sample image and the face region in the target image lie in different coordinate systems, the sample image needs to be adjusted according to the face parameters obtained during detection, so that the coordinate system of the face shape in the adjusted sample image aligns with that of the target image and the adjusted face shape best matches the face in the target image, i.e. the sum of deviations between the coordinates of each key point of the face shape and the coordinates of the face region is minimized; the initial face region in the target image can thereby be determined.
Further, face images of various ages, genders and poses are prepared in advance as training images, each annotated with key points beforehand; the annotated facial objects include the eyebrows, eyes, nose, mouth and facial contour.
For example, the training image shown in Fig. 5 is annotated with 88 key points, each placed to pixel accuracy; the annotation criteria adopted are shown in Fig. 6 to Fig. 12.
For each training image, the shape formed by its 88 key points is first modeled by PCA (principal component analysis), yielding the mean shape together with the eigenvector matrix U and the eigenvalue matrix Λ; the texture information of the 88 key points is then modeled in turn.
Among the 88 key points, for any key point q, a line segment of 7 pixels is sampled point by point along a specific direction centered on q, and the gray values of the 7 pixels on this segment are taken as the texture feature v_q of key point q.
The texture feature of each key point is computed in this way, and the mean of the texture features over the training images and the inverse matrix M of the corresponding covariance matrix are then obtained.
K-means clustering is performed on the training images, and the cluster means are used as the templates of the facial objects, i.e. the sample images.
After the sample image is obtained, its mean shape, which is the face shape of the sample image, is adjusted according to the face parameters of the target image; the adjustment includes scaling, rotation and translation of the face shape, so that the coordinate system of the adjusted face shape aligns with that of the target image and the adjusted face shape best matches the face in the target image, yielding the initial face region s_t.
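The scaling-rotation-translation alignment that minimizes the sum of key-point deviations can be written as a least-squares similarity transform. The patent does not give the exact estimator, so this closed-form Procrustes/Umeyama solution is an assumption of the sketch:

```python
import numpy as np

def similarity_align(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    minimizing sum ||s*R*src_i + t - dst_i||^2 over point sets src, dst (N x 2).
    Returns (s, R, t); the aligned shape is s * src @ R.T + t."""
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    cov = B.T @ A / n                       # cross-covariance of the point sets
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))      # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    var_src = (A ** 2).sum() / n
    s = np.trace(np.diag(S) @ D) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Applying the recovered transform to the mean shape places it over the detected face, giving the initial face region.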
In one embodiment, step 2111 proceeds as follows: perform a multi-scale spatial search over the target image to obtain the feature of each search window, input the feature of each search window into the cascade of strong classifiers, and obtain the target images having face regions according to the decisions of the cascade.
In this embodiment, the multi-scale spatial search is performed over the input target image with windows of different sizes and positions to obtain the feature of each search window.
The cascade of strong classifiers is built in advance from samples; it judges from the feature of each input search window whether the target image has a face region, and if so, the target image is judged to be a target image having a face region and its face parameters, i.e. the face position and face size, are output.
Further, to obtain the cascade of strong classifiers, a predetermined number of face images and non-face images are collected as positive and negative samples respectively, features are extracted from the samples, and the best features with their thresholds and weights are selected by an adaptive boosting classifier, yielding the cascade of strong classifiers.
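The multi-scale window search through a cascade can be sketched as below. The stage classifiers here are stand-ins (simple callables paired with thresholds), not the trained Haar/AdaBoost stages of the patent; a window is accepted only if every stage accepts it:

```python
import numpy as np

def sliding_windows(h, w, min_size=24, scale=1.25, step_frac=0.25):
    """Yield (x, y, size) windows over an h x w image at multiple scales."""
    size = min_size
    while size <= min(h, w):
        step = max(1, int(size * step_frac))
        for y in range(0, h - size + 1, step):
            for x in range(0, w - size + 1, step):
                yield x, y, size
        size = int(size * scale)

def detect_faces(img, cascade):
    """cascade: list of (classify, threshold) stages; classify maps a window
    to a score. A window survives only if every stage accepts it."""
    hits = []
    for x, y, size in sliding_windows(*img.shape):
        win = img[y:y + size, x:x + size]
        if all(clf(win) >= thr for clf, thr in cascade):
            hits.append((x, y, size))
    return hits
```

The surviving windows give the face position and face size, i.e. the face parameters of step 2111.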
Step 2115: perform shape optimization on the initial face region in the target image having a face region, to obtain the optimized face region and the target image having that face region.
In this embodiment, the initial face region in the target image is iteratively optimized to update the key point positions in the target image, producing the optimized face region.
The iterative optimization of the initial face region specifically comprises:
(1) For each key point P in the initial face region, sample 15 pixels point by point along a specific direction centered on P to form a line segment. For any point r on this segment, extract, centered on r and along the segment direction, a sub-segment of 7 pixels, and take the gray values of those 7 pixels as the texture feature v_r of point r.
(2) For each of the 15 points r, compute the distance between v_r and the training model {v̄, M} corresponding to key point P, d(r) = (v_r − v̄)^T M (v_r − v̄), where v̄ is the mean texture feature and M the inverse covariance matrix obtained during training. Select the point r* with the smallest distance as the new key point P. Update all key points in this way to obtain a new shape s*.
(3) For the new shape s*, compute for each eigenvector u_i in U, with corresponding eigenvalue λ_i, the projection coefficient ω_i = u_i^T (s* − s̄), where s̄ is the mean shape, and constrain ω_i to the range [−3√λ_i, 3√λ_i]: when ω_i < −3√λ_i, set ω_i to −3√λ_i; when ω_i > 3√λ_i, set ω_i to 3√λ_i. Then perform the PCA reconstruction s_{t+1} = s̄ + Σ_i ω_i u_i to obtain the shape s_{t+1}.
(4) Repeat the above iterative optimization on s_{t+1} until the change in s_{t+1} falls below a predetermined threshold, yielding the optimized face region.
Optimizing the face region in this way effectively guarantees the accuracy of the obtained face region.
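Step (3) above, projecting the candidate shape into the PCA shape space and clamping each coefficient, can be sketched as follows; the ±3√λ bound is the conventional active-shape-model choice, assumed here because the original bound is not legible in the source:

```python
import numpy as np

def pca_constrain(shape, mean_shape, U, eigvals, limit=3.0):
    """Project a candidate shape onto the PCA shape space, clamp each
    coefficient to +-limit*sqrt(lambda_i), and reconstruct.
    U: (2N x K) eigenvector matrix; eigvals: (K,) eigenvalues."""
    b = U.T @ (shape - mean_shape)          # projection coefficients omega_i
    bound = limit * np.sqrt(eigvals)
    b = np.clip(b, -bound, bound)
    return mean_shape + U @ b               # PCA reconstruction s_{t+1}
```

The clamp keeps the reconstructed shape within the range of plausible face shapes learned from the training images.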
As shown in Fig. 13, in one embodiment, step 250 comprises:
Step 251: cluster the unmatched effective target images to obtain a cluster result set.
In this embodiment, there are generally multiple unmatched effective target images, so the cluster result set is obtained by clustering them. The cluster result set contains several sets of effective target images, i.e. the final face categories, and the faces in the effective target images within each set match one another.
Step 253: through an annotation operation triggered on the cluster result set, obtain the image information corresponding to each final face category in the cluster result set, and associate the cluster result set with that image information.
In this embodiment, a manually triggered annotation operation may be received; the annotation operation marks a final face category in the cluster result set with image information, thereby associating the category with that image information.
Specifically, the cluster result set obtained by clustering is displayed so that users or back-end developers can manually annotate the final face categories in it.
The page used to display the cluster result set may be a virtual social network page or some other page, set flexibly according to the actual application scenario.
As shown in Fig. 14, in one embodiment, step 251 comprises:
Step 2511: compute, for each pair of unmatched effective target images, the similarity and the Rank-Order distance between the two images.
In this embodiment, the pairwise computation over the effective target images that failed to obtain image information is carried out on the face regions of those images.
For the Rank-Order distance, given a fixed similarity matrix G, the face regions u and v are each used as a query to rank the remaining elements a_1, a_2, …, a_m and b_1, b_2, …, b_n by similarity, giving the rankings R_u and R_v and the position function N(u, a_k):
R_u = u, a_1, a_2, …, a_m, v
R_v = v, b_1, b_2, …, b_n, u
N(u, a_k) = q, if b_q = a_k
i.e. q is the position at which a_k appears in the ranking R_v.
To measure the dissimilarity of u and v reflected by the order relation between their neighbours, the Rank-Order distance between u and v is defined as:
Rank-Order(u, v) = (o(u) + o(v)) / max{o(u), o(v)}
o(u) = Σ_{1<k<m} N(u, a_k)
o(v) = Σ_{1<k<n} N(v, b_k)
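A small sketch of this computation over precomputed neighbour rankings; the list-based representation is illustrative, and assigning the worst possible rank to an absent neighbour is an assumption of the sketch:

```python
def rank_order_distance(u, v, neighbors):
    """neighbors[x] is x's similarity-ranked neighbour list (closest first).
    o(u) sums, over u's neighbours a_k, the position of a_k in v's ranking;
    the distance normalizes the two asymmetric sums as in the text above."""
    def rank(x, lst):
        return lst.index(x) if x in lst else len(lst)
    def o(a, b):
        return sum(rank(a_k, neighbors[b]) for a_k in neighbors[a])
    ou, ov = o(u, v), o(v, u)
    m = max(ou, ov)
    return (ou + ov) / m if m else 0.0
```

A small distance means u and v share closely ranked neighbours and are likely the same face.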
Step 2513: using the similarity and the Rank-Order distance as merging conditions, merge the effective target images to obtain several initial face categories.
In this embodiment, effective target images are merged according to their similarity and Rank-Order distance, so that similar effective target images are grouped into one initial face category.
Step 2515: classify each initial face category as a significant class or a non-significant class according to its face count.
In this embodiment, the number of faces in each initial face category is obtained and compared with a significance threshold; if the count exceeds the threshold, the category is classified as a significant class, otherwise as a non-significant class.
Step 2517: compute the matching degree between the significant classes and the non-significant classes, and merge them according to the matching degree to obtain the cluster result set.
In this embodiment, the matching degree between a significant class and a non-significant class is computed via the Hausdorff distance.
Specifically, the Hausdorff distance measures the greatest mismatch between two sets, i.e. between any significant class A and any non-significant class B; with a_i ∈ A and b_j ∈ B,
Hausdorff(A, B) = max{ max_i min_j d(a_i, b_j), max_j min_i d(a_i, b_j) }
where d is the distance between two elements.
It is judged whether the computed Hausdorff distance is below a matching threshold; if so, the significant class A and the non-significant class B are merged into one set to obtain a final face category. Processing all classes in this way yields several final face categories, which constitute the cluster result set.
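The distance and the merging pass can be sketched as below; representing each class as a list of feature vectors and attaching each non-significant class to its single closest significant class are simplifying assumptions of this sketch:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two feature sets (n x d, m x d)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise dists
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def merge_clusters(significant, insignificant, match_thresh):
    """Attach each non-significant class to the closest significant class
    whose Hausdorff distance falls below match_thresh; others stay separate."""
    result = [list(c) for c in significant]
    leftovers = []
    for small in insignificant:
        dists = [hausdorff(np.vstack(c), np.vstack(small)) for c in significant]
        i = int(np.argmin(dists))
        if dists[i] < match_thresh:
            result[i].extend(small)
        else:
            leftovers.append(list(small))
    return result + leftovers
```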
In another embodiment, the method described above further comprises a step of recommending effective target images according to the associated image information.
In this embodiment, an effective target image is recommended according to the image information associated with it, so that the corresponding user can view the recommended effective target image through a certain page, which greatly improves the convenience and accuracy of image recommendation.
With this recommendation mechanism, batches of target images can also be recommended accurately to the corresponding users through face detection, filtering, feature extraction and similarity processing, greatly improving processing efficiency and achieving fast, accurate batch image recommendation.
In another embodiment, before step 210, the method described above further comprises a step of uploading the target image through a virtual social network page.
As shown in Fig. 15, the step of recommending effective target images according to the associated image information comprises:
Step 1501: extract the user ID from the image information.
Step 1503: push the effective target image according to the user ID, so that the effective target image is displayed on the virtual social network page of that user.
In this embodiment, according to the user ID recorded in the image information associated with the effective target image, the effective target image is pushed to the virtual social network page of that user, achieving real-time response in image recommendation.
With the method described above, massive numbers of target images can be processed quickly, the users corresponding to the faces in the target images can be identified accurately, and the target images can be recommended to the right users with excellent accuracy.
In one embodiment, the computer system on which the above method for implementing image processing runs may be as shown in Fig. 16; the computer system comprises a virtual social network application 10 and a server 30.
The virtual social network application 10 runs, in the form of a web page loaded in a browser or in the form of a client, on a terminal device used by the user, such as a PC, a laptop, a tablet or a smartphone. The server 30 interacts with the virtual social network application.
It should be noted that when the above image recommendation method runs on this computer system, the user uploads target images through the virtual social network page without needing to designate, one by one, the target users appearing in them. The server 30 identifies the target user corresponding to each target image, i.e. the user whose face appears in it, and pushes the uploaded target image to that target user's virtual social network page for browsing. Fast and accurate image recommendation is thereby achieved, which is conducive to performing batch image recommendation quickly.
As shown in Figure 17, in one embodiment, a system for implementing image processing comprises an effective image acquisition device 1710, an extraction device 1730, a matching device 1750 and an association device 1770.
The effective image acquisition device 1710 is configured to obtain effective target images that contain a face region.
In the present embodiment, an effective target image is an image that contains a face region and has relatively high image quality, for example, an image that contains a face region and is free of problems such as blurring.
The extraction device 1730 is configured to extract the facial features of the effective target image.
In the present embodiment, the extraction device 1730 performs feature extraction on the face region contained in the effective target image, so as to obtain the features corresponding to the face region in the effective target image. The facial features extracted by the extraction device 1730 may be Gabor features, or may be features of other forms, which are not enumerated here.
Further, when the facial features extracted by the extraction device 1730 from the effective target image are Gabor features, the effective target image is first normalized in scale and illumination; the normalized effective target image is then convolved with Gabor filters of multiple scales and multiple orientations to obtain feature coefficients.
The feature coefficients so obtained are then reduced in dimensionality; for example, principal component analysis (PCA) may be used to reduce the high-dimensional feature coefficients to 6400 dimensions, yielding the facial features corresponding to the effective target image.
In actual operation, the width and height of the effective target image are both normalized to 80 × 80, the mean pixel value is normalized to 0 and the pixel value variance is normalized to 1; the normalized effective target image is then convolved with Gabor filters of 5 scales and 8 orientations, producing 80 × 80 × 5 × 8 features.
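The normalization step and the resulting feature dimensionality can be sketched as follows. This is a minimal pure-Python illustration: the 80 × 80 crop size, the 5 scales and the 8 orientations come from the text above, while the helper names are ours and the Gabor convolution itself is omitted.

```python
def normalize_face(pixels):
    """Normalize a flat list of grayscale values to mean 0, variance 1."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = var ** 0.5 if var > 0 else 1.0
    return [(p - mean) / std for p in pixels]

def gabor_feature_dim(size=80, scales=5, orientations=8):
    """One filter response per pixel, per scale, per orientation."""
    return size * size * scales * orientations

# The raw Gabor response is 80 x 80 x 5 x 8 = 256000 values,
# which PCA then reduces to 6400 dimensions.
```

After this normalization, every face crop contributes responses on the same scale, which is what makes the subsequent PCA projection meaningful.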
The matching device 1750 is configured to match the facial features of the effective target image against the preset facial features of reference images.
In the present embodiment, reference images annotated with corresponding image information are stored in advance. The image information annotated on a reference image may be labelled manually or obtained in advance by the method of the present invention, and provides known faces for determining the image information corresponding to the faces in the effective target images. The image information includes information such as a user ID and a user nickname, so that the user to whom the reference image belongs can be determined from the annotated image information.
The matching device 1750 obtains the reference images annotated with image information and matches them against the effective target image, judging from the facial features whether a reference image is similar to the effective target image; if no reference image is similar, the effective target image fails to obtain corresponding image information.
Specifically, the matching device 1750 matches the known faces in the obtained reference images against the face in the effective target image, so as to find the known face in the reference images that is similar to the face in the effective target image.
The association device 1770 is configured to associate a matched effective target image with the image information of the reference image.
In the present embodiment, the association device 1770 obtains the image information from the reference image matched with the effective target image, and then associates the effective target image with the obtained image information.
In the manner described above, batches of effective target images are accurately associated with the corresponding image information through feature extraction and matching, which greatly improves processing efficiency and achieves fast and accurate batch image processing.
As shown in Figure 18, in one embodiment, the effective image acquisition device 1710 described above comprises a face detection module 1711 and a filtering module 1713.
The face detection module 1711 is configured to perform face detection on target images to obtain the target images that contain a face region.
In the present embodiment, target images are input in some manner; for example, a target image may be uploaded by a user through a certain page to a back-end server, in which case the back-end server acquires the target image uploaded by the user.
Specifically, several servers may be built into a distributed computing cluster, so that multiple target images are distributed to different servers and processed simultaneously to improve response speed. In actual operation, with 100 servers, the distributed computing cluster can process hundreds of target images at the same time.
Since a target image may contain one or more faces, or may contain no face at all, the face detection module 1711 performs face detection on the target images so as to identify, among them, the target images that contain a face region.
The filtering module 1713 is configured to filter the target images that contain a face region to obtain the effective target images.
In the present embodiment, target images of low image quality suffer from problems such as blurring; therefore, the filtering module 1713 filters the target images containing a face region obtained by face detection, and the target images that survive the filtering are the effective target images.
In one embodiment, the filtering module 1713 is further configured to obtain the parameters corresponding to the face region in a target image that contains a face region, and to judge from these parameters whether the face region is blurred; if it is, the target image is rejected, and if it is not, the target image is taken as an effective target image.
The target images remaining among the target images that contain a face region are the effective target images.
In the present embodiment, the parameters used by the filtering module 1713 to judge whether the face region is blurred may be sharpness and/or face size. Specifically, the filtering module 1713 may compute the sharpness of the face region in the target image as the ratio between the mean absolute gradient of the face region and its tonal range. The face size is the area of the face region obtained by face detection, i.e., its total number of pixels.
For the sharpness of the face region, the filtering module 1713 judges whether the sharpness is below a set sharpness threshold; if it is, the face region is considered too blurred and the image is filtered out, and if it is not, the image is judged to be an effective target image. The face size is used for the same kind of judgement in the manner described above.
As the target images are filtered, the target images remaining among those that contain a face region are the effective target images.
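The sharpness-and-size filter described above can be sketched as follows. The ratio of mean absolute gradient to tonal range follows the text; the concrete threshold values, the use of horizontal gradients only, and the function names are our assumptions.

```python
def sharpness(region):
    """Ratio of the mean absolute horizontal gradient of a face crop
    to its tonal range (max - min gray value)."""
    grads = [abs(row[x + 1] - row[x])
             for row in region for x in range(len(row) - 1)]
    flat = [p for row in region for p in row]
    tonal_range = max(flat) - min(flat)
    if tonal_range == 0:
        return 0.0  # completely flat region carries no face detail
    return (sum(grads) / len(grads)) / tonal_range

def is_effective(region, sharpness_threshold=0.02, min_face_pixels=1600):
    """Keep a face region only if it is sharp enough and large enough."""
    face_pixels = len(region) * len(region[0])
    return sharpness(region) >= sharpness_threshold and face_pixels >= min_face_pixels
```

Rejected crops are simply dropped, so only effective target images flow into feature extraction.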
As shown in Figure 19, the face detection module 1711 described above comprises a feature classification unit 17111, a sample matching unit 17113 and a shape optimization unit 17115.
The feature classification unit 17111 is configured to extract classification features from the target image and to input the classification features into a cascade of strong classifiers, so as to obtain the target images that contain a face region together with the face parameters in those target images.
In the present embodiment, the classification features extracted by the feature classification unit 17111 from the target image may be Haar features (rectangular features); face detection may then be performed using the extracted Haar features and the adaptive boosting (AdaBoost) classification method, so as to obtain the input target images that contain a face region and the face parameters in those target images, where the face parameters include the face position and the face size in the target image.
In a preferred embodiment, to facilitate subsequent processing, the feature classification unit 17111 sorts the target images that contain a face region by face size, in descending order.
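The early-rejection behaviour of a cascade of strong classifiers, and the descending sort by face size, can be sketched as follows. This is a toy illustration: the stage weights and thresholds are placeholders, not trained AdaBoost values.

```python
def cascade_classify(features, stages):
    """stages: list of (weights, threshold) pairs. A window must pass every
    stage; most non-face windows are rejected by the cheap early stages."""
    for weights, threshold in stages:
        score = sum(w * f for w, f in zip(weights, features))
        if score < threshold:
            return False  # rejected: no face in this window
    return True

def sort_by_face_size(detections):
    """detections: list of dicts with face parameters 'x', 'y', 'w', 'h';
    sort by face area, largest first, as in the preferred embodiment."""
    return sorted(detections, key=lambda d: d["w"] * d["h"], reverse=True)
```

The cascade structure is what makes scanning every window of every scale affordable: a window only pays for all stages if it looks face-like throughout.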
The sample matching unit 17113 is configured to obtain a preset sample image and to match the target image against the face shape in the sample image according to the face parameters, so as to obtain the initial face region and the target image containing the initial face region.
In the present embodiment, a sample image is prepared in advance, in which several key points of the face are annotated; the distribution of the key points characterizes the distribution of the facial features of the face in the sample image.
Since the face shape in the sample image and the face region in the target image lie in different coordinate systems, the sample image must be adjusted according to the face parameters obtained by detection, so that the coordinate system of the face shape in the adjusted sample image is aligned with that of the target image and the adjusted face shape best matches the face in the target image; that is, the sum of deviations between the coordinates of each key point of the face shape and the coordinates of the face region is minimized. The initial face region in the target image can thereby be determined.
Further, facial images of various ages, genders and poses are prepared in advance as training images; key points are annotated on each training image in advance, and the annotated facial objects include the eyebrows, eyes, nose, mouth and facial contour.
For example, a training image as shown in Figure 5 is annotated with 88 key points; the annotation is accurate to the pixel, and the annotation criteria adopted are as shown in Figures 6 to 12.
For each training image, the shape formed by its 88 key points is first subjected to PCA (Principal Component Analysis) modelling to obtain the mean shape, the eigenvector matrix U and the eigenvalue matrix Λ; the texture information of the 88 key points is then modelled in turn.
Among the 88 key points, for any key point q, a line segment of 7 pixels is sampled point by point along a specific direction, centred on the key point q, and the grayscale values of the 7 pixels on this line segment are taken as the texture feature v_q of the key point q.
The texture feature corresponding to each key point is computed in the manner described above, and the mean of the texture features over the training images and the inverse matrix M of the corresponding covariance matrix are then obtained.
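The 7-pixel texture feature v_q of a key point can be sampled as follows. This is a sketch: the function name is ours, and the horizontal direction in the example is only one choice of the "specific direction" mentioned in the text.

```python
def texture_profile(image, x, y, dx, dy, half=3):
    """Grayscale values of 2*half+1 pixels sampled through (x, y) along
    direction (dx, dy); image is a 2-D list indexed [row][col].
    With half=3 this yields the 7-pixel profile used in the text."""
    return [image[y + i * dy][x + i * dx] for i in range(-half, half + 1)]
```

In practice the direction is usually taken normal to the shape contour at the key point, so the profile cuts across the facial edge being modelled.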
K-means clustering is performed on the training images, and the cluster means obtained are used as the templates of the facial objects, i.e., the sample images.
After the sample image is obtained, the sample matching unit 17113 adjusts the mean shape in the sample image, this mean shape being the face shape of the sample image, according to the face parameters of the target image. The adjustment includes scaling, rotation and translation of the face shape, so that the coordinate system of the adjusted face shape is aligned with that of the target image and the adjusted face shape best matches the face in the target image, yielding the initial face region s_t.
In one embodiment, the sample matching unit 17113 is further configured to perform a multi-scale spatial search on the target image to obtain the features of each search window, to input the features of each search window into the cascade of strong classifiers, and to obtain the target images containing a face region according to the decision of the cascade of strong classifiers.
In the present embodiment, the sample matching unit 17113 searches the input target image with windows of different sizes and positions, so as to obtain the features of each search window.
The cascade of strong classifiers is trained in advance from certain samples. According to the features of each input search window, the sample matching unit 17113 judges through the cascade of strong classifiers whether the target image contains a face region; if it does, the target image is judged to be a target image containing a face region, and the face parameters in the target image, i.e., the face position and face size, are output.
Further, to obtain the cascade of strong classifiers, a predetermined number of facial images and non-facial images are collected as positive and negative samples respectively, features are extracted from the samples, and the best features together with the corresponding thresholds and weights are selected by the adaptive boosting classifier, so as to obtain the cascade of strong classifiers.
The shape optimization unit 17115 is configured to perform shape optimization on the initial face region in the target images containing a face region, so as to obtain the optimized face region and the target image containing the face region.
In the present embodiment, within a target image containing a face region, the shape optimization unit 17115 iteratively optimizes the initial face region to update the key point positions in the target image and obtain the optimized face region.
The iterative optimization of the initial face region specifically comprises:
(1) For each key point P in the initial face region, 15 pixels are sampled point by point along a specific direction, centred on P, to form a line segment. For any point r on this line segment, a sub-segment of 7 pixels centred on r is extracted along the direction of the line segment, and the grayscale values of the 7 pixels on the sub-segment are taken as the texture feature v_r of the point r.
(2) For each of the 15 candidate points r, the distance f(v_r) = (v_r − v̄)ᵀ M (v_r − v̄) between its texture feature and the training model {v̄, M} corresponding to the key point P is computed, where v̄ is the mean texture feature and M the inverse covariance matrix obtained during training. The candidate point r* with the minimum distance is selected as the new key point P. All key points are updated in this way in turn, yielding a new shape s*.
(3) In the new shape s*, for each eigenvector u_i in the eigenvector matrix U and its corresponding eigenvalue λ_i, the inner product ω_i = u_iᵀ(s* − s̄), where s̄ is the mean shape, is taken as the projection coefficient. Each ω_i is constrained to the range [−3√λ_i, 3√λ_i]: when ω_i is less than −3√λ_i, ω_i is set directly to −3√λ_i, and when ω_i is greater than 3√λ_i, ω_i is set directly to 3√λ_i. PCA reconstruction s_{t+1} = s̄ + Uω is then performed to obtain the shape s_{t+1}.
(4) The iterative optimization described above is repeated on s_{t+1} until the change in s_{t+1} falls below a predetermined threshold, yielding the optimized face region.
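The two core operations of each iteration, picking the best candidate point by Mahalanobis distance and clamping the PCA coefficients, can be sketched as follows. For brevity the inverse covariance is assumed diagonal here, and ±3√λ is the usual Active Shape Model bound; both simplifications and all names are ours.

```python
def best_candidate(profiles, mean_profile, inv_cov_diag):
    """Index of the candidate texture profile closest to the trained model
    {mean, M} in Mahalanobis distance (M assumed diagonal for brevity)."""
    def mahalanobis(v):
        return sum(m_ii * (a - m) ** 2
                   for a, m, m_ii in zip(v, mean_profile, inv_cov_diag))
    return min(range(len(profiles)), key=lambda i: mahalanobis(profiles[i]))

def clamp_coefficients(coeffs, eigenvalues, k=3.0):
    """Constrain each PCA projection coefficient omega_i to
    [-k * sqrt(lambda_i), k * sqrt(lambda_i)] so the reconstructed
    shape stays within the span of plausible training shapes."""
    clamped = []
    for w, lam in zip(coeffs, eigenvalues):
        bound = k * lam ** 0.5
        clamped.append(max(-bound, min(bound, w)))
    return clamped
```

The clamp is what prevents a noisy candidate search from dragging the shape into configurations the training set never exhibited.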
Optimizing the face region in the manner described above effectively guarantees the accuracy of the face region obtained.
As shown in Figure 20, in one embodiment, the system described above further comprises a cluster computation device 2010 and an annotation device 2030.
The cluster computation device 2010 is configured to cluster the unmatched effective target images, so as to obtain a cluster computation result set.
In the present embodiment, there are generally multiple unmatched effective target images; the cluster computation device 2010 therefore clusters the multiple unmatched effective target images to obtain the cluster computation result set. The cluster computation result set contains several sets formed from effective target images, i.e., the final face classification sets, and the faces in the effective target images contained in each set match one another.
The annotation device 2030 is configured to obtain, through an annotation operation triggered on the cluster computation result set, the image information corresponding to each final face classification set in the cluster computation result set, and to associate the cluster computation result set with that image information.
In the present embodiment, the annotation device 2030 acquires a manually triggered annotation operation, which annotates the final face classification sets in the cluster computation result set with image information, so that each final face classification set is associated with certain image information through the annotation operation.
Specifically, the annotation device 2030 displays the cluster computation result set obtained by clustering, so that the user or a back-end developer can manually annotate the final face classification sets in the cluster computation result set.
The page used to display the cluster computation result set may be the virtual social web page or some other page, set flexibly according to the actual application scenario.
As shown in Figure 21, in one embodiment, the cluster computation device 2010 described above comprises a pairwise computation module 2011, a merging module 2013, a classification module 2015 and a fusion module 2017.
The pairwise computation module 2011 is configured to perform pairwise computations on the unmatched effective target images, so as to obtain the similarity and the Rank-Order distance between each pair of effective target images.
In the present embodiment, the pairwise computations performed by the pairwise computation module 2011 on the effective target images that failed to obtain corresponding image information are carried out on the face regions of those effective target images.
For the Rank-Order distance, given a fixed similarity matrix G, the remaining elements a_1, a_2, …, a_m and b_1, b_2, …, b_n are ranked by similarity with the face regions u and v respectively as queries, yielding the rankings R_u and R_v and the position function N(u, a_k), that is:
R_u = u, a_1, a_2, …, a_m, v
R_v = v, b_1, b_2, …, b_n, u
N(u, a_k) = q, if b_q = a_k
where q is the position of a_k in the ranking R_v.
To express the dissimilarity between u and v reflected by the order relations among the neighbours of u and v, the Rank-Order distance between u and v is defined as:
Rank-Order(u, v) = (o(u) + o(v)) / max{o(u), o(v)}
o(u) = Σ_{1&lt;k&lt;m} N(u, a_k)
o(v) = Σ_{1&lt;k&lt;n} N(v, b_k)
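The definition above can be sketched directly in pure Python. The ranking lists are assumed to follow the form of R_u and R_v given in the text, beginning with the query itself; summing over all intermediate neighbours, and penalising a face absent from the other ranking with the ranking length, are our assumptions.

```python
def rank_order_distance(rank_u, rank_v):
    """rank_u: faces ranked by similarity to u, i.e. [u, a1, ..., am, v];
    rank_v: faces ranked by similarity to v, i.e. [v, b1, ..., bn, u].
    N(u, a_k) is the position of a_k in rank_v."""
    pos_in_v = {face: q for q, face in enumerate(rank_v)}
    pos_in_u = {face: q for q, face in enumerate(rank_u)}
    # Faces missing from the other ranking get the worst position (list length).
    o_u = sum(pos_in_v.get(a, len(rank_v)) for a in rank_u[1:-1])  # sum of N(u, a_k)
    o_v = sum(pos_in_u.get(b, len(rank_u)) for b in rank_v[1:-1])  # sum of N(v, b_k)
    denominator = max(o_u, o_v)
    return (o_u + o_v) / denominator if denominator else 0.0
```

Two faces whose neighbour rankings agree closely yield a small o(u) and o(v), so faces of the same person tend to have low Rank-Order distance even when their raw similarity is moderate.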
The merging module 2013 is configured to merge the effective target images using the similarity and the Rank-Order distance as merging conditions, so as to obtain several initial face classification sets.
In the present embodiment, the merging module 2013 merges the effective target images according to the similarity and the Rank-Order distance, so that similar effective target images are grouped into the same initial face classification set.
The classification module 2015 is configured to classify each initial face classification set as a significant class or a non-significant class according to the number of faces in the set.
In the present embodiment, the classification module 2015 obtains the number of faces in each initial face classification set and judges whether it exceeds a significance threshold; if it does, the initial face classification set is classified as a significant class, and if it does not, as a non-significant class.
The fusion module 2017 is configured to compute the matching degree between the significant classes and the non-significant classes, and to merge significant and non-significant classes according to the matching degree to obtain the cluster computation result set.
In the present embodiment, the matching degree between a significant class and a non-significant class is computed via the Hausdorff distance.
Specifically, the Hausdorff distance measures the greatest mismatch between two sets, namely between any significant class A and any non-significant class B. With a_i ∈ A and b_j ∈ B:
Hausdorff(A, B) = max{ max_{a_i∈A} min_{b_j∈B} d(a_i, b_j), max_{b_j∈B} min_{a_i∈A} d(a_i, b_j) }
where d(·, ·) is the distance between two face regions, and the indices i and j range over all elements of the respective sets.
The fusion module 2017 judges whether the computed Hausdorff distance is below a matching threshold; if it is, the significant class A and the non-significant class B are merged into one set, obtaining a final face classification set. The several final face classification sets obtained by this process constitute the cluster computation result set.
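The Hausdorff-distance test and the merge decision can be sketched as follows; the function names and the set representation are ours, and d is any pairwise face-region distance supplied by the caller.

```python
def hausdorff(A, B, d):
    """Greatest mismatch between face sets A and B under distance d:
    the larger of the two directed distances."""
    h_ab = max(min(d(a, b) for b in B) for a in A)
    h_ba = max(min(d(a, b) for a in A) for b in B)
    return max(h_ab, h_ba)

def maybe_merge(A, B, d, matching_threshold):
    """Merge a significant class A with a non-significant class B when their
    Hausdorff distance falls below the matching threshold; else keep apart."""
    if hausdorff(A, B, d) < matching_threshold:
        return A | B  # one final face classification set
    return None
```

Because the Hausdorff distance is driven by the worst-matched member, a non-significant class is absorbed only when every one of its faces is close to the significant class.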
In another embodiment, the system described above further comprises a recommendation device, which is configured to recommend the effective target image according to the associated image information.
In the present embodiment, the recommendation device recommends the effective target image according to the image information associated with it, so that the corresponding user can view the recommended effective target image through a certain page, greatly improving the convenience and accuracy of image recommendation.
Through the image recommendation described above, batches of target images are accurately recommended to the corresponding users through face detection, filtering, feature extraction and similarity processing, which greatly improves processing efficiency and achieves fast and accurate batch image recommendation.
In another embodiment, the system described above further comprises an upload device for uploading target images through the virtual social web page.
As shown in Figure 22, in one embodiment, the recommendation device described above comprises an identifier extraction module 2201 and a push module 2203.
The identifier extraction module 2201 is configured to extract the user ID from the image information.
The push module 2203 is configured to push the effective target image according to the user ID, so that the effective target image is displayed in the virtual social web page of the user identified by the user ID.
In the present embodiment, according to the user ID recorded in the image information associated with the effective target image, the push module 2203 pushes the effective target image into the virtual social web page of that user, thereby achieving a real-time response for image recommendation.
With the system described above, massive numbers of target images can be processed quickly, the users corresponding to the faces in the target images can be identified accurately, and the target images can be recommended to the corresponding users with excellent accuracy.
One of ordinary skill in the art will appreciate that all or part of the flows in the above method embodiments can be implemented by a computer program instructing the relevant hardware. The program can be stored in a computer-readable storage medium; in the embodiments of the present invention, the program can be stored in the storage medium of the computer system and executed by at least one processor in that computer system, so as to implement the flows of the method embodiments described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The embodiments above express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the claims of the present invention. It should be pointed out that a person of ordinary skill in the art can make various variations and improvements without departing from the inventive concept, all of which fall within the protection scope of the present invention. The protection scope of this patent shall therefore be determined by the appended claims.

Claims (18)

1. A method for implementing image processing, comprising the steps of:
obtaining an effective target image containing a face region;
extracting facial features of the effective target image;
matching the facial features of the effective target image against preset facial features of a reference image;
associating the matched effective target image with image information of the reference image.
2. The method according to claim 1, wherein the step of obtaining the effective target image containing a face region comprises:
performing face detection on target images to obtain target images containing a face region;
filtering the target images containing a face region to obtain the effective target image.
3. The method according to claim 2, wherein the step of performing face detection on target images to obtain the target images containing a face region comprises:
extracting classification features from the target image, and inputting the classification features into a cascade of strong classifiers to obtain the target images containing a face region and face parameters in the target images;
obtaining a preset sample image, and matching the target image against a face shape in the sample image according to the face parameters to obtain an initial face region and the target image containing the initial face region;
performing shape optimization on the initial face region in the target image containing a face region to obtain an optimized face region and the target image containing the face region.
4. The method according to claim 2, wherein the step of filtering the target images containing a face region to obtain the effective target image comprises:
obtaining parameters corresponding to the face region in the target image containing a face region, and judging from the parameters whether the face region is blurred; if it is, rejecting the target image;
the target images remaining among the target images containing a face region being the effective target images.
5. The method according to claim 1, wherein after the step of matching the facial features of the effective target image against the preset facial features of the reference image, the method further comprises:
clustering the unmatched effective target images to obtain a cluster computation result set;
obtaining, through an annotation operation triggered on the cluster computation result set, image information corresponding to each final face classification set in the cluster computation result set, and associating the cluster computation result set with the image information.
6. The method according to claim 5, wherein the step of clustering the unmatched effective target images to obtain the cluster computation result set comprises:
performing pairwise computations on the unmatched effective target images to obtain a similarity and a Rank-Order distance between each pair of effective target images;
merging the effective target images using the similarity and the Rank-Order distance as merging conditions to obtain several initial face classification sets;
classifying each initial face classification set as a significant class or a non-significant class according to the number of faces in the set;
computing a matching degree between the significant classes and the non-significant classes, and merging significant and non-significant classes according to the matching degree to obtain the cluster computation result set.
7. The method according to claim 2 or 5, wherein the method further comprises:
recommending the effective target image according to the associated image information.
8. The method according to claim 7, wherein before the step of obtaining the effective target image containing a face region, the method further comprises:
uploading target images through a virtual social web page.
9. The method according to claim 8, wherein the step of recommending the effective target image according to the associated image information comprises:
extracting a user ID from the image information;
pushing the effective target image according to the user ID, so that the effective target image is displayed in the virtual social web page of the user identified by the user ID.
10. A system for implementing image processing, comprising:
an effective image acquisition device for obtaining an effective target image containing a face region;
an extraction device for extracting facial features of the effective target image;
a matching device for matching the facial features of the effective target image against preset facial features of a reference image;
an association device for associating the matched effective target image with image information of the reference image.
11. The system according to claim 10, wherein the effective image acquisition device comprises:
a face detection module for performing face detection on target images to obtain target images containing a face region;
a filtering module for filtering the target images containing a face region to obtain the effective target image.
12. The system according to claim 11, wherein the face detection module comprises:
a feature classification unit for extracting classification features from the target image, and inputting the classification features into a cascade of strong classifiers to obtain the target images containing a face region and face parameters in the target images;
a sample matching unit for obtaining a preset sample image, and matching the target image against a face shape in the sample image according to the face parameters to obtain an initial face region and the target image containing the initial face region;
a shape optimization unit for performing shape optimization on the initial face region in the target image containing a face region to obtain an optimized face region and the target image containing the face region.
13. The system according to claim 11, wherein the filtering module is further configured to obtain parameters corresponding to the face region in the target image containing a face region, and to judge from the parameters whether the face region is blurred; if it is, the target image is rejected;
the target images remaining among the target images containing a face region being the effective target images.
14. The system according to claim 10, wherein the system further comprises:
a cluster computation device for clustering the unmatched effective target images to obtain a cluster computation result set;
an annotation device for obtaining, through an annotation operation triggered on the cluster computation result set, image information corresponding to each final face classification set in the cluster computation result set, and associating the cluster computation result set with the image information.
15. The system according to claim 14, wherein the cluster computation device comprises:
a pairwise computation module for performing pairwise computations on the unmatched effective target images to obtain a similarity and a Rank-Order distance between each pair of effective target images;
a merging module for merging the effective target images using the similarity and the Rank-Order distance as merging conditions to obtain several initial face classification sets;
a classification module for classifying each initial face classification set as a significant class or a non-significant class according to the number of faces in the set;
a fusion module for computing a matching degree between the significant classes and the non-significant classes, and merging significant and non-significant classes according to the matching degree to obtain the cluster computation result set.
16. The system according to claim 11 or 14, wherein the system further comprises:
a recommendation device for recommending the effective target image according to the associated image information.
17. The system according to claim 16, wherein the system further comprises:
an upload device, configured to upload target images through a virtual social network page.
18. The system according to claim 17, wherein the recommendation device comprises:
a marker extraction module, configured to extract a user identifier from the image information;
a pushing module, configured to push the effective target image according to the user identifier, so that the effective target image is displayed on the virtual social network page of the user identified by that identifier.
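Claims 17 and 18 describe the push path: a user identifier is extracted from the associated image information and the matched image is shown on that user's social page. A minimal sketch, assuming a dict-based schema with a hypothetical `'user_id'` key (the claims fix no schema or storage model):

```python
def extract_user_id(image_info):
    """Marker extraction module: pull the user identifier out of the associated
    image information ('user_id' is an assumed key name, not from the patent)."""
    return image_info.get("user_id")

def push_matched_images(matched, feeds):
    """Pushing module: route each matched effective target image to the feed of
    the user its image information names.

    matched -- iterable of (image, image_info) pairs
    feeds   -- dict mapping user_id -> list of images to display on that
               user's virtual social page
    """
    for image, info in matched:
        user_id = extract_user_id(info)
        if user_id is not None:          # skip images with no associated user
            feeds.setdefault(user_id, []).append(image)
    return feeds
```

A real system would replace the `feeds` dict with whatever store backs the virtual social page, but the routing logic is the same: identifier out of the association, image into that user's view.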
CN201410299852.5A 2014-06-26 2014-06-26 Method and system for implementing image processing Active CN105303150B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410299852.5A CN105303150B (en) 2014-06-26 2014-06-26 Method and system for implementing image processing

Publications (2)

Publication Number Publication Date
CN105303150A true CN105303150A (en) 2016-02-03
CN105303150B CN105303150B (en) 2019-06-25

Family

ID=55200401

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410299852.5A Active CN105303150B (en) Method and system for implementing image processing

Country Status (1)

Country Link
CN (1) CN105303150B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101795400A (en) * 2010-03-16 2010-08-04 上海复控华龙微系统技术有限公司 Method for actively tracking and monitoring infants and realization system thereof
CN102306082A (en) * 2011-09-13 2012-01-04 富泰华工业(深圳)有限公司 Electronic device and identification method for starting and unlocking same
CN102567483A (en) * 2011-12-20 2012-07-11 华中科技大学 Multi-feature fusion human face image searching method and system
CN102663413A (en) * 2012-03-09 2012-09-12 中盾信安科技(江苏)有限公司 Multi-gesture and cross-age oriented face image authentication method
CN103365922A (en) * 2012-03-30 2013-10-23 北京千橡网景科技发展有限公司 Method and device for associating images with personal information
CN103593654A (en) * 2013-11-13 2014-02-19 智慧城市系统服务(中国)有限公司 Method and device for face location

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107665350A (en) * 2016-07-29 2018-02-06 广州康昕瑞基因健康科技有限公司 Image-recognizing method and system and autofocus control method and system
CN106339219A (en) * 2016-08-19 2017-01-18 北京光年无限科技有限公司 Robot service awakening method and device
CN106777030A (en) * 2016-12-08 2017-05-31 北京小米移动软件有限公司 Information-pushing method and device
CN106777030B (en) * 2016-12-08 2020-12-25 北京小米移动软件有限公司 Information pushing method and device
CN107341464A (en) * 2017-03-31 2017-11-10 上海掌门科技有限公司 A kind of method, equipment and system for being used to provide friend-making object
CN106980688A (en) * 2017-03-31 2017-07-25 上海掌门科技有限公司 A kind of method, equipment and system for being used to provide friend-making object
WO2018176954A1 (en) * 2017-03-31 2018-10-04 上海掌门科技有限公司 Method, device and system for providing friend-making objects
CN107229691A (en) * 2017-05-19 2017-10-03 上海掌门科技有限公司 A kind of method and apparatus for being used to provide social object
CN108985873A (en) * 2017-05-30 2018-12-11 株式会社途伟尼 Cosmetics recommended method, the recording medium for being stored with program, the computer program to realize it and cosmetics recommender system
US11449702B2 (en) 2017-08-08 2022-09-20 Zhejiang Dahua Technology Co., Ltd. Systems and methods for searching images
WO2019029272A1 (en) * 2017-08-08 2019-02-14 Zhejiang Dahua Technology Co., Ltd. Systems and methods for searching images
WO2019080411A1 (en) * 2017-10-23 2019-05-02 平安科技(深圳)有限公司 Electrical apparatus, facial image clustering search method, and computer readable storage medium
CN110377774A (en) * 2019-07-15 2019-10-25 腾讯科技(深圳)有限公司 Carry out method, apparatus, server and the storage medium of personage's cluster
CN110377774B (en) * 2019-07-15 2023-08-01 腾讯科技(深圳)有限公司 Method, device, server and storage medium for person clustering
CN110704659A (en) * 2019-09-30 2020-01-17 腾讯科技(深圳)有限公司 Image list sorting method and device, storage medium and electronic device
CN110704659B (en) * 2019-09-30 2023-09-26 腾讯科技(深圳)有限公司 Image list ordering method and device, storage medium and electronic device
CN113837949A (en) * 2021-08-19 2021-12-24 广州医软智能科技有限公司 Image processing method and device
CN113837949B (en) * 2021-08-19 2024-01-19 广州医软智能科技有限公司 Image processing method and device

Also Published As

Publication number Publication date
CN105303150B (en) 2019-06-25

Similar Documents

Publication Publication Date Title
CN105303150A (en) Method and system for implementing image processing
CN108615010B (en) Facial expression recognition method based on parallel convolution neural network feature map fusion
US9514356B2 (en) Method and apparatus for generating facial feature verification model
CN108090406B (en) Face recognition method and system
CN110503076B (en) Video classification method, device, equipment and medium based on artificial intelligence
CN111310731A (en) Video recommendation method, device and equipment based on artificial intelligence and storage medium
CN111401145B (en) Visible light iris recognition method based on deep learning and DS evidence theory
CN112016464A (en) Method and device for detecting face shielding, electronic equipment and storage medium
CN110765882B (en) Video tag determination method, device, server and storage medium
US9082003B1 (en) System and method for adaptive face recognition
Chandran et al. Missing child identification system using deep learning and multiclass SVM
CN113449704B (en) Face recognition model training method and device, electronic equipment and storage medium
CN108960269A (en) Characteristic-acquisition method, device and the calculating equipment of data set
CN104978569A (en) Sparse representation based incremental face recognition method
Faraki et al. Material classification on symmetric positive definite manifolds
Devareddi et al. Review on content-based image retrieval models for efficient feature extraction for data analysis
CN113468925B (en) Occlusion face recognition method, intelligent terminal and storage medium
CN110472092B (en) Geographical positioning method and system of street view picture
Yu et al. A joint multi-task cnn for cross-age face recognition
Ameur et al. A new GLBSIF descriptor for face recognition in the uncontrolled environments
Elsayed et al. Hand gesture recognition based on dimensionality reduction of histogram of oriented gradients
CN110084110B (en) Near-infrared face image recognition method and device, electronic equipment and storage medium
Wang et al. A study of convolutional sparse feature learning for human age estimate
Zhi-Jie Image classification method based on visual saliency and bag of words model
CN111428734A (en) Image feature extraction method and device based on residual countermeasure inference learning and computer readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210917

Address after: 518057 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 floors

Patentee after: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

Patentee after: TENCENT CLOUD COMPUTING (BEIJING) Co.,Ltd.

Address before: 2, 518000, East 403 room, SEG science and Technology Park, Zhenxing Road, Shenzhen, Guangdong, Futian District

Patentee before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.