CN110135268A - Face comparison method, device, computer equipment and storage medium - Google Patents

Face comparison method, device, computer equipment and storage medium

Info

Publication number
CN110135268A
CN110135268A (application number CN201910309442.7A)
Authority
CN
China
Prior art keywords
grid
face
target
image
grating image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910309442.7A
Other languages
Chinese (zh)
Inventor
鞠汶奇
张阿强
刘子威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Original Assignee
Shenzhen Heertai Home Furnishing Online Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Heertai Home Furnishing Online Network Technology Co Ltd filed Critical Shenzhen Heertai Home Furnishing Online Network Technology Co Ltd
Priority to CN201910309442.7A priority Critical patent/CN110135268A/en
Publication of CN110135268A publication Critical patent/CN110135268A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The embodiments of the invention disclose a face comparison method, comprising: obtaining a target face image to be identified; taking the target face image as the input of a grid mapping model, the grid mapping model being used to divide the target face image into grids according to extracted face features; obtaining the target grid image output by the grid mapping model, the target grid image including multiple grid regions, each grid region corresponding to a grid identifier, and the grid identifier uniquely identifying one face region; obtaining the registered grid image corresponding to a registered face image in a face database; and performing feature comparison between the target grid image and the registered grid image according to the grid identifier corresponding to each grid region, to obtain a face comparison result. The face comparison method greatly improves the accuracy of face recognition. In addition, a face comparison apparatus, a computer device and a storage medium are also proposed.

Description

Face comparison method, device, computer equipment and storage medium
Technical field
The present invention relates to the technical field of face recognition, and in particular to a face comparison method, apparatus, computer device and storage medium.
Background technique
With the continuous development of artificial intelligence technology, face recognition has attracted increasing attention from experts in many technical fields, and its applications are ever more widespread, for example, unlocking a mobile phone through face recognition.
Traditional face recognition feeds a face image directly into a neural network model for feature extraction, and then computes the distance between the extracted features and the registered features to determine whether the two belong to the same person. However, the training samples used to train the neural network model are usually near-ideal images, while in actual prediction the captured face image is affected by shooting angle and other factors, so recognition accuracy is low.
Summary of the invention
In view of the above problems, it is necessary to propose a face comparison method, apparatus, computer device and storage medium with high recognition accuracy.
A face comparison method, the method comprising:
obtaining a target face image to be identified;
taking the target face image as the input of a grid mapping model, the grid mapping model being used to divide the target face image into grids according to extracted face features, and obtaining the target grid image output by the grid mapping model, wherein the target grid image includes multiple grid regions, each grid region corresponds to a grid identifier, and the grid identifier uniquely identifies one face region;
obtaining the registered grid image corresponding to a registered face image in a face database; and
performing feature comparison between the target grid image and the registered grid image according to the grid identifier corresponding to each grid region, to obtain a face comparison result.
A face comparison apparatus, the apparatus comprising:
a first obtaining module, configured to obtain a target face image to be identified;
a grid mapping module, configured to take the target face image as the input of a grid mapping model, the grid mapping model being used to divide the target face image into grids according to extracted face features, and to obtain the target grid image output by the grid mapping model, wherein the target grid image includes multiple grid regions, each grid region corresponds to a grid identifier, and the grid identifier uniquely identifies one face region;
a second obtaining module, configured to obtain the registered grid image corresponding to a registered face image in a face database; and
a comparison module, configured to perform feature comparison between the target grid image and the registered grid image according to the grid identifier corresponding to each grid region, to obtain a face comparison result.
A computer device, including a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the following steps:
obtaining a target face image to be identified;
taking the target face image as the input of a grid mapping model, the grid mapping model being used to divide the target face image into grids according to extracted face features, and obtaining the target grid image output by the grid mapping model, wherein the target grid image includes multiple grid regions, each grid region corresponds to a grid identifier, and the grid identifier uniquely identifies one face region;
obtaining the registered grid image corresponding to a registered face image in a face database; and
performing feature comparison between the target grid image and the registered grid image according to the grid identifier corresponding to each grid region, to obtain a face comparison result.
A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the following steps:
obtaining a target face image to be identified;
taking the target face image as the input of a grid mapping model, the grid mapping model being used to divide the target face image into grids according to extracted face features, and obtaining the target grid image output by the grid mapping model, wherein the target grid image includes multiple grid regions, each grid region corresponds to a grid identifier, and the grid identifier uniquely identifies one face region;
obtaining the registered grid image corresponding to a registered face image in a face database; and
performing feature comparison between the target grid image and the registered grid image according to the grid identifier corresponding to each grid region, to obtain a face comparison result.
In the above face comparison method, the target face image is taken as the input of the grid mapping model to obtain the output target grid image. The target grid image includes multiple grid regions, each corresponding to a grid identifier; different grid identifiers represent different parts of the face, and the same grid identifier represents the same part. By comparing the target grid image with the registered grid image according to the grid identifier of each grid region, the same part of the face can be compared in a targeted manner through the grid identifiers, and comparison accuracy is not affected by problems such as the angle of the captured face image, which greatly improves the accuracy of the comparison.
Detailed description of the invention
In order to explain the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Wherein:
Fig. 1 is a flowchart of the face comparison method in one embodiment;
Fig. 2 is a schematic diagram of the grid image obtained by processing a face image in one embodiment;
Fig. 3 is a schematic diagram of the grid image obtained by dividing a side-face image into grids, and of its conversion into a standard grid image, in one embodiment;
Fig. 4 is a flowchart of the face comparison method in another embodiment;
Fig. 5 is a flowchart of the method for obtaining a registered face feature vector in one embodiment;
Fig. 6 is a schematic diagram of applying an affine transformation to a face in one embodiment;
Fig. 7 is a structural block diagram of the face comparison apparatus in one embodiment;
Fig. 8 is a structural block diagram of the face comparison apparatus in another embodiment;
Fig. 9 is a structural block diagram of the computer device in one embodiment.
Specific embodiment
The technical solutions in the embodiments of the invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. Based on the embodiments of the invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the invention.
As shown in Fig. 1, in one embodiment, a face comparison method is provided. The face comparison method can be applied to a terminal or to a server, and specifically includes the following steps:
Step 102: obtain a target face image to be identified.
Here, the target face image is the face image to be identified. The target face image may be obtained by calling a camera to shoot directly, or it may be a stored face image that was retrieved. In one embodiment, the target face image is obtained by performing face detection on a captured initial image and then extracting the face image.
In another embodiment, in order to improve the accuracy of subsequent comparison, after the face image is obtained, an affine transformation is applied to the face image according to key feature points in the face image to obtain the target face image; that is, the target face image is the face image after affine transformation. The key feature points may be the centers of the two eyes, the tip of the nose, and the two corners of the mouth.
Step 104: take the target face image as the input of a grid mapping model, where the grid mapping model is used to divide the target face image into grids according to extracted face features, and obtain the target grid image output by the grid mapping model. The target grid image includes multiple grid regions, each grid region corresponds to a grid identifier, and the grid identifier uniquely identifies one face region.
Here, the grid mapping model rasterizes the target face image: it divides the target face image into grids according to the extracted face features. The purpose of grid division is to divide different parts of the face into different grid regions; each grid region corresponds to a grid identifier, which uniquely identifies one face part (face region). Fig. 2 is a schematic diagram of the grid image obtained after rasterizing a face image in one embodiment; the grid image includes multiple cells (i.e., grid regions), and each cell corresponds to a specific face region.
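The grid division and identifier assignment described above can be sketched in a few lines. The function name, the 3 × 3 layout and the toy pixel values below are illustrative assumptions; in the patent the division is produced by a learned grid mapping model driven by face features, not by this simple geometric split.

```python
def divide_into_grid(image, rows, cols):
    """Split a 2-D image (a list of pixel rows) into labeled grid regions.

    Returns a dict mapping grid identifier -> sub-image, so that each
    face region can later be addressed by its unique identifier.
    """
    h, w = len(image), len(image[0])
    cell_h, cell_w = h // rows, w // cols
    cells = {}
    grid_id = 1
    for r in range(rows):
        for c in range(cols):
            cells[grid_id] = [row[c * cell_w:(c + 1) * cell_w]
                              for row in image[r * cell_h:(r + 1) * cell_h]]
            grid_id += 1
    return cells

# a toy 6 x 6 "image" with pixel values 0..35
image = [[y * 6 + x for x in range(6)] for y in range(6)]
cells = divide_into_grid(image, 3, 3)
```

Each of the nine cells is retrievable by its identifier, which is what later allows region-by-region comparison between two faces.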
Step 106: obtain the registered grid image corresponding to a registered face image in the face database.
Here, the registered grid image refers to the grid image obtained after grid division of a registered face image. The registered grid image includes multiple grid regions, and each grid region has a corresponding grid identifier. The face database contains a set of registered face images, which includes multiple registered face images. A registered face image may be a frontal-face image or a non-frontal one.
Step 108: perform feature comparison between the target grid image and the registered grid image according to the grid identifier corresponding to each grid region, to obtain a face comparison result.
Here, the registered grid image also includes multiple grid regions, and each grid region has a corresponding grid identifier. The face parts corresponding to the same grid identifier are the same. Different faces all contain the same number of grid regions, each grid region has a corresponding grid identifier, and the grid identifiers of different faces are in one-to-one correspondence. For example, face A contains 36 cells, each with a corresponding serial number, say 1, 2, 3, ..., 36; face B likewise contains 36 cells with serial numbers 1, 2, 3, ..., 36, and the face regions corresponding to the same serial number are the same. Therefore, once the grid image corresponding to the target face image is known, the same face regions can be compared in a targeted manner according to the grid identifiers, which helps improve comparison accuracy.
Face feature comparison can be done in two ways. One way is to extract the face features of each grid region in the target grid image in a preset grid order to obtain a target face feature vector, obtain the registered face feature vector extracted from the registered grid image in the same grid order, and then compare the target face feature vector with the registered face feature vector to obtain the face comparison result.
The other way is to extract the face features corresponding to each grid region separately, compare the face features of the grid regions with the same grid identifier to obtain a comparison result for each grid region, and finally obtain the final face comparison result by weighted summation.
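The second mode (per-grid comparison followed by weighted summation) can be sketched as follows. The per-grid distance measure and the weight values are illustrative assumptions; the patent leaves both unspecified at this point.

```python
def compare_by_grid(feats_a, feats_b, weights):
    """Compare two faces grid by grid and combine the per-grid results
    by weighted summation (smaller score = more similar faces)."""
    total, weight_sum = 0.0, 0.0
    for grid_id, w in weights.items():
        fa, fb = feats_a[grid_id], feats_b[grid_id]
        # squared Euclidean distance between this grid's feature vectors
        d = sum((x - y) ** 2 for x, y in zip(fa, fb))
        total += w * d
        weight_sum += w
    return total / weight_sum

# identical per-grid features -> score 0.0 (a perfect match)
same = compare_by_grid({1: [1.0, 0.0], 2: [0.0, 1.0]},
                       {1: [1.0, 0.0], 2: [0.0, 1.0]},
                       {1: 0.7, 2: 0.3})
```

Because features are keyed by grid identifier, only corresponding face regions are ever compared, which is the point of the grid identifiers.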
In the above face comparison method, the target face image is taken as the input of the grid mapping model to obtain the output target grid image. The target grid image includes multiple grid regions, each corresponding to a grid identifier, and different grid identifiers represent different parts of the face. By comparing the target grid image with the registered grid image according to the grid identifier of each grid region, the same part of the face can be compared in a targeted manner, and comparison accuracy is not affected by the angle of the captured face image, which greatly improves the accuracy of face comparison.
In one embodiment, the registered grid image is a registered standard grid image, and performing feature comparison between the target grid image and the registered grid image according to the grid identifier corresponding to each grid region to obtain a face comparison result comprises: standardizing the multiple grid regions in the target grid image according to the grid identifier corresponding to each grid region, to obtain a target standard grid image corresponding to the target face image, where the target standard grid image and the registered standard grid image each include a predetermined number of standard cells and each standard cell corresponds to a grid identifier; and performing feature comparison between the target standard grid image and the registered standard grid image to obtain the face comparison result.
Here, standardization means converting the grid regions in the grid image into grid regions of a standard size. Because of differences in face angle in the target face image, the extracted grid regions may differ in size. To improve the accuracy of subsequent face comparison, all grid regions are converted to the standard size by scaling.
If the target face image is not a frontal-face image but a tilted one, for example a face turned to the left, then the two sides of the face appear in different sizes, and correspondingly the cells extracted for different face regions differ in size. Each grid region corresponds to a face region, and the same face region appears at different sizes under different face angles. Fig. 3 shows a schematic diagram of a side-face image after grid division in one embodiment; the cells in the figure differ in size. After standardization, each cell is converted to the standard size and mapped to its corresponding grid region, yielding the target standard grid image.
The registered standard grid image is a grid image containing cells of a fixed size, obtained by converting the cells in the registered grid image into cells of the standard size. To improve comparison accuracy, the target grid image is likewise standardized into a target standard grid image, specifically by scaling the grid regions in the target grid image into standard grid regions of a fixed size. The converted target standard grid image is then compared with the registered standard grid image, which makes the comparison result more accurate.
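Standardization, i.e. scaling each grid region to a fixed size, can be sketched with a nearest-neighbour resize. The interpolation choice is an illustrative assumption; the patent only requires that every cell end up at the standard size.

```python
def to_standard_size(cell, out_h, out_w):
    """Scale one grid region to the standard size (nearest-neighbour)."""
    h, w = len(cell), len(cell[0])
    return [[cell[i * h // out_h][j * w // out_w] for j in range(out_w)]
            for i in range(out_h)]

def standardize(cells, out_h, out_w):
    """Convert every grid region of a grid image to the standard size,
    keeping its grid identifier, to obtain a standard grid image."""
    return {grid_id: to_standard_size(cell, out_h, out_w)
            for grid_id, cell in cells.items()}

# a 2 x 2 cell (as from the narrow side of a turned face) scaled up to 4 x 4
standard = standardize({1: [[1, 2], [3, 4]]}, 4, 4)
```

After this step, cells extracted from differently angled faces all have the same shape, so their features can be compared position by position.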
In one embodiment, performing feature comparison between the target standard grid image and the registered standard grid image to obtain a face comparison result comprises: taking the target standard grid image as the input of a feature extraction model, and obtaining the target face feature vector corresponding to the target standard grid image output by the feature extraction model; obtaining the registered face feature vector corresponding to the registered standard grid image; and performing feature comparison between the target face feature vector and the registered face feature vector to determine the face comparison result.
Here, the feature extraction model performs feature extraction on the face features in the target standard grid image to obtain the target face feature vector, and may be implemented with a deep neural network model. The target standard grid image contains a predetermined number of standard cells; the face features corresponding to each standard cell are extracted separately to obtain per-cell feature vectors, which are then combined into the target face feature vector in a preset order. In one embodiment, suppose there are 6 × 6 cells in total and the feature dimension extracted for each cell is (m × n × k); then the final feature dimension of the 6 × 6 cells is 6 × 6 × m × n × k.
The registered face feature vector is obtained in the same way. To save comparison time, the registered face feature vector is extracted in advance from the registered standard grid image by the feature extraction model and stored in association with the registered face image; in subsequent comparisons, the registered face feature vector can then be obtained directly.
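Assembling the per-cell features into one vector in a preset grid order can be sketched as below; with 6 × 6 cells and m × n × k values per cell the result has length 6 × 6 × m × n × k. The toy 2 × 2 grid and its feature values are assumptions for illustration.

```python
def assemble_feature(grid_feats, order):
    """Concatenate per-cell feature vectors in a fixed grid order,
    so position i of the result always comes from the same cell."""
    vec = []
    for grid_id in order:
        vec.extend(grid_feats[grid_id])
    return vec

# toy case: 2 x 2 grid, 3 feature values (m*n*k = 3) per cell
feats = {1: [0.1, 0.2, 0.3], 2: [0.4, 0.5, 0.6],
         3: [0.7, 0.8, 0.9], 4: [1.0, 1.1, 1.2]}
vector = assemble_feature(feats, order=[1, 2, 3, 4])
```

Using the same `order` for the target and the registered image is what makes the two flat vectors directly comparable.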
As shown in Fig. 4, in one embodiment, a face comparison method is proposed, comprising:
Step 402: obtain a target face image to be identified.
Step 404: take the target face image as the input of a grid mapping model, where the grid mapping model is used to divide the target face image into grids according to extracted face features, and obtain the target grid image output by the grid mapping model; the target grid image includes multiple grid regions, and each grid region corresponds to a grid identifier.
Step 406: standardize the multiple grid regions in the target grid image according to the grid identifier corresponding to each grid region, to obtain a target standard grid image corresponding to the target face image; the target standard grid image includes a predetermined number of standard cells, and each standard cell corresponds to a grid identifier.
Step 408: take the target standard grid image as the input of a feature extraction model, and obtain the target face feature vector corresponding to the target standard grid image output by the feature extraction model.
Step 410: obtain the registered face feature vector corresponding to the registered standard grid image.
Step 412: take the target standard grid image as the input of a grid feature coefficient model, and obtain the target feature coefficient vector output by the grid feature coefficient model; the target feature coefficient vector includes a coefficient corresponding to each grid region.
Here, the grid feature coefficient model sets a corresponding feature coefficient (weight coefficient) for each grid region in the target standard grid image and outputs the target feature coefficient vector. The purpose of the target feature coefficient vector is to amplify distinctive fine features on the face (i.e., enlarge their feature coefficients) and to reduce the coefficients of features that are not distinctive. Some fine features can distinguish very similar faces; for example, a dark mole on a face is a salient characteristic of that person, and increasing the feature coefficient (weight coefficient) of the grid region containing the mole helps improve the accuracy of face comparison.
That is, by setting feature coefficients, the coefficients of credible regions of the face are increased and those of non-credible regions are reduced. For example, due to lighting conditions, the face region corresponding to a grid region at some part of the face may be essentially invisible (the features of that grid region are then not credible), so the feature coefficient of that grid region should be reduced. The grid feature coefficient model therefore increases the weight coefficients of clearly distinctive features, reduces those of non-distinctive features, and also reduces the weight coefficients of regions that are invisible due to lighting, occlusion and the like, which improves the accuracy of subsequent comparison.
The grid feature coefficient model can be obtained by training a deep learning network model: training grid image samples are obtained, the coefficient corresponding to each grid region in a training grid image sample is manually annotated, and the training grid image samples are then used as the input of the grid feature coefficient model with the corresponding annotations as the expected output to train the model, yielding the target grid feature coefficient model.
Step 414: update the target face feature vector and the registered face feature vector according to the target feature coefficient vector, to obtain an updated target face feature vector and an updated registered face feature vector.
Here, the target feature coefficient vector includes the feature coefficient corresponding to each grid region. After the target feature coefficient vector is obtained, the target face feature vector and the registered face feature vector are updated according to it. The update method is to apply the feature coefficient of each grid region in the target feature coefficient vector to the features of the corresponding grid region, i.e., to multiply the feature vector of each grid region by its corresponding feature coefficient, yielding the updated target face feature vector and the updated registered face feature vector. Since the final comparison is between the target face feature vector and the registered face feature vector, both must be updated. For example, if the vector of face A is [1, 2, 3] and the vector of face B in the database is [5, 6, 7], and the target feature coefficient vector obtained after inputting face A into the grid feature coefficient model is [1.0, 0.1, 0.5], then the updated feature vector of A is [1 × 1.0, 2 × 0.1, 3 × 0.5] and the updated feature vector of B is [5 × 1.0, 6 × 0.1, 7 × 0.5].
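The update in step 414 is an element-wise product of a feature vector with the coefficient vector; the numbers below reproduce the worked example in the text.

```python
def apply_coefficients(features, coefficients):
    """Multiply each feature by its grid region's coefficient."""
    return [f * c for f, c in zip(features, coefficients)]

coeffs = [1.0, 0.1, 0.5]                            # target feature coefficient vector
updated_a = apply_coefficients([1, 2, 3], coeffs)   # face A
updated_b = apply_coefficients([5, 6, 7], coeffs)   # face B from the database
```

Both vectors are scaled by the same coefficients, so the subsequent distance computation downweights the same (non-credible or non-distinctive) regions on both sides.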
Step 416: compare the updated target face feature vector with the updated registered face feature vector to determine the comparison result.
Here, the distance between the updated target face feature vector and the updated registered face feature vector is computed, and whether the two are the same face is determined according to the distance, thereby determining the comparison result. The distance may be computed with the Euclidean distance formula, or with other distance formulas such as the chi-square distance or cosine similarity.
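Two of the distance measures named above, Euclidean distance and cosine similarity, can be sketched directly; either works on the updated feature vectors.

```python
import math

def euclidean_distance(u, v):
    """Euclidean distance between two feature vectors (0 = identical)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cosine_similarity(u, v):
    """Cosine similarity: 1.0 for identical directions, lower otherwise."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

Note the polarity difference: a small Euclidean distance means similar faces, while a cosine similarity close to 1.0 does.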
With the above face comparison method, comparison can still be performed accurately even when the face image is partially occluded or poorly lit.
As shown in Fig. 5, in one embodiment, obtaining the registered face feature vector corresponding to a registered face image comprises:
Step 502: take the registered standard grid image as the input of a registration feature coefficient model, where the registration feature coefficient model is used to adjust the weight coefficients corresponding to the grid regions in the registered standard grid image, and obtain the registration feature coefficient vector output by the registration feature coefficient model;
Step 504: take the registered standard grid image as the input of the feature extraction model, and obtain the output initial registration feature vector;
Step 506: determine the registered face feature vector corresponding to the registered face image according to the registration feature coefficient vector and the initial registration feature vector.
Here, the registration feature coefficient model adjusts the weight coefficients of the grid regions in the registered standard grid image corresponding to a registered face image; its training method is the same as that of the grid feature coefficient model. The feature extraction model extracts the features in a grid image to obtain a feature vector. The registered standard grid image is used as the input of the feature extraction model to obtain the initial registration feature vector, which is then adjusted with the registration feature coefficient vector to obtain the registered face feature vector. The purpose of adjusting the initial registration feature vector is to reduce, via the coefficients, the distance between face features of the same person and to increase the distance between face features of different people. Adjusting the initial registration feature vector also enables dynamic adjustment according to the clarity of the registered face; for example, if the lighting of a certain region is poor, the coefficient of that region can be turned down, which facilitates subsequent comparison and improves its accuracy.
In one embodiment, performing feature comparison according to the target face feature vector and the registered face feature vector to determine a face comparison result comprises: computing the feature distance between the target face feature vector and each registered face feature vector; and determining the face comparison result according to the feature distances between the target face feature vector and the registered face feature vectors.
Here, face comparison is performed by computing the feature distance between the target face feature vector and a registered face feature vector: the smaller the feature distance, the more similar the faces, and when the feature distance is less than a preset distance, the two are considered the same person. The feature distance may be computed with the Euclidean distance formula, or with other distance measures. If the distance between the target face image and every registered face image in the registered face database is not less than the preset distance, the target face image is not present in the registered face database. When the computed distances to two or more registered face images are less than the preset distance (as may happen with twins or other multiple births), the registered face image with the minimum distance is taken as the face image closest to the target face image.
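The matching rule above (reject when no registered face is within the preset distance, otherwise take the minimum-distance match) can be sketched as follows; the registry contents and threshold are made-up illustrations.

```python
def match_face(target_vec, registry, threshold):
    """Return the identity of the closest registered face whose feature
    distance is below the threshold, or None if no face qualifies."""
    best_id, best_dist = None, threshold
    for face_id, reg_vec in registry.items():
        dist = sum((a - b) ** 2 for a, b in zip(target_vec, reg_vec)) ** 0.5
        if dist < best_dist:        # keeps the minimum-distance candidate
            best_id, best_dist = face_id, dist
    return best_id

registry = {"alice": [0.0, 0.0], "bob": [1.0, 1.0]}  # toy registered vectors
match = match_face([0.1, 0.0], registry, threshold=0.5)
```

Initializing `best_dist` to the threshold enforces both rules at once: candidates at or beyond the preset distance are never selected, and among those within it only the closest survives.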
In one embodiment, obtaining a target face image to be identified further comprises: obtaining an initial image to be identified, the initial image containing a face; detecting the face in the initial image and extracting the face image; and applying an affine transformation to the face image according to key feature points in the face image to obtain the target face image.
Here, the initial image refers to a captured image containing a face. For face comparison, the face in the initial image is detected and the face image is extracted, and then an affine transformation is applied according to the key feature points in the face image to obtain the target face image. The purpose of the affine transformation is to align the face in a standard way; for example, for a target face image of size 1024 × 1024, the key points (e.g., the centers of the eyes, the tip of the nose and the two corners of the mouth) are aligned to their corresponding standard positions, yielding a standard target face image. Preprocessing the face image by affine transformation improves the accuracy of subsequent comparison. Fig. 6 is a schematic diagram of applying an affine transformation to a face image to obtain the target face image in one embodiment: first the key feature points on the face image to be identified are detected, and then the target face image is obtained after they are transformed to the key feature point positions preset in a standard template.
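An affine transform that maps detected key points onto template positions can be solved exactly from three point pairs (e.g. the two eye centers and the nose tip). The Cramer's-rule sketch below uses made-up coordinates; practical pipelines often fit the transform by least squares over more key points instead.

```python
def affine_from_points(src, dst):
    """Solve the 2-D affine transform mapping three src points to three
    dst points. Returns (a, b, c, d, e, f) with x' = a*x + b*y + c and
    y' = d*x + e*y + f, solved by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = src
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)

    def solve(v1, v2, v3):
        # coefficients of one output coordinate as a function of (x, y, 1)
        p = (v1 * (y2 - y3) - y1 * (v2 - v3) + (v2 * y3 - v3 * y2)) / det
        q = (x1 * (v2 - v3) - v1 * (x2 - x3) + (x2 * v3 - x3 * v2)) / det
        r = (x1 * (y2 * v3 - y3 * v2) - y1 * (x2 * v3 - x3 * v2)
             + v1 * (x2 * y3 - x3 * y2)) / det
        return p, q, r

    a, b, c = solve(dst[0][0], dst[1][0], dst[2][0])
    d, e, f = solve(dst[0][1], dst[1][1], dst[2][1])
    return a, b, c, d, e, f

# detected key points mapped to template positions shifted by (2, 3)
params = affine_from_points([(0, 0), (1, 0), (0, 1)],
                            [(2, 3), (3, 3), (2, 4)])
```

For this pure-translation example the solver recovers identity scaling with offsets c = 2 and f = 3, as expected.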
In one embodiment, the grid mapping model is obtained by training a deep neural network model. Training the grid mapping model includes the following steps: obtaining a training face image set containing multiple training face images; obtaining a face image annotation corresponding to each training face image, the annotation being a training grid image containing multiple grid regions; and training the grid mapping model with the training face images as input and the corresponding training grid images as the desired output, to obtain a trained grid mapping model.
Here, the grid mapping model is obtained by training a deep neural network model, for example a convolutional neural network model. The training proceeds as follows: a training face image set is obtained, together with the face image annotation corresponding to each training face image, the annotation being a training grid image containing multiple grid regions; that is, a grid image that has already been divided into grid regions serves as the annotation. The training face images are then used as the input of the grid mapping model and the corresponding training grid images containing multiple grid regions as the annotations for model training, yielding the trained grid mapping model.
In one embodiment, the general training process is as follows: a loss function is defined; a face image is fed to the grid mapping model to obtain an actual output; the distance between the actual output and the desired output is computed according to the loss function; and the parameters of the grid mapping model are adjusted by gradient descent until the value of the loss function falls below a preset value, i.e. until the gap between the actual output and the desired output becomes very small.
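As a concrete stand-in for this loop (not the patent's actual network), a single linear "model" trained by gradient descent on a mean-squared-error loss shows the structure: compute the actual output, measure its distance to the desired output, and update the parameters until the loss drops below a preset value.

```python
import numpy as np

def train_until_converged(x, y, lr=0.1, eps=1e-6, max_steps=10000):
    """Gradient descent on an MSE loss until it falls below `eps`.

    The "model" here is y_hat = w * x + b; the real grid mapping model
    would be a deep (e.g. convolutional) network, but the loop is the same.
    """
    w, b = 0.0, 0.0
    loss = float("inf")
    for _ in range(max_steps):
        y_hat = w * x + b                      # actual output
        err = y_hat - y
        loss = float(np.mean(err ** 2))        # distance to desired output
        if loss < eps:                         # preset stopping value
            break
        w -= lr * float(np.mean(2 * err * x))  # gradient step on w
        b -= lr * float(np.mean(2 * err))      # gradient step on b
    return w, b, loss
```

On data generated as y = 2x + 1, the loop recovers parameters close to (2, 1) once the loss is below the preset value.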
As shown in Fig. 7, in one embodiment a face comparison apparatus is provided, the apparatus including:
a first obtaining module 702, configured to obtain a target face image to be recognized;
a grid mapping module 704, configured to use the target face image as the input of a grid mapping model, the grid mapping model being configured to divide the target face image into a grid according to extracted face features, and to obtain a target grid image output by the grid mapping model, the target grid image containing multiple grid regions, each grid region corresponding to a grid identifier;
a second obtaining module 706, configured to obtain a registered grid image corresponding to a registered face image in a face database; and
a comparison module 708, configured to perform feature comparison between the target grid image and the registered grid image according to the grid identifier corresponding to each grid region, to obtain a face comparison result.
In one embodiment, the registered grid image is a registered standard grid image, and the comparison module 708 is further configured to standardize the multiple grid regions in the target grid image according to the grid identifier corresponding to each grid region, to obtain a target standard grid image corresponding to the target face image, both the target standard grid image and the registered standard grid image containing a predetermined number of standard grids, each standard grid corresponding to a grid identifier; and to perform feature comparison between the target standard grid image and the registered standard grid image to obtain the face comparison result.
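One plausible reading of the standardization step is that each grid identifier fixes one cell position in a canonical grid, so that labelled regions from any layout land in comparable positions. The sketch below is an assumption-laden illustration, not the patent's implementation; patch sizes and the identifier-to-cell mapping are hypothetical.

```python
import numpy as np

def standardize_grid(regions, rows, cols, cell=4):
    """Place labelled grid regions into a standard grid image.

    `regions` maps a grid identifier (0 .. rows*cols-1) to a
    cell-sized patch. The identifier determines the cell position in
    the standard grid, so two standardized images can be compared
    cell by cell regardless of the original grid layout.
    """
    out = np.zeros((rows * cell, cols * cell))
    for gid, patch in regions.items():
        r, c = divmod(gid, cols)               # identifier -> cell position
        out[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell] = patch
    return out
```

Cells whose identifiers are absent (e.g. occluded regions) stay zero, which pairs naturally with the per-grid weighting described later in the embodiments.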
In one embodiment, the comparison module 708 is further configured to use the target standard grid image as the input of a feature extraction model and obtain a target face feature vector, corresponding to the target standard grid image, output by the feature extraction model; to obtain a registered face feature vector corresponding to the registered standard grid image; and to perform feature comparison between the target face feature vector and the registered face feature vector to determine the face comparison result.
As shown in Fig. 8, in one embodiment the face comparison apparatus further includes:
a characteristic coefficient determining module 710, configured to use the target standard grid image as the input of a grid characteristic coefficient model and obtain a target characteristic coefficient vector output by the grid characteristic coefficient model, the target characteristic coefficient vector containing a characteristic coefficient corresponding to each grid region; and
an update module 712, configured to update the target face feature vector and the registered face feature vector according to the target characteristic coefficient vector, to obtain an updated target face feature vector and an updated registered face feature vector;
the comparison module being further configured to compare the updated target face feature vector with the updated registered face feature vector to determine the comparison result.
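The update-and-compare step can be sketched as scaling both feature vectors by the per-grid characteristic coefficients before taking the distance, for example to down-weight unreliable grid regions. This is an illustrative simplification that assumes one feature value per grid region; the patent's actual vectors and models are not specified at this level of detail.

```python
import numpy as np

def weighted_distance(target_feat, reg_feat, coeffs):
    """Euclidean distance between coefficient-updated feature vectors.

    `coeffs` holds one characteristic coefficient per grid region;
    multiplying both vectors by it zeroes out (or down-weights) the
    contribution of the corresponding regions before comparison.
    """
    c = np.asarray(coeffs, dtype=float)
    t = np.asarray(target_feat, dtype=float) * c   # updated target vector
    r = np.asarray(reg_feat, dtype=float) * c      # updated registered vector
    return float(np.linalg.norm(t - r))
```

Setting a coefficient to zero removes that grid region from the comparison entirely, so two faces differing only in a masked region compare as identical.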
In one embodiment, the comparison module 708 is further configured to use the registered standard grid image as the input of a registration characteristic coefficient model, the registration characteristic coefficient model being configured to adjust the weight coefficients corresponding to the grid regions in the registered standard grid image, and to obtain a registration characteristic coefficient vector output by the registration characteristic coefficient model; to use the registered standard grid image as the input of the feature extraction model and obtain an initial registration feature vector as output; and to determine the registered face feature vector corresponding to the registered face image according to the registration characteristic coefficient vector and the initial registration feature vector.
In one embodiment, the comparison module is further configured to compute the feature distance between the target face feature vector and each registered face feature vector, and to determine the face comparison result according to the feature distances between the target face feature vector and the registered face feature vectors.
In one embodiment, the first obtaining module is further configured to obtain an initial image to be recognized, the initial image containing a face; to detect the face in the initial image and extract a face image; and to apply an affine transformation to the face image according to key feature points in the face image to obtain the target face image.
In one embodiment, the grid mapping model is obtained by training a deep neural network model, and the face comparison apparatus further includes:
a grid mapping model training module, configured to obtain a training face image set containing multiple training face images; to obtain a face image annotation corresponding to each training face image, the annotation being a training grid image containing multiple grid regions; and to train the grid mapping model with the training face images as input and the corresponding training grid images as the desired output, to obtain a trained grid mapping model.
Fig. 9 shows the internal structure of a computer device in one embodiment. The computer device may specifically be a server or a terminal device; the server includes but is not limited to a high-performance computer or a high-performance computer cluster, and the terminal device includes but is not limited to mobile terminal devices and desktop terminal devices, where the mobile terminal devices include but are not limited to mobile phones, tablet computers, smartwatches and laptops, and the desktop terminal devices include but are not limited to desktop computers and in-vehicle computers. As shown in Fig. 9, the computer device includes a processor, a memory and a network interface connected by a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the face comparison method. A computer program may also be stored in the internal memory which, when executed by the processor, causes the processor to perform the face comparison method. Those skilled in the art will understand that the structure shown in Fig. 9 is merely a block diagram of the part of the structure relevant to the present solution and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, the face comparison method provided by the present application may be implemented in the form of a computer program that runs on a computer device as shown in Fig. 9. The memory of the computer device may store the program modules constituting the face comparison apparatus, for example the first obtaining module 702, the grid mapping module 704, the second obtaining module 706 and the comparison module 708.
A computer device includes a memory, a processor and a computer program stored in the memory and runnable on the processor, the processor implementing the following steps when executing the computer program: obtaining a target face image to be recognized; using the target face image as the input of a grid mapping model, the grid mapping model being configured to divide the target face image into a grid according to extracted face features, and obtaining a target grid image output by the grid mapping model, the target grid image containing multiple grid regions, each grid region corresponding to a grid identifier; obtaining a registered grid image corresponding to a registered face image in a face database; and performing feature comparison between the target grid image and the registered grid image according to the grid identifier corresponding to each grid region, to obtain a face comparison result.
In one embodiment, the registered grid image is a registered standard grid image, and performing feature comparison between the target grid image and the registered grid image according to the grid identifier corresponding to each grid region to obtain the face comparison result includes: standardizing the multiple grid regions in the target grid image according to the grid identifier corresponding to each grid region, to obtain a target standard grid image corresponding to the target face image, both the target standard grid image and the registered standard grid image containing a predetermined number of standard grids, each standard grid corresponding to a grid identifier; and performing feature comparison between the target standard grid image and the registered standard grid image to obtain the face comparison result.
In one embodiment, performing feature comparison between the target standard grid image and the registered standard grid image to obtain the face comparison result includes: using the target standard grid image as the input of a feature extraction model and obtaining a target face feature vector, corresponding to the target standard grid image, output by the feature extraction model; obtaining a registered face feature vector corresponding to the registered standard grid image; and performing feature comparison between the target face feature vector and the registered face feature vector to determine the face comparison result.
In one embodiment, the computer program, when executed by the processor, further causes the processor to perform the following steps: using the target standard grid image as the input of a grid characteristic coefficient model and obtaining a target characteristic coefficient vector output by the grid characteristic coefficient model, the target characteristic coefficient vector containing a characteristic coefficient corresponding to each grid region; and updating the target face feature vector and the registered face feature vector according to the target characteristic coefficient vector, to obtain an updated target face feature vector and an updated registered face feature vector; comparing the target face feature vector with the registered face feature vector to determine the face comparison result then includes: comparing the updated target face feature vector with the updated registered face feature vector to determine the comparison result.
In one embodiment, obtaining the registered face feature vector corresponding to the registered face image includes: using the registered standard grid image as the input of a registration characteristic coefficient model, the registration characteristic coefficient model being configured to adjust the weight coefficients corresponding to the grid regions in the registered standard grid image, and obtaining a registration characteristic coefficient vector output by the registration characteristic coefficient model; using the registered standard grid image as the input of the feature extraction model and obtaining an initial registration feature vector as output; and determining the registered face feature vector corresponding to the registered face image according to the registration characteristic coefficient vector and the initial registration feature vector.
In one embodiment, performing feature comparison between the target face feature vector and the registered face feature vector to determine the face comparison result includes: computing the feature distance between the target face feature vector and each registered face feature vector; and determining the face comparison result according to the feature distances between the target face feature vector and the registered face feature vectors.
In one embodiment, obtaining the target face image to be recognized includes: obtaining an initial image to be recognized, the initial image containing a face; detecting the face in the initial image and extracting a face image; and applying an affine transformation to the face image according to key feature points in the face image to obtain the target face image.
In one embodiment, the grid mapping model is obtained by training a deep neural network model; training the grid mapping model includes the following steps: obtaining a training face image set containing multiple training face images; obtaining a face image annotation corresponding to each training face image, the annotation being a training grid image containing multiple grid regions; and training the grid mapping model with the training face images as input and the corresponding training grid images as the desired output, to obtain a trained grid mapping model.
A computer-readable storage medium stores a computer program, characterized in that the computer program, when executed by a processor, implements the following steps: obtaining a target face image to be recognized; using the target face image as the input of a grid mapping model, the grid mapping model being configured to divide the target face image into a grid according to extracted face features, and obtaining a target grid image output by the grid mapping model, the target grid image containing multiple grid regions, each grid region corresponding to a grid identifier; obtaining a registered grid image corresponding to a registered face image in a face database; and performing feature comparison between the target grid image and the registered grid image according to the grid identifier corresponding to each grid region, to obtain a face comparison result.
In one embodiment, the registered grid image is a registered standard grid image, and performing feature comparison between the target grid image and the registered grid image according to the grid identifier corresponding to each grid region to obtain the face comparison result includes: standardizing the multiple grid regions in the target grid image according to the grid identifier corresponding to each grid region, to obtain a target standard grid image corresponding to the target face image, both the target standard grid image and the registered standard grid image containing a predetermined number of standard grids, each standard grid corresponding to a grid identifier; and performing feature comparison between the target standard grid image and the registered standard grid image to obtain the face comparison result.
In one embodiment, performing feature comparison between the target standard grid image and the registered standard grid image to obtain the face comparison result includes: using the target standard grid image as the input of a feature extraction model and obtaining a target face feature vector, corresponding to the target standard grid image, output by the feature extraction model; obtaining a registered face feature vector corresponding to the registered standard grid image; and performing feature comparison between the target face feature vector and the registered face feature vector to determine the face comparison result.
In one embodiment, the computer program, when executed by the processor, further implements the following steps: using the target standard grid image as the input of a grid characteristic coefficient model and obtaining a target characteristic coefficient vector output by the grid characteristic coefficient model, the target characteristic coefficient vector containing a characteristic coefficient corresponding to each grid region; and updating the target face feature vector and the registered face feature vector according to the target characteristic coefficient vector, to obtain an updated target face feature vector and an updated registered face feature vector; comparing the target face feature vector with the registered face feature vector to determine the face comparison result then includes: comparing the updated target face feature vector with the updated registered face feature vector to determine the comparison result.
In one embodiment, obtaining the registered face feature vector corresponding to the registered face image includes: using the registered standard grid image as the input of a registration characteristic coefficient model, the registration characteristic coefficient model being configured to adjust the weight coefficients corresponding to the grid regions in the registered standard grid image, and obtaining a registration characteristic coefficient vector output by the registration characteristic coefficient model; using the registered standard grid image as the input of the feature extraction model and obtaining an initial registration feature vector as output; and determining the registered face feature vector corresponding to the registered face image according to the registration characteristic coefficient vector and the initial registration feature vector.
In one embodiment, performing feature comparison between the target face feature vector and the registered face feature vector to determine the face comparison result includes: computing the feature distance between the target face feature vector and each registered face feature vector; and determining the face comparison result according to the feature distances between the target face feature vector and the registered face feature vectors.
In one embodiment, obtaining the target face image to be recognized includes: obtaining an initial image to be recognized, the initial image containing a face; detecting the face in the initial image and extracting a face image; and applying an affine transformation to the face image according to key feature points in the face image to obtain the target face image.
In one embodiment, the grid mapping model is obtained by training a deep neural network model;
training the grid mapping model includes the following steps: obtaining a training face image set containing multiple training face images; obtaining a face image annotation corresponding to each training face image, the annotation being a training grid image containing multiple grid regions; and training the grid mapping model with the training face images as input and the corresponding training grid images as the desired output, to obtain a trained grid mapping model.
It should be noted that the face comparison method, face comparison apparatus, computer device and computer-readable storage medium described above belong to one general inventive concept, and the content of their respective embodiments is mutually applicable.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware. The program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent of the present application. It should be pointed out that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present patent application shall be subject to the appended claims.

Claims (11)

1. A face comparison method, characterized in that the method includes:
obtaining a target face image to be recognized;
using the target face image as the input of a grid mapping model, the grid mapping model being configured to divide the target face image into a grid according to extracted face features, and obtaining a target grid image output by the grid mapping model, the target grid image containing multiple grid regions, each grid region corresponding to a grid identifier, the grid identifier uniquely identifying one face area;
obtaining a registered grid image corresponding to a registered face image in a face database; and
performing feature comparison between the target grid image and the registered grid image according to the grid identifier corresponding to each grid region, to obtain a face comparison result.
2. The method according to claim 1, characterized in that the registered grid image is a registered standard grid image;
performing feature comparison between the target grid image and the registered grid image according to the grid identifier corresponding to each grid region to obtain the face comparison result includes:
standardizing the multiple grid regions in the target grid image according to the grid identifier corresponding to each grid region, to obtain a target standard grid image corresponding to the target face image, both the target standard grid image and the registered standard grid image containing a predetermined number of standard grids, each standard grid corresponding to a grid identifier; and
performing feature comparison between the target standard grid image and the registered standard grid image to obtain the face comparison result.
3. The method according to claim 2, characterized in that performing feature comparison between the target standard grid image and the registered standard grid image to obtain the face comparison result includes:
using the target standard grid image as the input of a feature extraction model, and obtaining a target face feature vector, corresponding to the target standard grid image, output by the feature extraction model;
obtaining a registered face feature vector corresponding to the registered standard grid image; and
performing feature comparison between the target face feature vector and the registered face feature vector to determine the face comparison result.
4. The method according to claim 3, characterized in that the method further includes:
using the target standard grid image as the input of a grid characteristic coefficient model, and obtaining a target characteristic coefficient vector output by the grid characteristic coefficient model, the target characteristic coefficient vector containing a characteristic coefficient corresponding to each grid region; and
updating the target face feature vector and the registered face feature vector according to the target characteristic coefficient vector, to obtain an updated target face feature vector and an updated registered face feature vector;
wherein comparing the target face feature vector with the registered face feature vector to determine the face comparison result includes:
comparing the updated target face feature vector with the updated registered face feature vector to determine the comparison result.
5. The method according to claim 3, characterized in that obtaining the registered face feature vector corresponding to the registered face image further includes:
using the registered standard grid image as the input of a registration characteristic coefficient model, the registration characteristic coefficient model being configured to adjust the weight coefficients corresponding to the grid regions in the registered standard grid image, and obtaining a registration characteristic coefficient vector output by the registration characteristic coefficient model;
using the registered standard grid image as the input of the feature extraction model, and obtaining an initial registration feature vector as output; and
determining the registered face feature vector corresponding to the registered face image according to the registration characteristic coefficient vector and the initial registration feature vector.
6. The method according to claim 3, characterized in that performing feature comparison between the target face feature vector and the registered face feature vector to determine the face comparison result includes:
computing the feature distance between the target face feature vector and each registered face feature vector; and
determining the face comparison result according to the feature distances between the target face feature vector and the registered face feature vectors.
7. The method according to claim 1, characterized in that obtaining the target face image to be recognized further includes:
obtaining an initial image to be recognized, the initial image containing a face;
detecting the face in the initial image and extracting a face image; and
applying an affine transformation to the face image according to key feature points in the face image to obtain the target face image.
8. The method according to claim 1, characterized in that the grid mapping model is obtained by training a deep neural network model;
training the grid mapping model includes the following steps:
obtaining a training face image set, the training face image set containing multiple training face images;
obtaining a face image annotation corresponding to each training face image, the annotation being a training grid image containing multiple grid regions; and
training the grid mapping model with the training face images as input and the corresponding training grid images as the desired output, to obtain a trained grid mapping model.
9. A face comparison apparatus, characterized in that the apparatus includes:
a first obtaining module, configured to obtain a target face image to be recognized;
a grid mapping module, configured to use the target face image as the input of a grid mapping model, the grid mapping model being configured to divide the target face image into a grid according to extracted face features, and to obtain a target grid image output by the grid mapping model, the target grid image containing multiple grid regions, each grid region corresponding to a grid identifier, the grid identifier uniquely identifying one face area;
a second obtaining module, configured to obtain a registered grid image corresponding to a registered face image in a face database; and
a comparison module, configured to perform feature comparison between the target grid image and the registered grid image according to the grid identifier corresponding to each grid region, to obtain a face comparison result.
10. A computer device, including a memory, a processor and a computer program stored in the memory and runnable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the face comparison method according to any one of claims 1 to 8.
11. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the face comparison method according to any one of claims 1 to 8.
CN201910309442.7A 2019-04-17 2019-04-17 Face comparison method, device, computer equipment and storage medium Pending CN110135268A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910309442.7A CN110135268A (en) 2019-04-17 2019-04-17 Face comparison method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910309442.7A CN110135268A (en) 2019-04-17 2019-04-17 Face comparison method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110135268A true CN110135268A (en) 2019-08-16

Family

ID=67570346

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910309442.7A Pending CN110135268A (en) 2019-04-17 2019-04-17 Face comparison method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110135268A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021073150A1 (en) * 2019-10-16 2021-04-22 Ping An Technology (Shenzhen) Co., Ltd. Data detection method and apparatus, and computer device and storage medium
TWI783723B (en) * 2021-10-08 2022-11-11 Realtek Semiconductor Corp. Character recognition method, character recognition device and non-transitory computer readable medium
CN115623245A (en) * 2022-12-19 2023-01-17 Tanmu Information Technology (Shenzhen) Co., Ltd. Image processing method and device in live video and computer equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101582113A (en) * 2009-06-15 2009-11-18 Jiangsu University Method for identifying face image with identity through layered comparison
CN102799870A (en) * 2012-07-13 2012-11-28 Fudan University Single-training-sample face recognition method based on block-consistency LBP (Local Binary Pattern) and sparse coding
CN106951826A (en) * 2017-02-14 2017-07-14 Tsinghua University Face detection method and device
CN108985232A (en) * 2018-07-18 2018-12-11 Ping An Technology (Shenzhen) Co., Ltd. Facial image comparison method, device, computer equipment and storage medium


Similar Documents

Publication Publication Date Title
CN105917353B (en) Feature extraction and matching for biometric identification, and template update
CN110135268A (en) Face comparison method, device, computer equipment and storage medium
CN110163193A (en) Image processing method, device, computer readable storage medium and computer equipment
CN105844205B (en) Character information recognition method based on image processing
CN106022317A (en) Face identification method and apparatus
CN104867225B (en) Banknote orientation recognition method and device
CN108961675A (en) Fall detection method based on convolutional neural networks
CN107958235A (en) A kind of facial image detection method, device, medium and electronic equipment
CN108629262A (en) Iris identification method and related device
CN112651333B (en) Silence living body detection method, silence living body detection device, terminal equipment and storage medium
CN109145745A (en) Face recognition method under occlusion conditions
CN106951826B (en) Face detection method and device
CN109360190A (en) Building damage detection method and device based on image superpixel fusion
CN110059700A (en) Image moire pattern recognition method, device, computer equipment and storage medium
CN108960344A (en) Difference detection method and device for cultural relic images, and terminal device
CN112733581B (en) Vehicle attribute identification method and system
CN112818821B (en) Human face acquisition source detection method and device based on visible light and infrared light
CN109741232A (en) A kind of image watermark detection method, device and electronic equipment
CN107679469A (en) Non-maximum suppression method based on deep learning
CN110119695A (en) Iris liveness detection method based on feature fusion and machine learning
CN113033305A (en) Living body detection method, living body detection device, terminal equipment and storage medium
CN107944395A (en) Neural-network-based method and system for verifying person-ID consistency
CN110942067A (en) Text recognition method and device, computer equipment and storage medium
CN116188956A (en) Method and related equipment for detecting deep fake face image
CN112488062A (en) Image identification method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200410

Address after: 1706, Fangda Building, No. 011, Keji South 12th Road, High-tech Zone, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen Shuliantianxia Intelligent Technology Co., Ltd.

Address before: Room 1003, Building D, Building 10, Shenzhen Institute of Aerospace Science and Technology, High-tech Zone South, Nanshan District, Shenzhen, Guangdong 518000, China

Applicant before: SHENZHEN H & T HOME ONLINE NETWORK TECHNOLOGY Co., Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20190816