CN107832751A - Method, apparatus and computing device for annotating facial feature points - Google Patents

Method, apparatus and computing device for annotating facial feature points

Info

Publication number
CN107832751A
CN107832751A CN201711351407.9A
Authority
CN
China
Prior art keywords
face model
face
feature point
contour feature
face picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711351407.9A
Other languages
Chinese (zh)
Inventor
肖胜涛
刘洛麒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd
Priority to CN201711351407.9A
Publication of CN107832751A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method, apparatus and computing device for annotating facial feature points. The method includes: obtaining a first face picture and annotated position information of facial contour feature points of the first face picture; performing 3D reconstruction on the first face picture to obtain a first 3D face model; rotating the first 3D face model by a preset angle to obtain a second 3D face model; determining 3D position information of the facial contour feature points in the second 3D face model; projecting the second 3D face model into 2D space to obtain a second face picture, and determining position information of the facial contour feature points in the second face picture according to the 3D position information of the facial contour feature points in the second 3D face model. With this scheme, the annotated position information of the facial contour feature points of a second face picture, rotated by the preset angle, can be obtained from a first face picture whose facial contour feature points have already been annotated, which reduces the difficulty and the cost of annotation.

Description

Method, apparatus and computing device for annotating facial feature points
Technical field
The present invention relates to the field of computer technology, and in particular to a method, apparatus and computing device for annotating facial feature points.
Background technology
Facial feature point localization is a key step in face-based human-computer interaction. Facial feature point information can be used in application scenarios such as expression recognition, pose estimation and face recognition. At the same time, the stability and accuracy of facial feature point localization algorithms depend heavily on annotated facial feature point data.
In the prior art, facial feature points of faces at different angles are annotated by collecting a large number of face images at each angle and annotating them manually, thereby obtaining annotated facial feature point data for the different angles. With this annotation approach, a large number of face images must be re-collected for every face angle and then annotated by hand, so annotation is difficult, costly, time-consuming and labor-intensive.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide a method, apparatus and computing device for annotating facial feature points that overcome the above problems or at least partially solve them.
According to an aspect of the invention, there is provided a method for annotating facial feature points, including:
obtaining a first face picture and annotated position information of facial contour feature points in the first face picture;
performing 3D reconstruction on the first face picture according to the annotated position information of the facial contour feature points in the first face picture, to obtain a first 3D face model;
rotating the first 3D face model by a preset angle to obtain a second 3D face model;
determining 3D position information of the facial contour feature points in the second 3D face model according to position information of each pixel in the second 3D face model;
projecting the second 3D face model into 2D space to obtain a second face picture, and determining position information of the facial contour feature points in the second face picture according to the 3D position information of the facial contour feature points in the second 3D face model.
Further, the face in the first face picture and in the first 3D face model is presented at a first angle; the face in the second face picture and in the second 3D face model is presented at a second angle; and the difference between the first angle and the second angle is the preset angle.
Further, rotating the first 3D face model by a preset angle to obtain the second 3D face model is specifically: rotating the first 3D face model by the preset angle about a preset center point as the rotation center, to obtain the second 3D face model.
Further, before determining the 3D position information of the facial contour feature points in the second 3D face model according to the position information of each pixel in the second 3D face model, the method further includes: determining the position information of each pixel in the second 3D face model according to the position information of the preset center point and the preset angle.
Further, determining the 3D position information of the facial contour feature points in the second 3D face model according to the position information of each pixel in the second 3D face model further includes:
determining, according to the position information of each pixel in the second 3D face model, the 3D position information of the facial organ contour feature points in the second 3D face model, wherein the facial organ contour feature points in the second 3D face model correspond one-to-one to the facial organ contour feature points in the first face picture;
and determining, according to the position information of each pixel in the second 3D face model, the face outline contour feature points in the second 3D face model and their 3D position information.
According to another aspect of the present invention, there is provided an apparatus for annotating facial feature points, including:
an acquisition module, adapted to obtain a first face picture and annotated position information of facial contour feature points in the first face picture;
a reconstruction module, adapted to perform 3D reconstruction on the first face picture according to the annotated position information of the facial contour feature points in the first face picture, to obtain a first 3D face model;
a processing module, adapted to rotate the first 3D face model by a preset angle to obtain a second 3D face model;
a first determining module, adapted to determine 3D position information of the facial contour feature points in the second 3D face model according to position information of each pixel in the second 3D face model;
a projection module, adapted to project the second 3D face model into 2D space to obtain a second face picture;
an annotation module, adapted to determine position information of the facial contour feature points in the second face picture according to the 3D position information of the facial contour feature points in the second 3D face model.
Further, the face in the first face picture and in the first 3D face model is presented at a first angle; the face in the second face picture and in the second 3D face model is presented at a second angle; and the difference between the first angle and the second angle is the preset angle.
Further, the processing module is further adapted to: rotate the first 3D face model by the preset angle about a preset center point as the rotation center, to obtain the second 3D face model.
Further, the apparatus also includes: a second determining module, adapted to determine the position information of each pixel in the second 3D face model according to the position information of the preset center point and the preset angle.
Further, the first determining module is further adapted to:
determine, according to the position information of each pixel in the second 3D face model, the 3D position information of the facial organ contour feature points in the second 3D face model, wherein the facial organ contour feature points in the second 3D face model correspond one-to-one to the facial organ contour feature points in the first face picture;
and determine, according to the position information of each pixel in the second 3D face model, the face outline contour feature points in the second 3D face model and their 3D position information.
According to yet another aspect of the invention, there is provided a computing device, including: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with each other through the communication bus;
the memory is used to store at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the above method for annotating facial feature points.
According to a further aspect of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the above method for annotating facial feature points.
It can thus be seen that, with the method, apparatus and computing device for annotating facial feature points provided by this embodiment, 3D reconstruction can be performed based on a first face picture whose facial contour feature points already have annotated position information, and the reconstructed first 3D face model can be rotated by a preset angle to obtain a second 3D face model at a different angle, which overcomes the prior-art need to re-collect faces at different angles and saves manpower and time. The 3D position information of the facial contour feature points in the second 3D face model is determined according to the position information of each pixel in the second 3D face model, the second 3D face model is projected into 2D space to obtain a second face picture corresponding to the rotation by the preset angle, and the depth information is removed from the 3D position information of the facial contour feature points in the second 3D face model to obtain the position information of the facial contour feature points in 2D space, which is the annotated position information of the facial contour feature points of the second face picture. The annotation process is thus fully automatic, the manual effort of annotation is reduced, and the efficiency of annotation is improved.
The above is only a summary of the technical solution of the present invention. In order that the technical means of the present invention may be more clearly understood and implemented in accordance with the contents of the specification, and in order that the above and other objects, features and advantages of the present invention may become more apparent, specific embodiments of the present invention are set forth below.
Brief description of the drawings
Various other advantages and benefits will become clear to those of ordinary skill in the art by reading the following detailed description of the preferred embodiments. The accompanying drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered as limiting the present invention. Throughout the drawings, the same reference numerals denote the same parts. In the drawings:
Fig. 1 shows a flow chart of a method for annotating facial feature points according to an embodiment of the invention;
Fig. 2 shows a flow chart of a method for annotating facial feature points according to another embodiment of the invention;
Fig. 3 shows a schematic diagram of determining the 3D position information of facial contour feature points in a second 3D face model according to a specific embodiment of the invention;
Fig. 4 shows a functional block diagram of an apparatus for annotating facial feature points according to an embodiment of the invention;
Fig. 5 shows a functional block diagram of an apparatus for annotating facial feature points according to another embodiment of the invention;
Fig. 6 shows a schematic structural diagram of a computing device according to an embodiment of the invention.
Detailed description of embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the disclosure are shown in the drawings, it should be understood that the disclosure may be embodied in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present disclosure will be more thoroughly understood and so that the scope of the disclosure can be fully conveyed to those skilled in the art.
To overcome the problems that annotation of facial feature points in the prior art is difficult, costly, time-consuming and labor-intensive, the present invention proposes an approach that, based on an existing face picture and the annotated position information of its facial contour feature points, annotates the facial contour feature points of face pictures at different angles and obtains the corresponding annotated position information for each angle. For example, using an existing frontal face picture and its annotated position information, the annotated position information of a profile face can be obtained. In the annotation process of the present solution, there is no need to re-collect face images or to annotate them manually in order to annotate the facial contour feature points of faces at different angles; the annotated position information of faces at different angles is obtained automatically, which greatly reduces the annotation cost and saves manpower.
Fig. 1 shows a flow chart of a method for annotating facial feature points according to an embodiment of the invention. As shown in Fig. 1, the method includes the following steps:
Step S101: obtain a first face picture and annotated position information of facial contour feature points in the first face picture.
A first face picture whose facial contour feature points already have annotated position information is obtained, to serve as the base picture from which face pictures at other angles are derived; the annotated position information of the first face picture is obtained to serve as the base information for annotating the face pictures at other angles.
Here, the facial contour feature points include feature points that can represent information such as the face shape, the expression, and/or the facial organs of the face, for example the contour feature points of the facial organs. Correspondingly, the annotated position information of a facial contour feature point is the position of that feature point in the first face picture, for example the coordinate values of an eye contour point in the first face picture. In this embodiment, the way in which the annotated position information of the facial contour feature points of the first face picture is obtained is not specifically limited: it may be obtained by any prior-art way of annotating facial contour feature points, or by the annotation method provided by the present invention; in a specific implementation, those skilled in the art may choose according to actual needs.
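As an illustration only (the patent does not prescribe any data format), the annotated position information of step S101 can be thought of as a mapping from feature points to 2D coordinates in the first face picture; the landmark names and coordinate values below are made up for the example.

```python
import numpy as np

# Hypothetical annotation for a first (frontal) face picture: each facial
# contour feature point is stored as its (x, y) pixel coordinates.
first_picture_landmarks = {
    "left_eye_outer_corner": (112.0, 148.0),
    "right_eye_outer_corner": (208.0, 147.0),
    "nose_tip": (160.0, 196.0),
    "mouth_left_corner": (128.0, 244.0),
    "jaw_left_point": (80.0, 230.0),   # a face-outline (silhouette) point
}

# The same annotation as an (N, 2) array for numeric processing.
landmark_xy = np.array(list(first_picture_landmarks.values()))
print(landmark_xy.shape)  # (5, 2)
```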
Step S102: perform 3D reconstruction on the first face picture according to the annotated position information of the facial contour feature points in the first face picture, to obtain a first 3D face model.
In this step, 3D reconstruction is carried out from the first face picture and its annotated position information to obtain a first 3D face model, so that the face information of the first face corresponding to the first face picture is fully represented. For example, if the first face picture is a frontal image of the first face, information about the side of the face cannot be fully represented in the first face picture, whereas the first 3D face model adds depth information and can fully represent the contour information of the side of the face. Each pixel in the first 3D face model corresponds to one piece of position information in a coordinate system, for example a position coordinate.
Step S103: rotate the first 3D face model by a preset angle to obtain a second 3D face model.
Before a face picture at a different angle can be annotated, a face picture at that angle must first be available. In this step the first 3D face model is rotated to obtain a face model at a different angle, i.e. the second 3D face model, so the process of collecting face pictures at different angles can be saved. By setting different preset angles, or by rotating several times by the same preset angle, face models at any angle can be obtained. In addition, during the rotation of the first 3D face model the coordinate system is kept unchanged, so that the face models or face pictures at different angles share the same reference, and the annotated position information of the face pictures at the different angles is therefore consistent.
Specifically, when the first 3D face model is rotated by the preset angle, each pixel in the first 3D face model is rotated by the preset angle relative to the rotation center, so the position information of each pixel in the second 3D face model can be obtained from the position information of each pixel in the first 3D face model and the preset angle.
Step S104: determine the 3D position information of the facial contour feature points in the second 3D face model according to the position information of each pixel in the second 3D face model.
From the position information of each pixel of the second 3D face model in the coordinate system, and from the correspondence between the facial contour feature points and the pixels of the second 3D face model, the 3D position information of the facial contour feature points in the second 3D face model is determined; in this way the 3D position information of the facial contour feature points is obtained without manual annotation.
Here, the correspondence between the facial contour feature points and the pixels of the second 3D face model may be determined from the correspondence between the facial contour feature points and the pixels of the first 3D face model, and/or from the position information of each pixel in the second 3D face model.
Step S105: project the second 3D face model into 2D space to obtain a second face picture, and determine the position information of the facial contour feature points in the second face picture according to the 3D position information of the facial contour feature points in the second 3D face model.
The second 3D face model is projected into 2D space to obtain the second face picture corresponding to the rotation by the preset angle; the depth information is removed from the 3D position information of the facial contour feature points in the second 3D face model, and the resulting position information of the facial contour feature points in 2D space is the annotated position information of the facial contour feature points of the second face picture.
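A minimal sketch of the projection in step S105, assuming a simple orthographic projection along the depth axis (the patent does not fix a particular projection model): the 2D annotation of the second face picture is obtained by keeping the x and y components of each feature point's 3D position and dropping the depth component. The coordinate values are invented for the example.

```python
import numpy as np

def project_orthographic(points_3d: np.ndarray) -> np.ndarray:
    """Project (N, 3) points of the second 3D face model into 2D space
    by removing the depth information (the z component)."""
    return points_3d[:, :2]

# Example: 3D positions of two facial contour feature points after rotation.
landmarks_3d = np.array([
    [118.5, 148.0, 32.1],
    [160.0, 196.0, 55.4],
])
second_picture_landmarks = project_orthographic(landmarks_3d)
print(second_picture_landmarks)  # annotated (x, y) positions for the second face picture
```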
It can thus be seen that, with the method for annotating facial feature points provided by this embodiment, 3D reconstruction can be performed based on a first face picture whose facial contour feature points already have annotated position information; the reconstructed first 3D face model is rotated by a preset angle to obtain a second 3D face model at a different angle, which overcomes the prior-art need to re-collect faces at different angles and saves manpower and time. The 3D position information of the facial contour feature points in the second 3D face model is determined according to the position information of each pixel in the second 3D face model, the second 3D face model is projected into 2D space to obtain the second face picture corresponding to the rotation by the preset angle, and the depth information is removed from the 3D position information of the facial contour feature points in the second 3D face model to obtain the position information of the facial contour feature points in 2D space, which is the annotated position information of the facial contour feature points of the second face picture. The annotation process is thus fully automatic, the manual effort of annotation is reduced, and the efficiency of annotation is improved.
Fig. 2 shows a flow chart of a method for annotating facial feature points according to another embodiment of the invention. As shown in Fig. 2, the method includes the following steps:
Step S201: obtain a first face picture and annotated position information of facial contour feature points in the first face picture.
A first face picture whose facial contour feature points already have annotated position information, together with that annotated position information, is obtained, so that a second face picture of the face at a different angle can be obtained from the first face picture, and so that the facial contour feature points of the second face picture can be annotated from the annotated position information, yielding the annotated position information of the second face picture.
Step S202: perform 3D reconstruction on the first face picture according to the annotated position information of the facial contour feature points in the first face picture, to obtain a first 3D face model.
According to the annotated position information of the facial contour feature points in the first face picture, information such as the facial organs of the first face corresponding to the first face picture and the position and structure of the face outline is analyzed, and the angle of the first face is thereby analyzed. For example, whether the first face is a frontal face is analyzed from the annotated position information of the contour feature points of the left and right eyes; 3D reconstruction is then carried out on the first face picture according to the analysis result, obtaining the first 3D face model.
The pixels in the first 3D face model fall into two classes. The first class consists of pixels that correspond to facial contour feature points, for example pixels corresponding to facial organ contour feature points and/or pixels corresponding to face outline contour feature points; the second class consists of ordinary pixels, i.e. pixels that do not correspond to any facial contour feature point. The position information of a facial contour feature point in the first 3D face model is denoted its 3D position information; it consists of the annotated position information plus depth information, which together form the coordinate position of the feature point in the three-dimensional coordinate system. For example, if the annotated position information of facial contour feature point 1 is (x1, y1), and pixel a in the first 3D face model is the pixel corresponding to contour feature point 1 with position information (x1, y1, z1), then the 3D position information of facial contour feature point 1 is (x1, y1, z1), where z1 is the depth information. The position information of a second-class pixel is simply the coordinate position of that pixel in the three-dimensional coordinate system.
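As a small illustration of the relation just described, assuming the reconstruction has already supplied a depth value for the annotated pixel (the numeric values are invented for the example), the 3D position information of a feature point is its 2D annotated position with the depth appended:

```python
import numpy as np

# Annotated 2D position of facial contour feature point 1 in the first picture.
x1, y1 = 112.0, 148.0
# Depth of the corresponding pixel a, assumed to come from the 3D reconstruction.
z1 = 37.5

# 3D position information of feature point 1 in the first 3D face model.
feature_point_1_3d = np.array([x1, y1, z1])
print(feature_point_1_3d)  # [112.  148.   37.5]
```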
Step S203: rotate the first 3D face model by the preset angle about a preset center point as the rotation center, to obtain a second 3D face model.
In this step, the first 3D face model is rotated by the preset angle about the preset center point, obtaining a second 3D face model at an angle different from that of the first face; the first face picture at a first angle can thus be used to obtain a second 3D face model at a second angle, so the process of collecting face pictures at different angles can be saved. The preset center point may be any point in the first 3D face model; usually, to simplify calculating the position information of each pixel in the second 3D face model, the preset center point is taken as a point on the symmetry axis of the face, or as the coordinate origin. The direction of rotation may be determined according to the annotation requirements.
In addition, in this embodiment, the face in the first face picture and in the first 3D face model is presented at a first angle; the face in the second face picture and in the second 3D face model is presented at a second angle; and the difference between the first angle and the second angle is the preset angle.
Step S204: determine the position information of each pixel in the second 3D face model according to the position information of the preset center point and the preset angle.
After the second 3D face model is obtained, the position information of each pixel in the second 3D face model is determined, so that the 3D position information of the facial contour feature points in the second 3D face model can be determined from the position information of each pixel and the correspondence between the facial contour feature points and the pixels.
Specifically, the position information of each pixel in the second 3D face model is determined according to the position information of the preset center point, the preset angle, the direction of rotation and/or the position information of each pixel in the first 3D face model. For example, suppose the coordinate origin is at the point midway between the two eyes of the first 3D face model, the preset center point is also that point, and the first 3D face model is rotated clockwise by 60 degrees; if the position information of pixel b in the first 3D face model is (4, 0, 0), then the position information of pixel b in the second 3D face model is (2, 0, 2√3). Following this principle, the position information of each pixel in the second 3D face model is determined; optionally, to simplify determining the position information of each pixel in the second 3D face model, a rotation matrix may be introduced for the calculation.
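A minimal sketch of the rotation-matrix calculation mentioned above, assuming the rotation is a horizontal rotation about the vertical (y) axis through the preset center point; the axis choice, sign convention and numbers are only for illustration. It reproduces the worked example: with the preset center point at the origin, the pixel at (4, 0, 0) rotated by 60 degrees moves to (2, 0, 2√3) ≈ (2, 0, 3.46).

```python
import numpy as np

def rotate_about_center(points: np.ndarray, angle_deg: float,
                        center: np.ndarray) -> np.ndarray:
    """Rotate (N, 3) pixel positions of the first 3D face model by a preset
    angle about the vertical axis through the preset center point."""
    t = np.deg2rad(angle_deg)
    # Rotation matrix about the y axis (one possible sign convention).
    r = np.array([[ np.cos(t), 0.0, -np.sin(t)],
                  [ 0.0,       1.0,  0.0      ],
                  [ np.sin(t), 0.0,  np.cos(t)]])
    return (points - center) @ r.T + center

center = np.array([0.0, 0.0, 0.0])          # preset center point
pixel_b = np.array([[4.0, 0.0, 0.0]])       # pixel b in the first 3D face model
print(rotate_about_center(pixel_b, 60.0, center))  # ~[[2.     0.     3.4641]]
```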
Step S205: determine the 3D position information of the facial contour feature points in the second 3D face model according to the position information of each pixel in the second 3D face model.
From the position information of each pixel in the second 3D face model, and from the correspondence between the facial contour feature points and the pixels of the second 3D face model, the 3D position information of the facial contour feature points in the second 3D face model is determined; in this way the 3D position information of the facial contour feature points is obtained without manual annotation.
As with the first 3D face model, the pixels in the second 3D face model also fall into two classes: the first class consists of pixels corresponding to facial contour feature points, for example facial organ contour feature points and/or face outline contour feature points; the second class consists of ordinary pixels, i.e. pixels that do not correspond to any facial contour feature point. In the present solution for annotating facial contour feature points, it is mainly the position information of the first-class pixels that needs to be determined, that is, the 3D position information of the facial contour feature points in the second 3D face model.
Specifically, because the correspondence between the feature points and the pixels behaves differently for the two kinds of feature points, the 3D position information of the facial contour feature points in the second 3D face model is determined in two parts:
Part I: determine the 3D position information of the facial organ contour feature points. According to the position information of each pixel in the second 3D face model, the 3D position information of the facial organ contour feature points in the second 3D face model is determined, wherein the facial organ contour feature points in the second 3D face model correspond one-to-one to the facial organ contour feature points in the first face picture. Specifically, the correspondence between a facial organ contour feature point and its pixel in the first 3D face model and in the second 3D face model is fixed; only the position information of the pixel changes, and the 3D position information of the facial organ contour feature point changes with it. Fig. 3 shows a schematic diagram of determining the 3D position information of the facial contour feature points in the second 3D face model according to a specific embodiment of the invention. As shown in Fig. 3, the left side of the figure is the first 3D face model; the first 3D face model is rotated horizontally by the preset angle to obtain the second 3D face model on the right. Eyebrow contour feature point 1 of the first face picture corresponds to pixel b in the first 3D face model; in the second 3D face model, pixel b is still the pixel corresponding to the same eyebrow contour feature point 1, and only the 3D position information of eyebrow contour feature point 1 has changed, for example from (4, 0, 0) to (2, 0, 2√3). Therefore, in this embodiment, the 3D position information of a facial organ contour feature point is determined from the pixel corresponding to that feature point in the first 3D face model and from the position information of that pixel in the second 3D face model.
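A short sketch of Part I under the assumption that the per-feature-point pixel correspondence is kept as an index into the model's pixel array (the index and coordinate values are invented): because the correspondence is fixed, the 3D position of an organ contour feature point after rotation is simply the rotated position of the same pixel.

```python
import numpy as np

# Rotated per-pixel positions of the second 3D face model, shape (num_pixels, 3).
rotated_pixels = np.array([[2.0,  0.0, 3.46],
                           [1.8, -0.5, 3.51],
                           [2.2,  0.4, 3.40]])

# Fixed correspondence: eyebrow contour feature point 1 <-> pixel b (index 0 here).
pixel_b_index = 0
eyebrow_point_1_3d = rotated_pixels[pixel_b_index]
print(eyebrow_point_1_3d)  # 3D position information of feature point 1 after rotation
```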
Part II: determine the 3D position information of the face outline contour feature points. According to the position information of each pixel in the second 3D face model, the face outline contour feature points in the second 3D face model and their 3D position information are determined. Specifically, the face outline contour feature points generally refer to the feature points on the outermost contour line of the face when the face is viewed directly. It follows that, as the face angle changes, the correspondence between a face outline contour feature point and the pixels of the first and second 3D face models changes, i.e. the correspondence is not fixed. Taking Fig. 3 as an example again, pixel a is the pixel corresponding to face outline contour feature point 2 in the first 3D face model, but in the second 3D face model pixel a is no longer on the outermost contour line of the face; correspondingly, the pixel corresponding to face outline contour feature point 2 moves rightward from pixel a to pixel c. Similarly, the pixel corresponding to face outline contour feature point 3 moves rightward from pixel d to pixel e. Therefore, in this embodiment, the face outline contour feature points and their 3D position information are determined from the position information, for example the coordinate values, of each pixel in the second 3D face model.
More specifically, multiple pixels on the face outline contour line are determined from the magnitudes and the variation trend of the coordinate values of the pixels in the second 3D face model, and, with reference to the 3D position information of the face outline contour feature points in the first 3D face model, the pixel corresponding to a face outline contour feature point is selected from these pixels and taken as that feature point. Taking Fig. 3 as an example, assume that the vertical downward direction is the positive direction of the vertical axis and the horizontal rightward direction is the positive direction of the horizontal axis. One way of selecting the pixel corresponding to face outline contour feature point 3 is then: determine multiple pixels on the face outline contour line from the magnitudes and the variation trend of the coordinate values of the pixels in the second 3D face model; then, using the vertical coordinate value in the 3D position information of face outline contour feature point 3 in the first 3D face model, i.e. the vertical coordinate value of pixel d, find among the pixels on the outline contour line the pixels whose vertical coordinate value equals that of pixel d, select from these pixels the pixel e with the smallest horizontal coordinate value, and take pixel e as the pixel corresponding to face outline contour feature point 3 in the second 3D face model. After the face outline contour feature point is determined, its 3D position information is determined from the position information of the corresponding pixel in the second 3D face model; for example, the coordinate values of pixel e are taken as the 3D position information of face outline contour feature point 3.
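A minimal sketch of the selection rule just described for Part II, under the stated assumptions (downward is the positive vertical axis, rightward is the positive horizontal axis, and a candidate set of outline pixels has already been extracted); the tolerance used when comparing vertical coordinates is an implementation choice, not something the patent specifies, and the candidate values are invented.

```python
import numpy as np

def select_outline_point(outline_pixels: np.ndarray, ref_y: float,
                         tol: float = 0.5) -> np.ndarray:
    """Among pixels on the face outline contour line (shape (M, 3)), keep those
    whose vertical coordinate matches the reference feature point's vertical
    coordinate from the first 3D face model, then pick the one with the
    smallest horizontal coordinate (the leftmost), as for pixel e in Fig. 3."""
    same_row = outline_pixels[np.abs(outline_pixels[:, 1] - ref_y) <= tol]
    return same_row[np.argmin(same_row[:, 0])]

# Invented candidate pixels on the outline contour line of the second model.
outline_pixels = np.array([[3.1, 10.0, 1.2],
                           [2.4, 10.1, 1.5],   # leftmost pixel in this row -> selected
                           [3.8, 12.0, 0.9]])
ref_y = 10.0  # vertical coordinate of pixel d in the first 3D face model
print(select_outline_point(outline_pixels, ref_y))  # -> [ 2.4 10.1  1.5]
```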
Step S206: project the second 3D face model into 2D space to obtain a second face picture, and determine the position information of the facial contour feature points in the second face picture according to the 3D position information of the facial contour feature points in the second 3D face model.
The second 3D face model is projected into 2D space to obtain the second face picture corresponding to the rotation by the preset angle; the depth information is removed from the 3D position information of the facial contour feature points in the second 3D face model, and the resulting position information of the facial contour feature points in 2D space is the annotated position information of the facial contour feature points of the second face picture.
It can thus be seen that, with the method for annotating facial feature points provided by this embodiment, 3D reconstruction can be performed based on a first face picture whose facial contour feature points already have annotated position information, and the reconstructed first 3D face model can be rotated by a preset angle to obtain a second 3D face model at a different angle, which overcomes the prior-art need to re-collect faces at different angles and saves manpower and time. The position information of each pixel in the second 3D face model is determined from the position information of the preset center point and the preset angle, and then, according to how the correspondence between the facial contour feature points and the pixels behaves at different face angles, determining the 3D position information of the facial contour feature points in the second 3D face model is split into determining the 3D position information of the facial organ contour feature points and determining the 3D position information of the face outline contour feature points. The second 3D face model is projected into 2D space to obtain the second face picture corresponding to the rotation by the preset angle; the depth information is removed from the 3D position information of the facial contour feature points in the second 3D face model, and the resulting position information of the facial contour feature points in 2D space is the annotated position information of the facial contour feature points of the second face picture. The annotation process is thus fully automatic, the manual effort of annotation is reduced, and the efficiency of annotation is improved.
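Putting the steps of this embodiment together, the sketch below shows, under the same assumptions as the earlier snippets (a rotation about the vertical axis through the preset center point, landmark-to-pixel correspondences kept as indices, and an orthographic projection), how a rotated face model yields the new annotations; it starts from an already reconstructed first 3D face model, since the reconstruction of step S202 is outside its scope.

```python
import numpy as np

def annotate_after_rotation(model_points: np.ndarray,
                            landmark_indices: np.ndarray,
                            angle_deg: float,
                            center: np.ndarray):
    """Steps S203-S206 in miniature: rotate the first 3D face model about the
    preset center point, read off the rotated 3D positions of the feature
    points, and drop the depth to obtain the 2D annotations of the second
    face picture."""
    t = np.deg2rad(angle_deg)
    r = np.array([[ np.cos(t), 0.0, -np.sin(t)],
                  [ 0.0,       1.0,  0.0      ],
                  [ np.sin(t), 0.0,  np.cos(t)]])
    rotated = (model_points - center) @ r.T + center      # S203/S204
    landmarks_3d = rotated[landmark_indices]              # S205 (organ points)
    landmarks_2d = landmarks_3d[:, :2]                    # S206: remove depth
    return rotated, landmarks_2d
```

Face outline contour feature points would additionally need the re-selection described in Part II above, since their pixel correspondence is not fixed after rotation.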
With the method for annotating facial feature points provided by the above method embodiments of the present invention, the annotated position information of the facial contour feature points of a face at the second angle can be obtained; by varying the preset angle, or the angle of the first face, the annotated position information of faces at every different angle can be obtained. The faces at all angles obtained by rotation according to the present invention, together with the corresponding annotated position information, can be used as training samples, making the training data more comprehensive, so as to obtain feature point localization algorithms suitable for applications such as live streaming, cameras and beautification. The resulting feature point localization algorithms are therefore more accurate and can locate facial contour feature points more accurately in real time in applications such as live streaming, cameras and beautification, for example accurately recognizing the lips, which in turn helps such applications realize specific functions, for example applying makeup to a face.
Fig. 4 shows a functional block diagram of an apparatus for annotating facial feature points according to an embodiment of the invention. As shown in Fig. 4, the apparatus includes: an acquisition module 401, a reconstruction module 402, a processing module 403, a first determining module 404, a projection module 405 and an annotation module 406.
The acquisition module 401 is adapted to obtain a first face picture and annotated position information of facial contour feature points in the first face picture;
the reconstruction module 402 is adapted to perform 3D reconstruction on the first face picture according to the annotated position information of the facial contour feature points in the first face picture, to obtain a first 3D face model;
the processing module 403 is adapted to rotate the first 3D face model by a preset angle to obtain a second 3D face model;
the first determining module 404 is adapted to determine 3D position information of the facial contour feature points in the second 3D face model according to position information of each pixel in the second 3D face model;
the projection module 405 is adapted to project the second 3D face model into 2D space to obtain a second face picture;
the annotation module 406 is adapted to determine position information of the facial contour feature points in the second face picture according to the 3D position information of the facial contour feature points in the second 3D face model.
Fig. 5 shows a functional block diagram of an apparatus for annotating facial feature points according to another embodiment of the invention. As shown in Fig. 5, on the basis of Fig. 4 the apparatus also includes: a second determining module 501.
The face in the first face picture and in the first 3D face model is presented at a first angle; the face in the second face picture and in the second 3D face model is presented at a second angle; and the difference between the first angle and the second angle is the preset angle.
The processing module 403 is further adapted to: rotate the first 3D face model by the preset angle about a preset center point as the rotation center, to obtain the second 3D face model.
The second determining module 501 is adapted to determine the position information of each pixel in the second 3D face model according to the position information of the preset center point and the preset angle.
The first determining module 404 is further adapted to:
determine, according to the position information of each pixel in the second 3D face model, the 3D position information of the facial organ contour feature points in the second 3D face model, wherein the facial organ contour feature points in the second 3D face model correspond one-to-one to the facial organ contour feature points in the first face picture;
and determine, according to the position information of each pixel in the second 3D face model, the face outline contour feature points in the second 3D face model and their 3D position information.
For the specific structure and working principle of the above modules, reference may be made to the description of the corresponding steps in the method embodiments, which will not be repeated here.
An embodiment of the present application provides a non-volatile computer storage medium storing at least one executable instruction, and the computer executable instruction can perform the method for annotating facial feature points in any of the above method embodiments.
Fig. 6 shows a schematic structural diagram of a computing device according to an embodiment of the invention; the specific embodiments of the invention do not limit the specific implementation of the computing device.
As shown in Fig. 6, the computing device may include: a processor 602, a communication interface 604, a memory 606 and a communication bus 608.
Wherein:
the processor 602, the communication interface 604 and the memory 606 communicate with each other through the communication bus 608.
The communication interface 604 is used for communicating with network elements of other devices such as clients or other servers.
The processor 602 is used to execute a program 610, and may specifically perform the relevant steps in the above embodiments of the method for annotating facial feature points.
Specifically, the program 610 may include program code, and the program code includes computer operation instructions.
The processor 602 may be a central processing unit (CPU), or an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the computing device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 606 is used to store the program 610. The memory 606 may include a high-speed RAM memory, and may also include a non-volatile memory, for example at least one magnetic disk memory.
The program 610 may specifically be used to cause the processor 602 to perform the following operations:
obtaining a first face picture and annotated position information of facial contour feature points in the first face picture;
performing 3D reconstruction on the first face picture according to the annotated position information of the facial contour feature points in the first face picture, to obtain a first 3D face model;
rotating the first 3D face model by a preset angle to obtain a second 3D face model;
determining 3D position information of the facial contour feature points in the second 3D face model according to position information of each pixel in the second 3D face model;
projecting the second 3D face model into 2D space to obtain a second face picture, and determining position information of the facial contour feature points in the second face picture according to the 3D position information of the facial contour feature points in the second 3D face model.
In an optional mode, the face in the first face picture and in the first 3D face model is presented at a first angle; the face in the second face picture and in the second 3D face model is presented at a second angle; and the difference between the first angle and the second angle is the preset angle.
In an optional mode, the program 610 may further be used to cause the processor 602 to perform the following operation: rotating the first 3D face model by the preset angle about a preset center point as the rotation center, to obtain the second 3D face model.
In an optional mode, the program 610 may further be used to cause the processor 602 to perform the following operation: determining the position information of each pixel in the second 3D face model according to the position information of the preset center point and the preset angle.
In an optional mode, the program 610 may further be used to cause the processor 602 to perform the following operations:
determining, according to the position information of each pixel in the second 3D face model, the 3D position information of the facial organ contour feature points in the second 3D face model, wherein the facial organ contour feature points in the second 3D face model correspond one-to-one to the facial organ contour feature points in the first face picture;
and determining, according to the position information of each pixel in the second 3D face model, the face outline contour feature points in the second 3D face model and their 3D position information.
The algorithms and displays provided herein are not inherently related to any particular computer, virtual system or other equipment. Various general-purpose systems may also be used with the teaching based hereon. The structure required to construct such systems is apparent from the above description. Moreover, the present invention is not directed to any particular programming language. It should be understood that the content of the invention described herein may be implemented using various programming languages, and that the above description of a specific language is made in order to disclose the best mode of the invention.
In the specification provided here, numerous specific details are set forth. It should be understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be appreciated that, in order to streamline the disclosure and to aid the understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the invention the features of the invention are sometimes grouped together into a single embodiment, figure or description thereof. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will appreciate that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from the embodiment. The modules or units or components of an embodiment may be combined into one module or unit or component, and in addition they may be divided into a plurality of sub-modules or sub-units or sub-components. Except that at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, an equivalent or a similar purpose.
Furthermore, those skilled in the art will understand that, although some embodiments described herein include some features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in the apparatus for annotating facial feature points according to embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the method described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may be in the form of one or more signals. Such signals may be downloaded from an Internet website, or provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere use of the words first, second and third does not indicate any ordering; these words may be interpreted as names.

Claims (10)

1. A method for annotating facial feature points, including:
obtaining a first face picture and annotated position information of facial contour feature points in the first face picture;
performing 3D reconstruction on the first face picture according to the annotated position information of the facial contour feature points in the first face picture, to obtain a first 3D face model;
rotating the first 3D face model by a preset angle to obtain a second 3D face model;
determining 3D position information of the facial contour feature points in the second 3D face model according to position information of each pixel in the second 3D face model;
projecting the second 3D face model into 2D space to obtain a second face picture, and determining position information of the facial contour feature points in the second face picture according to the 3D position information of the facial contour feature points in the second 3D face model.
2. The method according to claim 1, wherein the face in the first face picture and in the first 3D face model is presented at a first angle; the face in the second face picture and in the second 3D face model is presented at a second angle; and the difference between the first angle and the second angle is the preset angle.
3. The method according to claim 1 or 2, wherein rotating the first 3D face model by a preset angle to obtain the second 3D face model is specifically: rotating the first 3D face model by the preset angle about a preset center point as the rotation center, to obtain the second 3D face model.
4. The method according to claim 3, wherein, before determining the 3D position information of the facial contour feature points in the second 3D face model according to the position information of each pixel in the second 3D face model, the method further includes: determining the position information of each pixel in the second 3D face model according to the position information of the preset center point and the preset angle.
5. The method according to any one of claims 1-4, wherein determining the 3D position information of the facial contour feature points in the second 3D face model according to the position information of each pixel in the second 3D face model further includes:
determining, according to the position information of each pixel in the second 3D face model, the 3D position information of the facial organ contour feature points in the second 3D face model, wherein the facial organ contour feature points in the second 3D face model correspond one-to-one to the facial organ contour feature points in the first face picture;
and determining, according to the position information of each pixel in the second 3D face model, the face outline contour feature points in the second 3D face model and their 3D position information.
6. An apparatus for annotating facial feature points, including:
an acquisition module, adapted to obtain a first face picture and annotated position information of facial contour feature points in the first face picture;
a reconstruction module, adapted to perform 3D reconstruction on the first face picture according to the annotated position information of the facial contour feature points in the first face picture, to obtain a first 3D face model;
a processing module, adapted to rotate the first 3D face model by a preset angle to obtain a second 3D face model;
a first determining module, adapted to determine 3D position information of the facial contour feature points in the second 3D face model according to position information of each pixel in the second 3D face model;
a projection module, adapted to project the second 3D face model into 2D space to obtain a second face picture;
an annotation module, adapted to determine position information of the facial contour feature points in the second face picture according to the 3D position information of the facial contour feature points in the second 3D face model.
7. The apparatus according to claim 6, wherein the face in the first face picture and in the first 3D face model is presented at a first angle; the face in the second face picture and in the second 3D face model is presented at a second angle; and the difference between the first angle and the second angle is the preset angle.
8. The apparatus according to claim 6 or 7, wherein the processing module is further adapted to: rotate the first 3D face model by the preset angle about a preset center point as the rotation center, to obtain the second 3D face model.
9. A computing device, including: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with each other through the communication bus;
the memory is used to store at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the method for annotating facial feature points according to any one of claims 1-5.
10. A computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the method for annotating facial feature points according to any one of claims 1-5.
CN201711351407.9A 2017-12-15 2017-12-15 Method, apparatus and computing device for annotating facial feature points Pending CN107832751A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711351407.9A CN107832751A (en) 2017-12-15 2017-12-15 Method, apparatus and computing device for annotating facial feature points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711351407.9A CN107832751A (en) 2017-12-15 2017-12-15 Method, apparatus and computing device for annotating facial feature points

Publications (1)

Publication Number Publication Date
CN107832751A (en) 2018-03-23

Family

ID=61644920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711351407.9A Pending CN107832751A (en) 2017-12-15 2017-12-15 Mask method, device and the computing device of human face characteristic point

Country Status (1)

Country Link
CN (1) CN107832751A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070152037A1 (en) * 2005-12-29 2007-07-05 Industrial Technology Research Institute Three-dimensional face recognition system and method
CN101320484A (en) * 2008-07-17 2008-12-10 清华大学 Three-dimensional human face recognition method based on human face full-automatic positioning
CN104036546A (en) * 2014-06-30 2014-09-10 清华大学 Method for carrying out face three-dimensional reconstruction at any viewing angle on basis of self-adaptive deformable model
CN105184283A (en) * 2015-10-16 2015-12-23 天津中科智能识别产业技术研究院有限公司 Method and system for marking key points in human face images
CN106022267A (en) * 2016-05-20 2016-10-12 北京师范大学 Automatic positioning method of weak feature point of three-dimensional face model
CN106203400A (en) * 2016-07-29 2016-12-07 广州国信达计算机网络通讯有限公司 A kind of face identification method and device
CN107146199A (en) * 2017-05-02 2017-09-08 厦门美图之家科技有限公司 A kind of fusion method of facial image, device and computing device
CN107423689A (en) * 2017-06-23 2017-12-01 中国科学技术大学 Intelligent interactive face key point mask method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SEONG-GYUN JEONG et al.: "Marked point process model for facial wrinkle detection", 2014 IEEE International Conference on Image Processing (ICIP) *
LIU, Jiefeng et al.: "Dayang Non-linear Editing Practical Course (Advanced Volume)", 31 October 2016, Beijing: Communication University of China Press *
ZHAO, Ning: "Automatic Annotation of Facial Feature Points and Expression Generation", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109086798A (en) * 2018-07-03 2018-12-25 迈吉客科技(北京)有限公司 A kind of data mask method and annotation equipment
WO2021003964A1 (en) * 2019-07-05 2021-01-14 深圳云天励飞技术有限公司 Method and apparatus for face shape recognition, electronic device and storage medium
CN113128320A (en) * 2020-01-16 2021-07-16 浙江舜宇智能光学技术有限公司 Face living body detection method and device based on TOF camera and electronic equipment
CN113128320B (en) * 2020-01-16 2023-05-16 浙江舜宇智能光学技术有限公司 Human face living body detection method and device based on TOF camera and electronic equipment
CN111695628A (en) * 2020-06-11 2020-09-22 北京百度网讯科技有限公司 Key point marking method and device, electronic equipment and storage medium
CN111695628B (en) * 2020-06-11 2023-05-05 北京百度网讯科技有限公司 Key point labeling method and device, electronic equipment and storage medium
CN112101257A (en) * 2020-09-21 2020-12-18 北京字节跳动网络技术有限公司 Training sample generation method, image processing method, device, equipment and medium
CN112101257B (en) * 2020-09-21 2022-05-31 北京字节跳动网络技术有限公司 Training sample generation method, image processing method, device, equipment and medium
CN114398118A (en) * 2021-12-21 2022-04-26 深圳市易图资讯股份有限公司 Intelligent positioning system and method for smart city based on space anchor

Similar Documents

Publication Publication Date Title
CN107832751A (en) Mask method, device and the computing device of human face characteristic point
EP4120199A1 (en) Image rendering method and apparatus, and electronic device and storage medium
CN104376594B (en) Three-dimensional face modeling method and device
CN108875524A (en) Gaze estimation method, device, system and storage medium
CN107481218B (en) Image aesthetic feeling evaluation method and device
WO2021233017A1 (en) Image processing method and apparatus, and device and computer-readable storage medium
CN108320325A (en) The generation method and device of dental arch model
CN105139007B (en) Man face characteristic point positioning method and device
CN103617615A (en) Radial distortion parameter obtaining method and obtaining device
CN108876893A (en) Method, apparatus, system and the computer storage medium of three-dimensional facial reconstruction
CN107452061A (en) Generation method, device, equipment and the computer-readable recording medium of building model based on oblique photograph technology
US10922852B2 (en) Oil painting stroke simulation using neural network
US11423617B2 (en) Subdividing a three-dimensional mesh utilizing a neural network
CN107784321A (en) Numeral paints this method for quickly identifying, system and computer-readable recording medium
CN109300188A (en) Threedimensional model processing method and processing device
CN104299241A (en) Remote sensing image significance target detection method and system based on Hadoop
CN112132812B (en) Certificate verification method and device, electronic equipment and medium
CN112330815A (en) Three-dimensional point cloud data processing method, device and equipment based on obstacle fusion
CN108573192B (en) Glasses try-on method and device matched with human face
Xiao Research on visual image texture rendering for artistic aided design
WO2023005934A1 (en) Data processing method and system, and electronic device
CN113361380B (en) Human body key point detection model training method, detection method and device
CN114638867A (en) Point cloud registration method and system based on feature extraction module and dual quaternion
CN113593007A (en) Single-view three-dimensional point cloud reconstruction method and system based on variational self-coding
CN109658489B (en) Three-dimensional grid data processing method and system based on neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2018-03-23)