CN108463823A - Method, apparatus, and terminal for reconstructing a user's hair model - Google Patents
- Publication number: CN108463823A
- Application number: CN201680060827.9A
- Authority: CN (China)
- Legal status: Granted (status assumed by Google Patents; not a legal conclusion)
Classifications
- G06F30/00: Computer-aided design [CAD] (G: Physics; G06: Computing, calculating or counting; G06F: Electric digital data processing)
Abstract
A method, apparatus, and terminal for reconstructing a user's hair model, capable of effectively reconstructing a hair model in complex environments. The method includes: obtaining a frontal face image of the user to be reconstructed; determining the hair region image of that user from the frontal face image; matching the hair region image against three-dimensional (3D) hair models pre-stored in a hairstyle database to obtain the 3D hair model closest to the hair region image; and determining the 3D hair model closest to the hair region image as the 3D hair model of the user to be reconstructed.
Description
The present invention relates to the field of three-dimensional (Three Dimensions, 3D) modeling technology, and in particular to a method, apparatus, and terminal for reconstructing a user's hair model.
With the continuous improvement of terminal processor performance, reconstructing a high-quality 3D virtual character from a person in a flat image has become practical, and is favored by major enterprises and users. In the process of reconstructing a 3D virtual character, accurately creating the user's hair model plays a very important role in the overall appearance of the character, and can significantly enhance the realism of the reconstructed virtual character.
In the process of creating a user's hair model, the hair region of the person must be accurately recognized. Currently, hair region recognition based on a hair color model is the mainstream method; Fig. 1 shows a schematic diagram of hair region recognition based on a color model. This method first extracts the face region by face recognition technology; the face region mainly contains the hair region and the skin region. Using the RGB (Red Green Blue, RGB) information of the hair region contained in the face region, hair of various colors is counted and a Gaussian mixture model (Gaussian Mixture Model, GMM) of hair color is constructed. Each pixel of the face region is then judged by color: if the color of the pixel falls within the constructed hair color model, the pixel is identified as belonging to the hair region. By judging all pixels in the face region, the complete hair region can be obtained. This method is limited by the available hair samples: recognition is poor if the samples are incomplete. It is also strongly affected by the environment: if the surroundings are complex and contain objects whose color is close to the hair, the recognition process produces a large number of misrecognitions; and if the person's hair is dyed, the method essentially fails to recognize it.
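For concreteness, the per-pixel color test of this background-art method can be sketched as follows. This is a minimal numpy sketch under stated assumptions: the mixture parameters and the likelihood threshold are illustrative, and a real system would first fit the GMM to hair-color samples (e.g. by expectation-maximization).

```python
import numpy as np

def gmm_likelihood(pixels, weights, means, variances):
    """Likelihood of RGB pixels under a diagonal-covariance Gaussian mixture.

    pixels: (N, 3) float array; weights: (K,); means: (K, 3); variances: (K, 3).
    """
    diff = pixels[:, None, :] - means[None, :, :]            # (N, K, 3)
    exponent = -0.5 * np.sum(diff ** 2 / variances, axis=2)  # (N, K)
    norm = np.prod(2 * np.pi * variances, axis=1) ** -0.5    # (K,)
    return np.sum(weights * norm * np.exp(exponent), axis=1)

def classify_hair(pixels, weights, means, variances, threshold=1e-6):
    """A pixel is labeled 'hair' if its likelihood under the hair-color GMM
    exceeds the threshold. Note the weakness described in the text: any
    background object whose color falls inside the model is also labeled hair."""
    return gmm_likelihood(pixels, weights, means, variances) > threshold
```

For example, with a single dark-hair component, a near-black pixel passes the test while a white pixel does not, which also illustrates why dyed hair outside the modeled colors goes unrecognized.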
Another hair region recognition method is based on machine learning: each face image is labeled into three parts (face, hair, and background), and a convolutional neural network (Convolutional Neural Network, CNN) model is trained on a large number of labeled face images to obtain a hair region recognition model; the trained model is then used to detect the hair region in test images, realizing hair region recognition. The main flow is shown in Fig. 2. The machine-learning-based method alleviates the problem that, in complex environments, the hair-color-model-based method produces a large number of misrecognitions when recognizing a person's hair. However, the machine-learning-based method requires manual annotation of a large number of training images, and model training takes a very long time, so its range of application is very narrow.
In certain complex environments, for example when the environment contains objects whose color is close to the hair and the space is narrow, hair region recognition based on a hair color model produces a large number of misrecognitions; hair region recognition based on machine learning is also unsuitable, since the required equipment is poorly portable and, in a very narrow space, cannot even be placed. Hair region recognition therefore fails in such complex environments, and consequently a person's hair model cannot be reconstructed there. In the prior art, hair model reconstruction cannot be carried out effectively in complex environments.
Summary of the invention
Embodiments of the present invention provide a method, apparatus, and terminal for reconstructing a user's hair model, so as to effectively reconstruct a hair model in complex environments.
In a first aspect, a method for reconstructing a user's hair model is provided. In this method, the hair region image of the user to be reconstructed is determined from an acquired frontal face image of that user; the hair region image is matched against 3D hair models pre-stored in a hairstyle database to obtain the 3D hair model closest to the hair region image; and the 3D hair model closest to the hair region image is determined as the 3D hair model of the user to be reconstructed.
In the above scheme, the hair region image of the user to be reconstructed is determined from the acquired frontal face image of that user, the hair region image is matched against the 3D hair models pre-stored in the hairstyle database to obtain the closest model, and the closest model is determined as the user's 3D hair model. Since no hair color model is built, the recognition process is not affected by hair color, and the error in hair region recognition is smaller. Since no training on face images is required, a large amount of manual interaction and computation time is saved, and detection time is shorter. The method for reconstructing a user's hair model in the embodiments of the present invention can therefore effectively reconstruct a hair model in complex environments.
The frontal face image includes at least a face region image, a hair region image, and a background region image.
In one possible design, the hair region image of the user to be reconstructed may be determined from the frontal face image as follows:
determining a first region image in the frontal face image, the first region image including the face region image, the hair region image, and part of the background region image;
identifying the partial background region image within the first region image, and determining the image remaining in the first region image after removing the identified partial background region image as a second region image;
identifying the face region image within the second region image, and determining the image remaining in the second region image after removing the identified face region image as the hair region image.
In one possible design, the first region image may be determined in the frontal face image as follows:
detecting face feature points in the frontal face image; determining the face region image from the face feature points, and determining in the frontal face image a first box region that can cover the face region image; expanding the first box region, using it as a reference, to obtain a second box region that can cover the face region image, the hair region image, and part of the background region image; and taking the image within the second box region as the first region image.
In the above design, the face feature points are detected by a face detection algorithm; the present invention does not limit which face detection algorithm is used. The face region image is determined from the detected face feature points, and the first box region that can cover the face region image is determined from the face region image. By expanding the first box region, the first region image containing the face region image, the hair region image, and part of the background region image can be obtained accurately.
In one possible design, the partial background region image may be identified within the first region image as follows:
determining foreground pixels and background pixels, where the foreground pixels include the pixels of the eye feature points, the nose feature points, and the mouth feature points among the face feature points, and the background pixels include the pixels that belong to the frontal face image but not to the first region image; determining the foreground and background pixels within the first region image according to the matching degree between the pixels of the first region image and the determined foreground and background pixels; and determining the image corresponding to the background pixels in the first region image as the partial background region image.
In one possible design, the face region image may be identified within the second region image as follows:
converting the pixels contained in the second region image into the hue-saturation-value (HSV) color space; extracting face skin pixels from the pixels of the second region image according to the values of face skin pixels on the three HSV components, and determining a face skin region from the face skin pixels; determining a face contour region from the face feature points; and determining the face region image in the second region image from the face skin region and the face contour region.
In the above design, the face region image is determined from both the face skin region and the face contour region; compared with determining it from the face skin region or the face contour region alone, the resulting face region image is more accurate.
In one possible design, the hair region image may be matched against the 3D hair models pre-stored in the hairstyle database as follows, to obtain the 3D hair model closest to the hair region image:
determining a feature descriptor of the hair region image, the feature descriptor characterizing the spatial features of the hair region image; matching the feature descriptor against the feature descriptors corresponding to the 3D hair models pre-stored in the hairstyle database, to obtain the feature descriptor in the hairstyle database closest to it; and determining the 3D hair model corresponding to that closest feature descriptor as the 3D hair model closest to the hair region image.
In the above design, after the hair region image has been accurately determined, matching it against the hairstyles in the hairstyle database yields the user's true hairstyle information.
In one possible design, the feature descriptor of the hair region image may be determined as follows:
determining the inner contour and the outer contour of the hair region image; determining the midpoint of the horizontal line connecting the two mouth-corner feature points among the face feature points, and using that midpoint as the ray origin for an omnidirectional angular scan of the hair region image; recording, for each angular direction during the omnidirectional angular scan of the hair region image, the distance from the origin to the inner contour, the distance from the origin to the outer contour, and the distance from the inner contour to the outer contour; and taking the recorded distances from the origin to the inner contour, from the origin to the outer contour, and from the inner contour to the outer contour as the feature descriptor of the hair region image.
The feature descriptor characterizes the spatial features of the hair region image.
In the above design, by taking the midpoint of the horizontal line connecting the two mouth-corner feature points among the face feature points as the ray origin for the omnidirectional angular scan of the hair region image, the resulting feature descriptor of the hair region image allows the 3D hair model closest to the hair region image to be determined more accurately.
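The ray-scan descriptor in this design can be sketched on a binary hair mask as follows. This is a hedged sketch: the one-pixel sampling step along each ray, the number of angular directions, and the all-zero rows for rays that never meet hair are illustrative assumptions not fixed by the text.

```python
import numpy as np

def hair_descriptor(mask, origin, n_angles=360, max_r=None):
    """Ray-scan feature descriptor for a binary hair mask.

    mask: 2D bool array, True where the pixel belongs to the hair region.
    origin: (row, col) ray origin, e.g. the midpoint between the mouth corners.
    Returns an (n_angles, 3) array: per angular direction, the distance from
    the origin to the inner contour, to the outer contour, and the
    inner-to-outer thickness (zeros where the ray never meets hair).
    """
    h, w = mask.shape
    if max_r is None:
        max_r = int(np.hypot(h, w))
    desc = np.zeros((n_angles, 3))
    for i in range(n_angles):
        theta = 2 * np.pi * i / n_angles
        rs = np.arange(1, max_r)                     # sample one pixel at a time
        rows = np.round(origin[0] + rs * np.sin(theta)).astype(int)
        cols = np.round(origin[1] + rs * np.cos(theta)).astype(int)
        inside = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
        hits = rs[inside][mask[rows[inside], cols[inside]]]
        if hits.size:
            desc[i] = (hits.min(), hits.max(), hits.max() - hits.min())
    return desc
```

On a ring-shaped mask centered on the origin, every direction records the same inner radius, outer radius, and thickness, which matches the intuition that the descriptor captures the spatial extent of the hair around the face.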
In a second aspect, an apparatus for reconstructing a user's hair model is provided. The apparatus has the function of implementing the method for reconstructing a user's hair model in the first aspect above; the function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above function; a module may be software and/or hardware.
In one possible design, the apparatus for reconstructing a user's hair model includes an acquiring unit, a determining unit, a matching unit, and a processing unit, whose functions correspond to the respective method steps and are not repeated here.
In a third aspect, the present invention further provides a terminal. The terminal includes an input device, a memory, a processor, a display screen, and a bus; the input device, the memory, and the display screen are connected to the processor through the bus. The input device is configured to obtain the frontal face image of the user to be reconstructed. The memory is configured to store the program executed by the processor, the frontal face image of the user to be reconstructed obtained by the input device, and 3D hair models. The processor is configured to execute the program stored in the memory, specifically to perform the operations of any design of the first aspect. The display screen is configured to display the frontal face image of the user to be reconstructed obtained by the input device and the 3D hair model of that user determined by the processor.
In a fourth aspect, the present invention further provides a computer-readable storage medium storing one or more programs. The one or more programs include instructions that, when executed by an electronic device including a plurality of application programs, cause the electronic device to perform the operations of any design of the first aspect.
Fig. 1 is a schematic diagram of an existing hair region recognition method based on a hair color model;
Fig. 2 is a schematic diagram of an existing hair region recognition method based on machine learning;
Fig. 3 is a flowchart of a method for reconstructing a user's hair model according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a frontal face image according to an embodiment of the present invention;
Fig. 5 is a flowchart of a method for determining the hair region image from the frontal face image according to an embodiment of the present invention;
Fig. 6 is a flowchart of a method for determining the first region image in the frontal face image according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of face feature points in a frontal face image according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of a frontal face image including the first box region according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of a method for determining the second box region according to an embodiment of the present invention;
Fig. 10 is a flowchart of a method for identifying the partial background region image within the first region image according to an embodiment of the present invention;
Fig. 11 is a schematic diagram of identifying and removing the partial background region image from the first region image to obtain the second region image according to an embodiment of the present invention;
Fig. 12 is a flowchart of a method for identifying the face region image in the second region image according to an embodiment of the present invention;
Fig. 13 is a process flowchart of identifying and removing the face region image in the second region image according to an embodiment of the present invention;
Fig. 14 is a flowchart of a method for matching the hair region image according to an embodiment of the present invention;
Fig. 15 is a flowchart of a method for determining the feature descriptor of the hair region image according to an embodiment of the present invention;
Fig. 16 is a process flowchart of matching the hair region image according to an embodiment of the present invention;
Fig. 17 is a schematic diagram of the reconstruction effect of a user's hair model according to an embodiment of the present invention;
Fig. 18 is a schematic diagram of an apparatus for reconstructing a user's hair model according to an embodiment of the present invention;
Fig. 19 is a schematic diagram of another apparatus for reconstructing a user's hair model according to an embodiment of the present invention.
The technical solutions in the embodiments of the present invention will be described in detail below with reference to the drawings in the embodiments of the present invention. Clearly, the described embodiments are only some, not all, of the embodiments of the present invention.
Embodiments of the present invention provide a method, apparatus, and terminal for reconstructing a user's hair model, so as to effectively reconstruct a hair model in complex environments. The method, apparatus, and terminal are based on the same inventive concept; since the principles by which they solve the problem are similar, the implementations of the terminal, apparatus, and method may refer to one another, and repeated descriptions are omitted.
To ensure that a hair model can be effectively reconstructed in complex environments, an embodiment of the present invention provides a method for reconstructing a user's hair model. In this method, the hair region image of the user to be reconstructed is determined from the acquired frontal face image of that user, the hair region image is matched against the 3D hair models pre-stored in the hairstyle database to obtain the 3D hair model closest to the hair region image, and the closest 3D hair model is determined as the 3D hair model of the user to be reconstructed.
The method for reconstructing a user's hair model provided by the embodiments of the present invention can be applied to terminals with relatively low storage and computing capability, and of course also to electronic devices with higher storage and computing capability; this application does not specifically limit this. Below, the embodiments of the present invention describe the method with the apparatus for reconstructing the user's hair model as the executing subject.
Fig. 3 shows a flowchart of a method for reconstructing a user's hair model according to an embodiment of the present invention, as follows:
S101: the apparatus for reconstructing the user's hair model obtains a frontal face image of the user to be reconstructed.
The apparatus for reconstructing the user's hair model in the embodiments of the present invention may be a terminal with an image acquisition function, which can acquire the frontal face image through an image acquisition device (such as a camera). Of course, the embodiments of the present invention do not limit the specific implementation of obtaining the frontal face image; for example, a picture including the frontal face image may also be stored in advance in the apparatus for reconstructing the user's hair model.
S102: determining the hair region image of the user to be reconstructed from the frontal face image.
S103: matching the hair region image against the 3D hair models pre-stored in the hairstyle database, to obtain the 3D hair model closest to the hair region image.
S104: determining the 3D hair model closest to the hair region image as the 3D hair model of the user to be reconstructed.
The frontal face image obtained by the apparatus for reconstructing the user's hair model includes at least a face region image, a hair region image, and a background region image.
In the embodiments of the present invention, the face region image is the part of the frontal face image containing the face region, the hair region image is the part of the frontal face image containing the hair region, and the background region image is the part of the frontal face image containing the background region. For example, the frontal face image in Fig. 4 is divided into three parts A, B, and C, where part A is the background region image, part B is the face region image, and part C is the hair region image.
In the embodiments of the present invention, the face region image, the hair region image, and the background region image in the frontal face image are identified, and the hair region image in the frontal face image of the user to be reconstructed is finally determined. Fig. 5 shows a flowchart of a method provided by the present invention for determining the hair region image from the frontal face image:
S201: determining a first region image in the frontal face image.
S202: identifying the partial background region image within the first region image, and determining the image remaining in the first region image after removing the identified partial background region image as a second region image.
S203: identifying the face region image within the second region image, and determining the image remaining in the second region image after removing the identified face region image as the hair region image.
The first region image includes the face region image, the hair region image, and part of the background region image.
In the embodiments of the present invention, the face region image, the hair region image, and part of the background region image in the frontal face image may be obtained by, for example, face recognition technology, to determine the first region image. Fig. 6 shows a flowchart of a method provided by the present invention for determining the first region image in the frontal face image:
S301: detecting face feature points in the frontal face image.
In the embodiments of the present invention, the face feature points in the frontal face image may be detected by a face detection algorithm, but the specific face detection algorithm used is not limited.
The face feature points involved in the embodiments of the present invention include eye feature points, nose feature points, mouth feature points, eyebrow feature points, ear feature points, and face contour feature points. Fig. 7 is a schematic diagram of face feature points in a frontal face image provided by the present invention; each number marked in the figure corresponds to one face feature point. A total of 40 face feature points, numbered 0 to 39, are marked in the frontal face image: points 0 to 9 are eye feature points, points 10 to 18 are nose feature points, points 19 to 24 are mouth feature points, points 25 to 32 are eyebrow feature points, and points 33 to 39 are face contour feature points, among which points 36 to 39 are ear feature points.
S302: determining the face region image from the face feature points, and determining in the frontal face image a first box region that can cover the face region image.
In the embodiments of the present invention, in order to separate the person image from the background image in the frontal face image, the first box region that can cover the face region image is first determined from the face feature points. Fig. 8 shows a frontal face image including the first box region; the white box in the figure is the first box region determined from the face feature points.
S303: expanding the first box region, using it as a reference, to obtain a second box region that can cover the face region image, the hair region image, and part of the background region image.
In the embodiments of the present invention, the first box region determined from the face feature points cannot be guaranteed to completely contain the face region image and the hair region image. To remove part of the background image from the frontal face image while retaining the face region image and the hair region image, the first box region is expanded on its own basis to obtain the second box region that can cover the face region image, the hair region image, and part of the background region image. Fig. 9 shows a schematic diagram of a method for determining the second box region: taking the width of the first box region as a reference length, the region is extended by 0.2 reference lengths to each of the left and right; taking the height of the first box region as a reference length, it is extended upward by 0.5 reference lengths and downward by 0.3 reference lengths. Expanding the first box region in this way ensures that the redefined second box region contains the complete face region image and the complete hair region image.
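The box expansion of Fig. 9 can be sketched directly from the stated ratios. This is a minimal sketch: the (x, y, w, h) box convention with y growing downward, and the omission of clamping to the image border, are assumptions for illustration.

```python
def expand_face_box(x, y, w, h):
    """Expand the first box region into the second box region using the
    ratios from Fig. 9: 0.2 * width to each of the left and right,
    0.5 * height upward, 0.3 * height downward.

    (x, y) is the top-left corner of the first box region; clamping the
    result to the image border is left out for brevity.
    """
    return (x - 0.2 * w,   # left edge moves left by 0.2 * width
            y - 0.5 * h,   # top edge moves up by 0.5 * height
            w + 0.4 * w,   # width grows by 0.2 on each side
            h + 0.8 * h)   # height grows by 0.5 up plus 0.3 down
```

For a 100 x 100 face box at (100, 100), this yields a 140 x 180 second box at (80, 50), i.e. the extra headroom above the face is where the hair region is expected to fall.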
S304: determining the image within the second box region as the first region image.
In the embodiments of the present invention, the partial background region image may be identified within the first region image by, for example, image recognition technology. Fig. 10 shows a flowchart of a method provided by the present invention for identifying the partial background region image within the first region image:
S401: determining foreground pixels and background pixels.
In the embodiments of the present invention, the foreground pixels include the pixels of the eye feature points, the nose feature points, and the mouth feature points among the face feature points, and the background pixels include the pixels that belong to the frontal face image but not to the first region image. In the embodiments of the present invention, the pixels of eye feature points 0 to 9, nose feature points 10 to 18, and mouth feature points 19 to 24 in Fig. 7 may be determined as foreground pixels, and the pixels outside the second box region shown in Fig. 9 may be determined as background pixels.
S402: determining the foreground and background pixels within the first region image according to the matching degree between the pixels of the first region image and the determined foreground and background pixels.
The foreground pixels in the first region image correspond to the face region image and the hair region image, and the background pixels of the first region image correspond to the partial background region image.
In the present invention, the pixels of the first region image may be processed by a Gaussian mixture model algorithm and a max-flow/min-cut algorithm to classify them into foreground and background pixels; the present invention does not specifically limit which algorithm is used to determine the foreground and background pixels in the first region image.
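The matching-degree classification of S402 can be sketched as follows. This is a deliberately simplified sketch of the idea, not the patent's algorithm: each class is modeled by a single diagonal-covariance Gaussian fitted to its seed pixels, whereas a full pipeline such as GrabCut would iterate mixture fitting together with the max-flow/min-cut smoothing step mentioned above.

```python
import numpy as np

def classify_fg_bg(pixels, fg_seeds, bg_seeds):
    """Label pixels as foreground (True) or background (False) by comparing
    their likelihood under per-class color models fitted to the seed pixels.

    pixels, fg_seeds, bg_seeds: (N, 3) float RGB arrays. The foreground seeds
    would come from the eye/nose/mouth feature-point pixels, the background
    seeds from pixels outside the second box region.
    """
    def log_gauss(x, seeds):
        mean = seeds.mean(axis=0)
        var = seeds.var(axis=0) + 1e-6   # avoid division by zero
        return -0.5 * np.sum((x - mean) ** 2 / var + np.log(var), axis=1)

    return log_gauss(pixels, fg_seeds) > log_gauss(pixels, bg_seeds)
```

A pixel whose color resembles the facial seed pixels is assigned to the foreground (face plus hair), and one resembling the outside-the-box seeds to the background, which is the "matching degree" notion of S402 in its simplest form.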
S403: determining the image corresponding to the background pixels in the first region image as the partial background region image.
Fig. 11 shows a schematic diagram, according to an embodiment of the present invention, of identifying and removing the partial background region image from the first region image to obtain the second region image. As shown in Fig. 11, the background region image contained in the first region image is removed, yielding a second region image containing only the face region image and the hair region image.
In the embodiments of the present invention, after the second region image containing the face region image and the hair region image is obtained, the face region image can be further identified in the second region image.
Fig. 12 shows a flowchart of a method for identifying the face region image in the second region image according to an embodiment of the present invention:
S501: converting the pixels contained in the second region image into the hue-saturation-value (Hue Saturation Value, HSV) color space.
S502: extracting face skin pixels from the pixels contained in the second region image according to the values of face skin pixels on the three HSV components, and determining a face skin region from the face skin pixels.
S503: determining a face contour region from the face feature points.
In the embodiments of the present invention, face contour fitting is performed according to the face contour feature points 33 to 39 among the face feature points shown in Fig. 7, to obtain the face contour region in the second region image.
S504: determining the face region image in the second region image from the face skin region and the face contour region.
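Steps S501 and S502 can be sketched with a vectorized RGB-to-HSV conversion followed by component thresholds. The threshold values on the three HSV components below are illustrative assumptions, since the patent does not specify the skin-color ranges.

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Vectorized RGB (floats in 0..1, shape (N, 3)) to HSV; H in degrees."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    v = rgb.max(axis=1)
    c = v - rgb.min(axis=1)
    s = np.where(v > 0, c / np.where(v > 0, v, 1.0), 0.0)
    cs = np.where(c > 0, c, 1.0)              # safe divisor where c == 0
    h = np.zeros_like(v)
    hr = (c > 0) & (v == r)
    hg = (c > 0) & (v == g) & ~hr
    hb = (c > 0) & ~hr & ~hg
    h[hr] = ((60.0 * (g - b) / cs) % 360.0)[hr]
    h[hg] = (60.0 * (b - r) / cs + 120.0)[hg]
    h[hb] = (60.0 * (r - g) / cs + 240.0)[hb]
    return np.stack([h, s, v], axis=1)

def skin_mask(rgb, h_max=50.0, s_range=(0.1, 0.7), v_min=0.35):
    """Extract candidate face skin pixels by thresholding the three HSV
    components; the ranges here are illustrative, not the patent's values."""
    hsv = rgb_to_hsv(rgb)
    return ((hsv[:, 0] <= h_max) &
            (hsv[:, 1] >= s_range[0]) & (hsv[:, 1] <= s_range[1]) &
            (hsv[:, 2] >= v_min))
```

Combining this skin mask with the fitted face contour region, as S504 describes, discards skin-colored pixels outside the face outline and non-skin pixels inside it.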
Fig. 13 shows a process flowchart, according to the present invention, of identifying and removing the face region image in the second region image. The face skin region is determined from the face skin pixels, the face contour region is determined from the face feature points, and then the face region image is determined from the face skin region and the face contour region and removed, yielding the hair region image. Compared with determining the face region image from the face skin region or the face contour region alone, the resulting face region image is more accurate.
In the present invention, the hair region image of the user to be reconstructed is determined from the acquired frontal face image of that user, the hair region image is matched against the 3D hair models pre-stored in the hairstyle database to obtain the 3D hair model closest to the hair region image, and that closest model is determined as the 3D hair model of the user to be reconstructed. Because no modeling of hair color is required, the identification process is not affected by hair color, so the error in hair region identification is small. Moreover, no training on face images is needed, which saves a large amount of human-computer interaction and computation time, so detection is fast. The method for reconstructing a user's hair model in this embodiment of the present invention can therefore effectively reconstruct hair models in complex environments.
In this embodiment of the present invention, after the hair region image in the frontal face image is identified, the hair region image is matched against the 3D hair models pre-stored in the hairstyle database to obtain the 3D hair model closest to the hair region image, which can serve as the hairstyle model for layered hair modeling. Figure 14 is a flowchart of a method for matching the hair region image provided by the present invention. As shown in Figure 14:
S601: Determine the feature descriptor of the hair region image.
In this embodiment of the present invention, the feature descriptor characterizes the spatial features of the hair region image.
S602: Match the feature descriptor against the feature descriptors corresponding to the 3D hair models pre-stored in the hairstyle database, and obtain the descriptor in the hairstyle database closest to that feature descriptor.
S603: Determine the 3D hair model corresponding to the closest descriptor in the hairstyle database as the 3D hair model closest to the hair region image.
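Steps S602 and S603 amount to a nearest-neighbour search over stored descriptors. The sketch below uses Euclidean distance as the similarity measure — an assumption, since the patent only says "closest" — and the model names and descriptor values are hypothetical.

```python
import math

def closest_model(query, database):
    # Return the name of the 3D hair model whose stored descriptor has the
    # smallest Euclidean distance to the query descriptor (S602-S603).
    best_name, best_dist = None, float("inf")
    for name, desc in database.items():
        d = math.dist(query, desc)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name

# Hypothetical hairstyle database: model name -> feature descriptor.
db = {
    "short_straight": [10.0, 12.0, 2.0],
    "long_curly":     [25.0, 40.0, 15.0],
}
print(closest_model([11.0, 13.0, 3.0], db))  # -> short_straight
```

In practice each descriptor would hold the three recorded distances for every scan angle, but the matching rule is the same regardless of descriptor length.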
In this embodiment of the present invention, before the hair region image is matched against the 3D hair models pre-stored in the hairstyle database, the feature descriptor of the hair region image may first be determined. A specific implementation for determining this descriptor is given below.
Figure 15 is a flowchart of a method for determining the feature descriptor of the hair region image provided by the present invention. As shown in Figure 15:
S701: Determine the inner contour and the outer contour of the hair region image.
In this embodiment of the present invention, the inner and outer contours of the hair region image can be determined by binarizing the hair region image; the specific method used to determine the contours is not limited here.
S702: Determine the midpoint of the horizontal line connecting the two mouth-corner feature points among the face feature points, and use that midpoint as the ray origin for an omnidirectional angular scan of the hair region image.
S703: During the omnidirectional angular scan, record for each angular direction the distance from the origin to the inner contour, the distance from the origin to the outer contour, and the distance from the inner contour to the outer contour of the hair region image.
S704: Use the recorded origin-to-inner-contour, origin-to-outer-contour, and inner-to-outer-contour distances as the feature descriptor of the hair region image.
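Steps S701 to S704 can be sketched as a ray walk over a binary hair mask. This is a simplified assumption of the scan: it steps one pixel at a time along each angle, takes the first hair pixel hit as the inner contour and the last as the outer contour, and concatenates the three distances per angle into the descriptor.

```python
import math

def ray_descriptor(mask, origin, angles):
    # mask: binary hair mask as a list of rows (1 = hair pixel).
    # origin: (x, y) scan origin, e.g. the mouth-corner midpoint (S702).
    # For each angle, record origin->inner, origin->outer, and
    # inner->outer distances (S703-S704).
    h, w = len(mask), len(mask[0])
    ox, oy = origin
    desc = []
    for a in angles:
        dx, dy = math.cos(a), math.sin(a)
        inner = outer = None
        for r in range(max(h, w)):
            x, y = int(round(ox + dx * r)), int(round(oy + dy * r))
            if not (0 <= x < w and 0 <= y < h):
                break
            if mask[y][x]:
                if inner is None:
                    inner = r   # first hair pixel: inner contour
                outer = r       # last hair pixel seen so far: outer contour
        if inner is not None:
            desc.extend([inner, outer, outer - inner])
    return desc

# Toy 1-row mask with hair at columns 3..4, scanned rightwards from (0, 0).
mask = [[0, 0, 0, 1, 1, 0]]
print(ray_descriptor(mask, (0, 0), [0.0]))  # -> [3, 4, 1]
```

A real implementation would scan a full set of angles (e.g. one ray per degree) and use sub-pixel contour intersections rather than integer stepping.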
Figure 16 is a flowchart of matching the hair region image provided by the present invention. As shown in Figure 16, with the midpoint of the horizontal line connecting the two mouth-corner feature points as the scan origin, an omnidirectional angular scan of the hair region image yields its feature descriptor. A descriptor obtained with this midpoint as the scan origin is more accurate and characterizes the spatial features of the hair region image more precisely, so the 3D hair model matched by this design is closer to the real appearance of the user's hair region.
Figure 17 is a schematic diagram of the reconstruction effect of a user's hair model provided by the present invention, as shown in Figure 17. Figure 17 shows one result of reconstructing a hair model with the method of Figure 3: the frontal face image is processed in layers, the hair region image is segmented out and matched against the 3D hair models in the database, and the 3D hair model closest to the hair region image is determined as the 3D hairstyle model for layered hair modeling.
It can be understood that, to implement the above functions, the device for reconstructing a user's hair model includes corresponding hardware structures and/or software modules for performing each function. In combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the embodiments of the present invention can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the technical solutions of the embodiments of the present invention.
In the embodiments of the present invention, the device for reconstructing a user's hair model may be divided into functional units according to the above method examples; for example, each function may be assigned its own functional unit, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in hardware or as a software functional unit. Note that the division into units in the embodiments of the present invention is schematic and is merely a logical functional division; other division manners are possible in actual implementation.
Where integrated units are used, Figure 18 shows a structural schematic diagram of a device 1000 for reconstructing a user's hair model. As shown in Figure 18, the device 1000 includes an acquiring unit 1001, a determination unit 1002, a matching unit 1003, and a processing unit 1004, where:
the acquiring unit 1001 is configured to acquire a frontal face image of the user to be reconstructed;
the determination unit 1002 is configured to determine the hair region image of the user to be reconstructed according to the frontal face image acquired by the acquiring unit 1001;
the matching unit 1003 is configured to match the hair region image determined by the determination unit 1002 against the 3D hair models pre-stored in the hairstyle database, and obtain the 3D hair model closest to the hair region image; and
the processing unit 1004 is configured to determine the 3D hair model that the matching unit 1003 found closest to the hair region image as the 3D hair model of the user to be reconstructed.
The frontal face image acquired by the acquiring unit 1001 includes at least a face region image, a hair region image, and a background region image.
In one possible implementation, the determination unit 1002 may determine the hair region image of the user to be reconstructed from the frontal face image acquired by the acquiring unit 1001 as follows:
determine a first-area image in the frontal face image, the first-area image including the face region image, the hair region image, and a partial background region image; identify the partial background region image in the first-area image, and determine the image in the first-area image other than the identified partial background region image as a second-area image; and identify the face region image in the second-area image, and determine the image in the second-area image other than the identified face region image as the hair region image.
In one possible implementation, the determination unit 1002 determines the first-area image in the frontal face image as follows:
detect the face feature points in the frontal face image; determine the face region image according to the face feature points, and determine in the frontal face image a first bounding box region that can cover the face region image; expand the first bounding box region to obtain a second bounding box region that can cover the face region image, the hair region image, and the partial background region image; and use the image within the second bounding box region as the first-area image.
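The bounding box expansion above can be sketched as simple arithmetic on the face box, growing it about its centre and clamping it to the image. The 1.6 expansion factor is an assumption for illustration; the patent does not give a specific scale.

```python
def expand_box(box, img_w, img_h, scale=1.6):
    # box: face bounding box as (x, y, width, height).
    # Grow the box about its centre so it also covers the hair region and
    # some background, clamping the result to the image bounds. The scale
    # factor is an assumption; the patent leaves it unspecified.
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2
    nw, nh = w * scale, h * scale
    nx = max(0.0, cx - nw / 2)
    ny = max(0.0, cy - nh / 2)
    nw = min(img_w - nx, nw)
    nh = min(img_h - ny, nh)
    return (nx, ny, nw, nh)

# A 100x100 face box in a 640x480 image grows to a 160x160 second box.
print(expand_box((100, 100, 100, 100), 640, 480))
```

The image cropped to this second box is then used as the first-area image.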
In one possible implementation, the determination unit 1002 may identify the partial background region image in the first-area image as follows:
determine foreground pixels and background pixels, the foreground pixels including the pixels of the eye feature points, nose feature points, and mouth feature points among the face feature points, and the background pixels including image pixels that belong to the frontal face image but not to the first-area image; obtain the foreground and background pixels in the first-area image according to the degree of matching between the pixels of the first-area image and the foreground and background pixels; and determine the image corresponding to the background pixels in the first-area image as the partial background region image.
The foreground pixels in the first-area image correspond to the face region image and the hair region image, and the background pixels of the first-area image correspond to the partial background region image.
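The "degree of matching" step can be sketched as a nearest-seed rule: each pixel is labelled foreground or background depending on which seed set it is closer to in colour. Real implementations typically use a richer model (e.g. a GrabCut-style Gaussian mixture); this rule, the seed colours, and the squared-distance measure are simplifying assumptions.

```python
def classify_pixels(pixels, fg_seeds, bg_seeds):
    # Label each RGB pixel "fg" or "bg" by its smallest squared colour
    # distance to the foreground seeds (sampled at eye/nose/mouth feature
    # points) versus the background seeds (sampled outside the first area).
    def min_dist(p, seeds):
        return min(sum((a - b) ** 2 for a, b in zip(p, s)) for s in seeds)
    return ["fg" if min_dist(p, fg_seeds) <= min_dist(p, bg_seeds) else "bg"
            for p in pixels]

fg = [(200, 170, 150)]   # hypothetical skin-tone seed from facial feature points
bg = [(30, 90, 200)]     # hypothetical background seed from outside the box
print(classify_pixels([(190, 160, 140), (40, 100, 190)], fg, bg))
```

The pixels labelled "bg" inside the first-area image form the partial background region image to be removed.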
In one possible implementation, the determination unit 1002 may identify the face region image in the second-area image as follows:
transform the pixels contained in the second-area image into the HSV color space; according to the values of face skin-color pixels on the three HSV components, extract the face skin-color pixels from the pixels contained in the second-area image, and determine the face skin-color region from those pixels; determine the facial contour region according to the face feature points; and determine the face region image in the second-area image according to the face skin-color region and the facial contour region.
In one possible implementation, the matching unit 1003 matches the hair region image against the 3D hair models pre-stored in the hairstyle database and obtains the 3D hair model closest to the hair region image as follows:
determine the feature descriptor of the hair region image; match the feature descriptor against the feature descriptors corresponding to the 3D hair models pre-stored in the hairstyle database, and obtain the descriptor in the hairstyle database closest to that feature descriptor; and determine the 3D hair model corresponding to the closest descriptor as the 3D hair model closest to the hair region image.
The feature descriptor characterizes the spatial features of the hair region image.
In one possible implementation, the matching unit 1003 determines the feature descriptor of the hair region image as follows:
determine the inner contour and the outer contour of the hair region image; determine the midpoint of the horizontal line connecting the two mouth-corner feature points among the face feature points, and use that midpoint as the ray origin for an omnidirectional angular scan of the hair region image; during the scan, record for each angular direction the distance from the origin to the inner contour, the distance from the origin to the outer contour, and the distance from the inner contour to the outer contour; and use the recorded distances as the feature descriptor of the hair region image.
In the embodiments of the present invention, each functional unit may be integrated into one processor, may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in hardware or as a software functional module.
When the integrated units are implemented in hardware, the determination unit 1002, the matching unit 1003, and the processing unit 1004 may correspond to a processor 2001 in the physical hardware of the device for reconstructing a user's hair model, and the acquiring unit 1001 may correspond to an input device 2003, as shown in Figure 19. Figure 19 shows another structural schematic diagram of the device for reconstructing a user's hair model; the device shown in Figure 19 may be a terminal. The following description takes the device being a terminal as an example. As shown in Figure 19, the terminal 2000 includes a processor 2001 and may further include a memory 2002 for storing the program executed by the processor 2001, the frontal face image of the user to be reconstructed acquired by the input device 2003, and the 3D hair models. The processor 2001 is configured to call the program stored in the memory 2002 and the stored frontal face image of the user to be reconstructed, determine the hair region image in the frontal face image, match the hair region image against the 3D hair models pre-stored in the memory, obtain the 3D hair model closest to the hair region image, and determine that closest model as the 3D hair model of the user to be reconstructed.
The memory 2002 may be a volatile memory, such as a random-access memory (RAM); or a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or any other medium that can carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto. The memory 2002 may also be a combination of the above memories.
The input device 2003 included in the terminal 2000 is configured to acquire the frontal face image of the user to be reconstructed and the 3D hair models pre-stored when configuring the hairstyle database; once configuration is complete, these are stored in the memory 2002.
The terminal 2000 may further include a display screen 2004 for displaying the frontal face image of the user to be reconstructed acquired by the input device 2003 and the 3D hair model of the user to be reconstructed determined by the processor 2001.
The processor 2001, the memory 2002, the input device 2003, and the display screen 2004 may be connected by a bus 2005. The connections between other components are merely illustrative and are not limiting. The bus 2005 may be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is used in Figure 19, but this does not mean there is only one bus or one type of bus.
The processor 2001 is configured to execute the program code stored in the memory 2002, and specifically performs the following operations:
acquire a frontal face image of the user to be reconstructed; determine the hair region image of the user to be reconstructed according to the frontal face image; match the hair region image against the three-dimensional (3D) hair models pre-stored in the hairstyle database, and obtain the 3D hair model closest to the hair region image; and determine the 3D hair model closest to the hair region image as the 3D hair model of the user to be reconstructed.
The frontal face image includes at least a face region image, a hair region image, and a background region image.
The processor 2001 may determine the hair region image of the user to be reconstructed from the frontal face image as follows:
determine a first-area image in the frontal face image, the first-area image including the face region image, the hair region image, and a partial background region image; identify the partial background region image in the first-area image, and determine the image in the first-area image other than the identified partial background region image as a second-area image; and identify the face region image in the second-area image, and determine the image in the second-area image other than the identified face region image as the hair region image.
The processor 2001 may determine the first-area image in the frontal face image as follows:
detect the face feature points in the frontal face image; determine the face region image according to the face feature points, and determine in the frontal face image a first bounding box region that can cover the face region image; expand the first bounding box region to obtain a second bounding box region that can cover the face region image, the hair region image, and the partial background region image; and use the image within the second bounding box region as the first-area image.
The processor 2001 may identify the partial background region image in the first-area image as follows:
determine foreground pixels and background pixels, the foreground pixels including the pixels of the eye feature points, nose feature points, and mouth feature points among the face feature points, and the background pixels including image pixels that belong to the frontal face image but not to the first-area image; obtain the foreground and background pixels in the first-area image according to the degree of matching between the pixels of the first-area image and the foreground and background pixels; and determine the image corresponding to the background pixels in the first-area image as the partial background region image.
The foreground pixels in the first-area image correspond to the face region image and the hair region image, and the background pixels of the first-area image correspond to the partial background region image.
The processor 2001 may identify the face region image in the second-area image as follows:
transform the pixels contained in the second-area image into the HSV color space; according to the values of face skin-color pixels on the three HSV components, extract the face skin-color pixels from the pixels contained in the second-area image, and determine the face skin-color region from those pixels; determine the facial contour region according to the face feature points; and determine the face region image in the second-area image according to the face skin-color region and the facial contour region.
The processor 2001 may match the hair region image against the 3D hair models pre-stored in the hairstyle database and obtain the 3D hair model closest to the hair region image as follows:
determine the feature descriptor of the hair region image, the feature descriptor characterizing the spatial features of the hair region image; match the feature descriptor against the feature descriptors corresponding to the 3D hair models pre-stored in the hairstyle database, and obtain the descriptor in the hairstyle database closest to that feature descriptor; and determine the 3D hair model corresponding to the closest descriptor as the 3D hair model closest to the hair region image.
The processor 2001 may determine the feature descriptor of the hair region image as follows:
determine the inner contour and the outer contour of the hair region image; determine the midpoint of the horizontal line connecting the two mouth-corner feature points among the face feature points, and use that midpoint as the ray origin for an omnidirectional angular scan of the hair region image; during the scan, record for each angular direction the distance from the origin to the inner contour, the distance from the origin to the outer contour, and the distance from the inner contour to the outer contour; and use the recorded distances as the feature descriptor of the hair region image.
In this embodiment of the present invention, the hair region image of the user to be reconstructed is determined from the acquired frontal face image of that user, the hair region image is matched against the 3D hair models pre-stored in the hairstyle database to obtain the 3D hair model closest to the hair region image, and that closest model is determined as the 3D hair model of the user to be reconstructed. Because no modeling of hair color is required, the identification process is not affected by hair color, so the error in hair region identification is small. Moreover, no training on face images is needed, which saves a large amount of human-computer interaction and computation time, so detection is fast. The method for reconstructing a user's hair model in this embodiment of the present invention can therefore effectively reconstruct hair models in complex environments.
A person skilled in the art should understand that the embodiments of this application may be provided as a method, a system, or a computer program product. Therefore, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical memory) containing computer-usable program code.
This application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to this application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device generate a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to operate in a specific manner, so that the instructions stored in the computer-readable memory produce a manufactured article including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of this application have been described, a person skilled in the art, once aware of the basic inventive concept, can make additional changes and modifications to these embodiments. The appended claims are therefore intended to be interpreted as covering the preferred embodiments and all changes and modifications falling within the scope of this application.
Obviously, a person skilled in the art can make various modifications and variations to this application without departing from its spirit and scope. This application is intended to cover such modifications and variations provided they fall within the scope of the claims of this application and their technical equivalents.
Claims (21)
- A method for reconstructing a user's hair model, comprising: acquiring a frontal face image of a user to be reconstructed; determining a hair region image of the user to be reconstructed according to the frontal face image; matching the hair region image against three-dimensional (3D) hair models pre-stored in a hairstyle database, and obtaining a 3D hair model closest to the hair region image; and determining the 3D hair model closest to the hair region image as the 3D hair model of the user to be reconstructed.
- The method according to claim 1, wherein the frontal face image includes at least a face region image, a hair region image, and a background region image; and determining the hair region image of the user to be reconstructed according to the frontal face image comprises: determining a first-area image in the frontal face image, the first-area image including the face region image, the hair region image, and a partial background region image; identifying the partial background region image in the first-area image, and determining the image in the first-area image other than the identified partial background region image as a second-area image; and identifying the face region image in the second-area image, and determining the image in the second-area image other than the identified face region image as the hair region image.
- The method according to claim 2, wherein determining the first-area image in the frontal face image comprises: detecting face feature points in the frontal face image; determining the face region image according to the face feature points, and determining in the frontal face image a first bounding box region that can cover the face region image; expanding the first bounding box region to obtain a second bounding box region that can cover the face region image, the hair region image, and the partial background region image; and determining the image within the second bounding box region as the first-area image.
- The method according to claim 3, wherein identifying the partial background region image in the first-area image comprises: determining foreground pixels and background pixels, the foreground pixels including the pixels of the eye feature points, nose feature points, and mouth feature points among the face feature points, and the background pixels including image pixels that belong to the frontal face image but not to the first-area image; determining the foreground and background pixels in the first-area image according to the degree of matching between the pixels of the first-area image and the foreground and background pixels; and determining the image corresponding to the background pixels in the first-area image as the partial background region image.
- The method according to claim 2, wherein identifying the face region image in the second-area image comprises: transforming the pixels contained in the second-area image into the hue-saturation-value (HSV) color space; according to the values of face skin-color pixels on the three HSV components, extracting the face skin-color pixels from the pixels contained in the second-area image, and determining a face skin-color region from those pixels; determining a facial contour region according to the face feature points; and determining the face region image in the second-area image according to the face skin-color region and the facial contour region.
- The method according to any one of claims 1 to 5, wherein matching the hair region image against the 3D hair models pre-stored in the hairstyle database and obtaining the 3D hair model closest to the hair region image comprises: determining a feature descriptor of the hair region image, the feature descriptor characterizing spatial features of the hair region image; matching the feature descriptor against feature descriptors corresponding to the 3D hair models pre-stored in the hairstyle database, and obtaining the descriptor in the hairstyle database closest to the feature descriptor; and determining the 3D hair model corresponding to the closest descriptor as the 3D hair model closest to the hair region image.
- The method according to claim 6, wherein determining the feature descriptor of the hair region image comprises: determining an inner contour and an outer contour of the hair region image; determining the midpoint of the horizontal line connecting the two mouth-corner feature points among the face feature points, and using the midpoint as the ray origin for an omnidirectional angular scan of the hair region image; during the scan, recording for each angular direction the distance from the origin to the inner contour, the distance from the origin to the outer contour, and the distance from the inner contour to the outer contour; and using the recorded distances as the feature descriptor of the hair region image.
- A device for reconstructing a user's hair model, comprising: an acquiring unit configured to acquire a frontal face image of a user to be reconstructed; a determination unit configured to determine a hair region image of the user to be reconstructed according to the frontal face image acquired by the acquiring unit; a matching unit configured to match the hair region image determined by the determination unit against 3D hair models pre-stored in a hairstyle database and obtain a 3D hair model closest to the hair region image; and a processing unit configured to determine the 3D hair model that the matching unit found closest to the hair region image as the 3D hair model of the user to be reconstructed.
- The device according to claim 8, wherein the frontal face image obtained by the acquiring unit comprises at least a face region image, a hair region image and a background region image; and the determination unit determines the hair region image of the user to be reconstructed from the frontal face image obtained by the acquiring unit in the following way: determining a first region image in the frontal face image, the first region image comprising the face region image, the hair region image and part of the background region image; identifying the partial background region image within the first region image, and determining the image remaining in the first region image after removing the identified partial background region image as a second region image; and identifying the face region image within the second region image, and determining the image remaining in the second region image after removing the identified face region image as the hair region image.
- The device according to claim 9, wherein the determination unit determines the first region image in the frontal face image in the following way: detecting facial feature points in the frontal face image; determining the face region image according to the facial feature points, and determining in the frontal face image a first bounding-box region that covers the face region image; expanding the first bounding-box region to obtain a second bounding-box region that covers the face region image, the hair region image and part of the background region image; and determining the image within the second bounding-box region as the first region image.
- The device according to claim 10, wherein the determination unit identifies the partial background region image within the first region image in the following way: determining foreground pixels and background pixels, where the foreground pixels comprise the pixels of the eye feature points, the nose feature points and the mouth feature points among the facial feature points, and the background pixels comprise pixels that belong to the frontal face image but not to the first region image; determining the foreground and background pixels within the first region image according to the degree of match between the pixels of the first region image and the foreground and background pixels; and determining the image corresponding to the background pixels in the first region image as the partial background region image.
- The device according to claim 9, wherein the determination unit identifies the face region image within the second region image in the following way: transforming the pixels of the second region image into hue-saturation-value (HSV) space; extracting skin-tone pixels from the pixels of the second region image according to the values of facial skin pixels on the three HSV components, and determining a facial skin region from the skin-tone pixels; determining a facial contour region according to the facial feature points; and determining the face region image in the second region image according to the facial skin region and the facial contour region.
- The device according to any one of claims 8 to 12, wherein the matching unit matches the hair region image against the 3D hair models pre-stored in the hairstyle database, and obtains the 3D hair model closest to the hair region image, in the following way: determining a feature descriptor of the hair region image, the feature descriptor characterizing spatial features of the hair region image; matching the feature descriptor against the feature descriptors corresponding to the 3D hair models pre-stored in the hairstyle database to obtain the descriptor in the hairstyle database that is closest to the feature descriptor; and determining the 3D hair model corresponding to that closest descriptor as the 3D hair model closest to the hair region image.
- The device according to claim 13, wherein the matching unit determines the feature descriptor of the hair region image in the following way: determining an inner contour and an outer contour of the hair region image; determining the midpoint of the horizontal line connecting the two mouth-corner feature points among the facial feature points, and using that midpoint as the ray origin for an omnidirectional angular scan of the hair region image; during the omnidirectional scan, recording for each angular direction the distance from the origin to the inner contour, the distance from the origin to the outer contour, and the distance from the inner contour to the outer contour; and using the recorded distances from the origin to the inner contour, from the origin to the outer contour, and from the inner contour to the outer contour as the feature descriptor of the hair region image.
- A terminal, comprising an input device, a memory, a processor, a display screen and a bus, the input device, the memory and the display screen being connected to the processor through the bus, wherein: the input device is configured to obtain a frontal face image of the user to be reconstructed; the memory is configured to store the program executed by the processor, the frontal face image of the user to be reconstructed obtained by the input device, and the 3D hair models; the processor is configured to call the program stored in the memory and the stored frontal face image of the user to be reconstructed, determine the hair region image in the frontal face image, match the hair region image against the 3D hair models pre-stored in the memory to obtain the 3D hair model closest to the hair region image, and determine the 3D hair model closest to the hair region image as the 3D hair model of the user to be reconstructed; and the display screen is configured to display the frontal face image of the user to be reconstructed obtained by the input device and the 3D hair model of the user to be reconstructed determined by the processor.
- The terminal according to claim 15, wherein the frontal face image obtained by the input device comprises at least a face region image, a hair region image and a background region image; and the processor determines the hair region image of the user to be reconstructed from the frontal face image obtained by the input device in the following way: determining a first region image in the frontal face image, the first region image comprising the face region image, the hair region image and part of the background region image; identifying the partial background region image within the first region image, and determining the image remaining in the first region image after removing the identified partial background region image as a second region image; and identifying the face region image within the second region image, and determining the image remaining in the second region image after removing the identified face region image as the hair region image.
- The terminal according to claim 16, wherein the processor determines the first region image in the frontal face image in the following way: detecting facial feature points in the frontal face image; determining the face region image according to the facial feature points, and determining in the frontal face image a first bounding-box region that covers the face region image; expanding the first bounding-box region to obtain a second bounding-box region that covers the face region image, the hair region image and part of the background region image; and determining the image within the second bounding-box region as the first region image.
- The terminal according to claim 17, wherein the processor identifies the partial background region image within the first region image in the following way: determining foreground pixels and background pixels, where the foreground pixels comprise the pixels of the eye feature points, the nose feature points and the mouth feature points among the facial feature points, and the background pixels comprise pixels that belong to the frontal face image but not to the first region image; determining the foreground and background pixels within the first region image according to the degree of match between the pixels of the first region image and the foreground and background pixels; and determining the image corresponding to the background pixels in the first region image as the partial background region image.
- The terminal according to claim 16, wherein the processor identifies the face region image within the second region image in the following way: transforming the pixels of the second region image into hue-saturation-value (HSV) space; extracting skin-tone pixels from the pixels of the second region image according to the values of facial skin pixels on the three HSV components, and determining a facial skin region from the skin-tone pixels; determining a facial contour region according to the facial feature points; and determining the face region image in the second region image according to the facial skin region and the facial contour region.
- The terminal according to any one of claims 15 to 19, wherein the processor matches the hair region image against the 3D hair models pre-stored in the hairstyle database, and obtains the 3D hair model closest to the hair region image, in the following way: determining a feature descriptor of the hair region image, the feature descriptor characterizing spatial features of the hair region image; matching the feature descriptor against the feature descriptors corresponding to the 3D hair models pre-stored in the hairstyle database to obtain the descriptor in the hairstyle database that is closest to the feature descriptor; and determining the 3D hair model corresponding to that closest descriptor as the 3D hair model closest to the hair region image.
- The terminal according to claim 20, wherein the processor determines the feature descriptor of the hair region image in the following way: determining an inner contour and an outer contour of the hair region image; determining the midpoint of the horizontal line connecting the two mouth-corner feature points among the facial feature points, and using that midpoint as the ray origin for an omnidirectional angular scan of the hair region image; during the omnidirectional scan, recording for each angular direction the distance from the origin to the inner contour, the distance from the origin to the outer contour, and the distance from the inner contour to the outer contour; and using the recorded distances from the origin to the inner contour, from the origin to the outer contour, and from the inner contour to the outer contour as the feature descriptor of the hair region image.
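The descriptor-matching step in claims 6, 13 and 20 is, in essence, a nearest-neighbor search over pre-stored descriptors. A minimal Python sketch of that idea follows; the dictionary-based database layout and the Euclidean distance metric are illustrative assumptions, since the claims do not specify either:

```python
import numpy as np

def closest_model(query, database):
    """Return the id of the pre-stored descriptor closest to `query`.

    database: dict mapping a hair-model id to its descriptor vector.
    The model whose descriptor has the smallest Euclidean distance to
    the query descriptor is treated as the 'closest' 3D hair model.
    """
    best_id, best_dist = None, float("inf")
    for model_id, descriptor in database.items():
        dist = np.linalg.norm(query - descriptor)
        if dist < best_dist:
            best_id, best_dist = model_id, dist
    return best_id
```

In practice a hairstyle database of any size would use a vectorized distance computation or a spatial index rather than a Python loop, but the selection criterion is the same.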
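The ray-scan descriptor of claims 7, 14 and 21 can be sketched as follows. The binary-mask representation of the hair region, the angular resolution, and the one-pixel ray stepping are illustrative assumptions; the claims only fix *what* is recorded (origin-to-inner, origin-to-outer, and inner-to-outer distances per angle):

```python
import numpy as np

def hair_descriptor(mask, origin, n_angles=360, r_max=None):
    """Ray-scan feature descriptor of a hair region.

    mask:   2-D boolean array, True on hair pixels.
    origin: (x, y) ray origin, e.g. the midpoint between the two
            mouth-corner feature points.
    For each angle, the first mask hit along the ray approximates the
    inner contour and the last hit the outer contour.
    """
    h, w = mask.shape
    if r_max is None:
        r_max = int(np.hypot(h, w))
    ox, oy = origin
    inner = np.zeros(n_angles)
    outer = np.zeros(n_angles)
    width = np.zeros(n_angles)
    for i in range(n_angles):
        theta = 2.0 * np.pi * i / n_angles
        dx, dy = np.cos(theta), np.sin(theta)
        hits = []
        for r in range(r_max):
            x = int(round(ox + r * dx))
            y = int(round(oy + r * dy))
            if not (0 <= x < w and 0 <= y < h):
                break
            if mask[y, x]:
                hits.append(r)
        if hits:
            inner[i] = hits[0]              # origin -> inner contour
            outer[i] = hits[-1]             # origin -> outer contour
            width[i] = hits[-1] - hits[0]   # inner -> outer contour
    return np.concatenate([inner, outer, width])
```

Two hairstyles with similar silhouettes then yield numerically close descriptor vectors, which is what makes the nearest-neighbor matching of the previous claims meaningful.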
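The bounding-box expansion of claims 10 and 17 amounts to growing the detected face box about its centre until it also covers the hair and some background, then clipping to the image. A small sketch; the scale factor is an assumption, as the claims do not state how much the box is expanded:

```python
def expand_box(box, img_w, img_h, scale=1.8):
    """Expand a face bounding box (x, y, w, h) about its centre.

    The enlarged box is intended to cover the face region, the hair
    region and part of the background, and is clipped to the image.
    """
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0
    new_w, new_h = w * scale, h * scale
    new_x = max(0.0, cx - new_w / 2.0)
    new_y = max(0.0, cy - new_h / 2.0)
    new_w = min(img_w - new_x, new_w)
    new_h = min(img_h - new_y, new_h)
    return (new_x, new_y, new_w, new_h)
```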
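Claims 11 and 18 label each pixel of the first region image as foreground or background by its "degree of match" with seed pixels, but do not define that measure. One deliberately simple stand-in, shown below, compares each pixel with the mean color of each seed set (production systems typically use a richer model such as per-class Gaussian mixtures, as in GrabCut-style segmentation):

```python
import numpy as np

def split_foreground(pixels, fg_seed, bg_seed):
    """Label pixels as foreground (True) or background (False).

    pixels:  N x 3 array of colors from the first region image.
    fg_seed: colors sampled at eye/nose/mouth feature points.
    bg_seed: colors sampled outside the first-region box.
    Each pixel is assigned to whichever seed mean it is closer to.
    """
    fg_mean = fg_seed.mean(axis=0)
    bg_mean = bg_seed.mean(axis=0)
    d_fg = np.linalg.norm(pixels - fg_mean, axis=1)
    d_bg = np.linalg.norm(pixels - bg_mean, axis=1)
    return d_fg <= d_bg
```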
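The HSV skin extraction of claims 12 and 19 can be sketched with the standard library's `colorsys` conversion. The threshold ranges below are illustrative assumptions only; the patent states that skin pixels are selected by their values on the three HSV components but does not publish the thresholds:

```python
import colorsys
import numpy as np

def skin_mask(rgb):
    """Boolean mask of candidate facial-skin pixels.

    rgb: H x W x 3 float array with channel values in [0, 1].
    Each pixel is converted to HSV and tested against assumed
    skin-tone ranges: reddish hue, moderate saturation, bright value.
    """
    h, w, _ = rgb.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            hue, sat, val = colorsys.rgb_to_hsv(*rgb[y, x])
            mask[y, x] = hue < 0.14 and 0.15 < sat < 0.75 and val > 0.35
    return mask
```

A vectorized conversion (e.g. OpenCV's `cvtColor` with `COLOR_RGB2HSV`) would replace the per-pixel loop for real images; the per-component test itself is unchanged.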
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/107121 WO2018094653A1 (en) | 2016-11-24 | 2016-11-24 | User hair model re-establishment method and apparatus, and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108463823A true CN108463823A (en) | 2018-08-28 |
CN108463823B CN108463823B (en) | 2021-06-01 |
Family
ID=62194696
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201680060827.9A Active CN108463823B (en) | 2016-11-24 | 2016-11-24 | Reconstruction method and device of user hair model and terminal |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108463823B (en) |
WO (1) | WO2018094653A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110910487A (en) * | 2018-09-18 | 2020-03-24 | Oppo广东移动通信有限公司 | Construction method, construction apparatus, electronic apparatus, and computer-readable storage medium |
CN111510769A (en) * | 2020-05-21 | 2020-08-07 | 广州华多网络科技有限公司 | Video image processing method and device and electronic equipment |
CN112862807A (en) * | 2021-03-08 | 2021-05-28 | 网易(杭州)网络有限公司 | Data processing method and device based on hair image |
CN113269822A (en) * | 2021-05-21 | 2021-08-17 | 山东大学 | Person hair style portrait reconstruction method and system for 3D printing |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109408653B (en) | 2018-09-30 | 2022-01-28 | 叠境数字科技(上海)有限公司 | Human body hairstyle generation method based on multi-feature retrieval and deformation |
CN109299323B (en) * | 2018-09-30 | 2021-05-25 | Oppo广东移动通信有限公司 | Data processing method, terminal, server and computer storage medium |
CN109544445B (en) * | 2018-12-11 | 2023-04-07 | 维沃移动通信有限公司 | Image processing method and device and mobile terminal |
CN113763228B (en) * | 2020-06-01 | 2024-03-19 | 北京达佳互联信息技术有限公司 | Image processing method, device, electronic equipment and storage medium |
CN111833240B (en) * | 2020-06-03 | 2023-07-25 | 北京百度网讯科技有限公司 | Face image conversion method and device, electronic equipment and storage medium |
CN113538455B (en) * | 2021-06-15 | 2023-12-12 | 聚好看科技股份有限公司 | Three-dimensional hairstyle matching method and electronic equipment |
CN113962845B (en) * | 2021-08-25 | 2023-08-29 | 北京百度网讯科技有限公司 | Image processing method, image processing apparatus, electronic device, and storage medium |
CN113744286A (en) * | 2021-09-14 | 2021-12-03 | Oppo广东移动通信有限公司 | Virtual hair generation method and device, computer readable medium and electronic equipment |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101354743A (en) * | 2007-08-09 | 2009-01-28 | 湖北莲花山计算机视觉和信息科学研究院 | Image base for human face image synthesis |
CN101593365A (en) * | 2009-06-19 | 2009-12-02 | 电子科技大学 | A kind of method of adjustment of universal three-dimensional human face model |
CN101630363A (en) * | 2009-07-13 | 2010-01-20 | 中国船舶重工集团公司第七○九研究所 | Rapid detection method of face in color image under complex background |
CN101923637A (en) * | 2010-07-21 | 2010-12-22 | 康佳集团股份有限公司 | Mobile terminal as well as human face detection method and device thereof |
CN102419868A (en) * | 2010-09-28 | 2012-04-18 | 三星电子株式会社 | Device and method for modeling 3D (three-dimensional) hair based on 3D hair template |
CN102567998A (en) * | 2012-01-06 | 2012-07-11 | 西安理工大学 | Head-shoulder sequence image segmentation method based on double-pattern matching and edge thinning |
CN103235931A (en) * | 2013-03-29 | 2013-08-07 | 天津大学 | Human eye fatigue detecting method |
CN103366400A (en) * | 2013-07-24 | 2013-10-23 | 深圳市华创振新科技发展有限公司 | Method for automatically generating three-dimensional head portrait |
CN103400110A (en) * | 2013-07-10 | 2013-11-20 | 上海交通大学 | Abnormal face detection method in front of ATM (automatic teller machine) |
CN103905733A (en) * | 2014-04-02 | 2014-07-02 | 哈尔滨工业大学深圳研究生院 | Method and system for conducting real-time tracking on faces by monocular camera |
CN104157001A (en) * | 2014-08-08 | 2014-11-19 | 中科创达软件股份有限公司 | Method and device for drawing head caricature |
CN105139415A (en) * | 2015-09-29 | 2015-12-09 | 小米科技有限责任公司 | Foreground and background segmentation method and apparatus of image, and terminal |
CN105389548A (en) * | 2015-10-23 | 2016-03-09 | 南京邮电大学 | Love and marriage evaluation system and method based on face recognition |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103065360B (en) * | 2013-01-16 | 2016-08-24 | 中国科学院重庆绿色智能技术研究院 | A kind of hair shape effect map generalization method and system |
CN105279186A (en) * | 2014-07-17 | 2016-01-27 | 腾讯科技(深圳)有限公司 | Image processing method and system |
CN105069180A (en) * | 2015-06-19 | 2015-11-18 | 上海卓易科技股份有限公司 | Hair style design method and system |
- 2016-11-24 CN CN201680060827.9A patent/CN108463823B/en active Active
- 2016-11-24 WO PCT/CN2016/107121 patent/WO2018094653A1/en active Application Filing
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101354743A (en) * | 2007-08-09 | 2009-01-28 | 湖北莲花山计算机视觉和信息科学研究院 | Image base for human face image synthesis |
CN101593365A (en) * | 2009-06-19 | 2009-12-02 | 电子科技大学 | A kind of method of adjustment of universal three-dimensional human face model |
CN101630363A (en) * | 2009-07-13 | 2010-01-20 | 中国船舶重工集团公司第七○九研究所 | Rapid detection method of face in color image under complex background |
CN101923637A (en) * | 2010-07-21 | 2010-12-22 | 康佳集团股份有限公司 | Mobile terminal as well as human face detection method and device thereof |
CN102419868A (en) * | 2010-09-28 | 2012-04-18 | 三星电子株式会社 | Device and method for modeling 3D (three-dimensional) hair based on 3D hair template |
CN102567998A (en) * | 2012-01-06 | 2012-07-11 | 西安理工大学 | Head-shoulder sequence image segmentation method based on double-pattern matching and edge thinning |
CN103235931A (en) * | 2013-03-29 | 2013-08-07 | 天津大学 | Human eye fatigue detecting method |
CN103400110A (en) * | 2013-07-10 | 2013-11-20 | 上海交通大学 | Abnormal face detection method in front of ATM (automatic teller machine) |
CN103366400A (en) * | 2013-07-24 | 2013-10-23 | 深圳市华创振新科技发展有限公司 | Method for automatically generating three-dimensional head portrait |
CN103905733A (en) * | 2014-04-02 | 2014-07-02 | 哈尔滨工业大学深圳研究生院 | Method and system for conducting real-time tracking on faces by monocular camera |
CN104157001A (en) * | 2014-08-08 | 2014-11-19 | 中科创达软件股份有限公司 | Method and device for drawing head caricature |
CN105139415A (en) * | 2015-09-29 | 2015-12-09 | 小米科技有限责任公司 | Foreground and background segmentation method and apparatus of image, and terminal |
CN105389548A (en) * | 2015-10-23 | 2016-03-09 | 南京邮电大学 | Love and marriage evaluation system and method based on face recognition |
Non-Patent Citations (2)
Title |
---|
JIN-LI SUO ET AL.: "Design sparse features for age estimation using hierarchical face model", 8th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2008), Amsterdam, The Netherlands * |
JIN Yaoli et al.: "Automatic extraction of hairstyle contours under complex backgrounds", Journal of System Simulation * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110910487A (en) * | 2018-09-18 | 2020-03-24 | Oppo广东移动通信有限公司 | Construction method, construction apparatus, electronic apparatus, and computer-readable storage medium |
CN110910487B (en) * | 2018-09-18 | 2023-07-25 | Oppo广东移动通信有限公司 | Construction method, construction device, electronic device, and computer-readable storage medium |
CN111510769A (en) * | 2020-05-21 | 2020-08-07 | 广州华多网络科技有限公司 | Video image processing method and device and electronic equipment |
CN111510769B (en) * | 2020-05-21 | 2022-07-26 | 广州方硅信息技术有限公司 | Video image processing method and device and electronic equipment |
CN112862807A (en) * | 2021-03-08 | 2021-05-28 | 网易(杭州)网络有限公司 | Data processing method and device based on hair image |
CN113269822A (en) * | 2021-05-21 | 2021-08-17 | 山东大学 | Person hair style portrait reconstruction method and system for 3D printing |
Also Published As
Publication number | Publication date |
---|---|
WO2018094653A1 (en) | 2018-05-31 |
CN108463823B (en) | 2021-06-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108463823A (en) | A kind of method for reconstructing, device and the terminal of user's Hair model | |
KR102045695B1 (en) | Facial image processing method and apparatus, and storage medium | |
CN109829930B (en) | Face image processing method and device, computer equipment and readable storage medium | |
CN109859098B (en) | Face image fusion method and device, computer equipment and readable storage medium | |
US10403036B2 (en) | Rendering glasses shadows | |
EP3992919B1 (en) | Three-dimensional facial model generation method and apparatus, device, and medium | |
CN108550176A (en) | Image processing method, equipment and storage medium | |
US10824910B2 (en) | Image processing method, non-transitory computer readable storage medium and image processing system | |
CN113628327B (en) | Head three-dimensional reconstruction method and device | |
JP2015215895A (en) | Depth value restoration method of depth image, and system thereof | |
JP2024500896A (en) | Methods, systems and methods for generating 3D head deformation models | |
CN116997933A (en) | Method and system for constructing facial position map | |
US20190197204A1 (en) | Age modelling method | |
CN108475424A (en) | Methods, devices and systems for 3D feature trackings | |
CN111652123A (en) | Image processing method, image synthesizing method, image processing apparatus, image synthesizing apparatus, and storage medium | |
KR20230110787A (en) | Methods and systems for forming personalized 3D head and face models | |
CN111080746A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
JP2024503794A (en) | Method, system and computer program for extracting color from two-dimensional (2D) facial images | |
CN109919030A (en) | Black eye kind identification method, device, computer equipment and storage medium | |
CN114155569B (en) | Cosmetic progress detection method, device, equipment and storage medium | |
CN115270184A (en) | Video desensitization method, vehicle video desensitization method and vehicle-mounted processing system | |
CN116580445B (en) | Large language model face feature analysis method, system and electronic equipment | |
CN116977539A (en) | Image processing method, apparatus, computer device, storage medium, and program product | |
CN113837018B (en) | Cosmetic progress detection method, device, equipment and storage medium | |
CN115861122A (en) | Face image processing method and device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20210428 Address after: Unit 3401, unit a, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong 518040 Applicant after: Honor Device Co.,Ltd. Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen Applicant before: HUAWEI TECHNOLOGIES Co.,Ltd. |
GR01 | Patent grant | ||