CN110349269A - Method and system for trying on a target wearable item - Google Patents
Method and system for trying on a target wearable item
- Publication number
- CN110349269A CN110349269A CN201910425132.1A CN201910425132A CN110349269A CN 110349269 A CN110349269 A CN 110349269A CN 201910425132 A CN201910425132 A CN 201910425132A CN 110349269 A CN110349269 A CN 110349269A
- Authority
- CN
- China
- Prior art keywords
- dimensional model
- target
- face
- facial image
- facial
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2016—Rotation, translation, scaling
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Architecture (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present invention relates to a method and system for trying on a target wearable item. The try-on method includes: acquiring a facial image containing the face of a target subject; mapping the facial image into a 3D scene, and building a 3D model of the target wearable item in the 3D scene; when the pose of the target subject's face changes, obtaining position information and facial parameter information of the face from the facial image; and adjusting the 3D model to a preset position on the facial image according to the position information and the facial parameter information. By acquiring a facial image of the target subject's face, constructing a 3D scene, mapping the facial image into it, building a 3D model of the target wearable item, and, whenever the face's pose changes, obtaining the position information and facial parameter information, rotating the model, and moving it to the preset position for try-on, embodiments of the invention let the user view the wearing effect from multiple angles and improve the realism of the try-on.
Description
Technical field
The present invention relates to the technical field of face recognition, and in particular to a method and system for trying on a target wearable item.
Background art
In general, different users have different facial structures. To choose suitable glasses, a wig, or another wearable item, most users must visit a physical store to try items on and make a selection, which wastes the user's time and makes the process inefficient.
With the development of Internet technology, many users now shop online. When choosing glasses or a wig online, however, a user can only estimate how an item will look based on past experience and the model's face shape, which often leads to dissatisfaction with the purchased product.
The prior art renders the wearing effect on static pictures, so the user can only observe a static result: the item is overlaid on the face at the single angle from which the photo was taken. Because the item is just a picture, the wearing effect cannot be observed from the side; the user obtains little information about the wearing effect and cannot judge the overall result.
Summary of the invention
To solve the problems of the prior art, at least one embodiment of the present invention provides a method and system for trying on a target wearable item.
In a first aspect, an embodiment of the invention provides a try-on method for a target wearable item, the try-on method comprising:
acquiring a facial image containing the face of a target subject;
mapping the facial image into a 3D scene, and building a 3D model of the target wearable item in the 3D scene;
judging, from the facial image, whether the pose of the target subject's face has changed;
when the pose of the target subject's face changes, obtaining position information and facial parameter information of the face from the facial image;
adjusting the 3D model to a preset position on the facial image according to the position information and the facial parameter information.
Based on the above technical solution, embodiments of the present invention can be further improved as follows.
With reference to the first aspect, in a first embodiment of the first aspect, obtaining the position information and facial parameter information of the target subject's face from the facial image comprises: processing the facial image with a face tracking algorithm; and obtaining position information including the center point of a preset region of the face, and facial parameter information including the face's rotation angle and the face's width.
With reference to the first embodiment of the first aspect, in a second embodiment of the first aspect, adjusting the 3D model to the preset position on the facial image according to the position information and the facial parameter information comprises: rotating and scaling the 3D model according to the face's rotation angle and the face's width, based on a 2D-to-3D conversion algorithm; and converting the center point of the preset position into 3D space coordinates, then moving the rotated and scaled 3D model to the position corresponding to those coordinates.
With reference to the first aspect, in a third embodiment of the first aspect, the try-on method further comprises: when the 3D model is at the preset position on the facial image, reducing the model's transparency to a first preset value; and when the 3D model is not at the preset position on the facial image, raising the model's transparency to a second preset value.
With reference to the first aspect or any of its first, second, or third embodiments, in a fourth embodiment of the first aspect, the try-on method further comprises: when an operation instruction to replace the target wearable item is received, deleting the 3D model and constructing a new 3D model corresponding to the new target wearable item; and adjusting the new 3D model to the preset position on the facial image according to the position information and the facial parameter information.
With reference to the fourth embodiment of the first aspect, in a fifth embodiment of the first aspect, the target wearable item is glasses or a wig.
In a second aspect, an embodiment of the invention provides a try-on system for a target wearable item, the system comprising:
a camera unit for acquiring a facial image containing the face of a target subject;
a modeling unit for mapping the facial image into a 3D scene and building a 3D model of the target wearable item in the 3D scene;
a change monitoring unit for judging, from the facial image, whether the pose of the target subject's face has changed;
an acquiring unit for obtaining, when the pose of the target subject's face changes, the position information and facial parameter information of the face from the facial image;
a processing unit for adjusting the 3D model to a preset position on the facial image according to the position information and the facial parameter information.
With reference to the second aspect, in a first embodiment of the second aspect, the acquiring unit is specifically configured to process the facial image with a face tracking algorithm, and to obtain position information including the center point of a preset region of the face, and facial parameter information including the face's rotation angle and the face's width.
With reference to the first embodiment of the second aspect, in a second embodiment of the second aspect, the processing unit is specifically configured to rotate and scale the 3D model according to the face's rotation angle and the face's width based on a 2D-to-3D conversion algorithm, to convert the center point of the preset position into 3D space coordinates, and to move the rotated and scaled 3D model to the position corresponding to those coordinates.
With reference to the second aspect, in a third embodiment of the second aspect, the processing unit is further configured to reduce the 3D model's transparency to a first preset value when the model is at the preset position on the facial image, and to raise its transparency to a second preset value when it is not.
With reference to the second aspect or any of its first, second, or third embodiments, in a fourth embodiment of the second aspect, the try-on system further comprises an operation instruction receiving unit for receiving an instruction to replace the target wearable item; the modeling unit is further configured to delete the 3D model and construct a new 3D model corresponding to the new target wearable item when the replacement instruction is received; and the processing unit is further configured to adjust the new 3D model to the preset position on the facial image according to the position information and the facial parameter information.
Compared with the prior art, the above technical solution of the present invention has the following advantage: by acquiring a facial image of the target subject's face, constructing a 3D scene, mapping the facial image into it, building a 3D model of the target wearable item, and, whenever the face's pose changes, obtaining the position information and facial parameter information, rotating the model, and moving it to the preset position for try-on, embodiments of the invention let the user view the wearing effect from multiple angles and improve the realism of the try-on.
Brief description of the drawings
Fig. 1 is a schematic diagram of a terminal provided by the embodiments of the present invention;
Fig. 2 is a flow diagram of a try-on method for a target wearable item provided by an embodiment of the present invention;
Fig. 3 is a flow diagram of a try-on method for a target wearable item provided by another embodiment of the present invention;
Fig. 4 is a flow diagram of a try-on method for a target wearable item provided by a further embodiment of the present invention;
Fig. 5 is a flow diagram of a glasses try-on method provided by a further embodiment of the present invention;
Fig. 6 is a flow diagram of a glasses try-on method provided by a further embodiment of the present invention;
Fig. 7 is a structural diagram of a try-on system for a target wearable item provided by a further embodiment of the present invention;
Fig. 8 is a structural diagram of a try-on system for a target wearable item provided by a further embodiment of the present invention.
Specific embodiment
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
As shown in Fig. 1, an embodiment of the present invention provides a hardware structure diagram of a terminal for implementing the various embodiments of the present invention. The terminal includes a camera device, a display device, a processor, a memory, and a communication bus, and the electronic components communicate with one another through the communication bus. The terminal may be a mobile terminal such as a mobile phone, tablet computer, laptop, palmtop computer, personal digital assistant (PDA), portable media player (PMP), navigation device, wearable device, smart bracelet, or pedometer, or a fixed terminal such as a digital TV or desktop computer.
As shown in Fig. 2, an embodiment of the present invention provides a try-on method for a target wearable item, the method comprising:
S11: acquire a facial image containing the face of a target subject.
In this embodiment, the facial image containing the target subject's face may be captured by a camera device, or obtained from the Internet. The target subject may be a person standing in a preset area, or a person at a preset distance from the camera device.
S12: map the facial image into a 3D scene, and build a 3D model of the target wearable item in the 3D scene.
In this embodiment, the 3D scene is built with virtual reality technology. Virtual reality is a computer simulation technique that can create and present a virtual world: the computer generates a simulated environment, a multi-source-information-fused, interactive 3D dynamic view combined with entity-behavior simulation that immerses the user in the environment. The facial image is mapped into the 3D scene, and the 3D model of the target wearable item is loaded into the scene. The 3D model may be created in advance and loaded directly when needed, or loaded in real time according to the user's selection. The target wearable item may be glasses, a wig, or any other item that can be worn on the face.
S13: judge, from the facial image, whether the pose of the target subject's face has changed.
In this embodiment, whether the pose of the target subject's face in the facial image has changed is confirmed in real time. For example, image recognition technology may be used to confirm whether the pose has changed, or a face tracking algorithm may track the pose of the face in the facial image and confirm a change by comparison. The pose includes position information and posture information; when either the position or the posture changes, the pose of the target subject's face is deemed to have changed.
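The pose check in step S13 amounts to comparing the tracker's latest output against the previous one. The sketch below is a minimal illustration; the `FacePose` record and the tolerance values are assumptions, since the patent does not specify the tracker's output format or thresholds:

```python
from dataclasses import dataclass

@dataclass
class FacePose:
    x: float      # face center in image coordinates (pixels) - assumed layout
    y: float
    angle: float  # in-plane rotation of the face, degrees

def pose_changed(prev: FacePose, curr: FacePose,
                 pos_tol: float = 2.0, angle_tol: float = 1.0) -> bool:
    """Per step S13: the pose is deemed changed when either the position
    or the posture (rotation) moves beyond a small tolerance."""
    moved = abs(curr.x - prev.x) > pos_tol or abs(curr.y - prev.y) > pos_tol
    rotated = abs(curr.angle - prev.angle) > angle_tol
    return moved or rotated
```

A change in either component alone is enough to trigger re-adjustment of the model, which matches the "any change in position or posture" rule above.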
S14: when the pose of the target subject's face changes, obtain the position information and facial parameter information of the face from the facial image.
In this embodiment, the facial image may be analyzed with a face tracking algorithm to obtain the position information and facial parameter information of the target subject's face. For example, the position information may include the face's planar position, the positions (e.g., centers) of the two eyeballs, the hairline position, and the ear positions; the facial parameter information may include the face's rotation angle relative to the camera's line of sight, the face's width, the hairline width, and the ear width. Of course, to suit different target wearable items, the position and parameter information of the corresponding part of the user's face must be obtained according to the item's wearing position.
Specifically, when the pose of the target subject's face changes, the facial image is processed with the face tracking algorithm to obtain position information including the center point of the preset region of the face, and facial parameter information including the face's rotation angle and the face's width.
S15: adjust the 3D model to the preset position on the facial image according to the position information and the facial parameter information.
In this embodiment, the position and posture of the 3D model are adjusted using the position information and facial parameter information obtained in the preceding steps, until the model reaches the preset position on the facial image. For example, the model may first be moved to the preset position on the facial image, and its posture then adjusted according to the facial parameter information, e.g., by rotating the model.
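The move-then-orient adjustment of step S15 can be sketched as below. The dict-based model representation and the reference width are illustrative assumptions, not part of the patent:

```python
def fit_model(model: dict, preset_pos: tuple, face_angle: float,
              face_width: float, ref_width: float = 140.0) -> dict:
    """Step S15 sketch: move the 3D model to the preset position first,
    then adjust its posture - rotate by the face's angle and scale by the
    face's width relative to the width the model was authored for."""
    model["position"] = preset_pos            # move to the preset position
    model["rotation"] = face_angle            # match the face's rotation angle
    model["scale"] = face_width / ref_width   # match the face's width
    return model

# Example: a face as wide as the reference width needs no scaling.
fitted = fit_model({"position": None, "rotation": 0.0, "scale": 1.0},
                   (0.1, 0.2, 2.0), 15.0, 140.0)
```

As the text notes later (step S36), the same result can be reached by scaling and rotating before moving; the two orderings are interchangeable here because the operations act on independent fields.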
As shown in Fig. 3, in this embodiment, the try-on method further comprises:
S21: when an operation instruction to replace the target wearable item is received, delete the 3D model and construct a new 3D model corresponding to the new target wearable item.
In this embodiment, the operation instruction entered by the user is received by an instruction receiving device. For example, the user may select the desired target wearable item on a touch screen, or select a new item via a virtual or physical button. When the replacement instruction is received, the original 3D model is deleted, a new 3D model is built from the newly selected item, and the try-on proceeds in the 3D scene.
S22: adjust the new 3D model to the preset position on the facial image according to the position information and the facial parameter information.
In this embodiment, the position and posture of the new 3D model are adjusted using the position information and facial parameter information in the same way as above, and the details are not repeated here.
In this embodiment, when the 3D model is at the preset position on the facial image, its transparency is reduced to a first preset value; when it is not, its transparency is raised to a second preset value. The first preset value may be 0, making the model fully opaque so the user can see the try-on effect; the second preset value may be 100, making the model fully transparent so that, whenever it is not at the preset position, it does not interfere with the user's experience.
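The transparency rule above is a two-state toggle. The sketch below uses the example values 0 (opaque) and 100 (transparent) given in the text:

```python
OPAQUE = 0         # first preset value: model visible at the preset position
TRANSPARENT = 100  # second preset value: model hidden elsewhere

def update_transparency(at_preset_position: bool) -> int:
    """Lower the model's transparency when it sits at the preset position,
    raise it otherwise so the model does not disturb the user."""
    return OPAQUE if at_preset_position else TRANSPARENT
```

In a real renderer these values would map onto an alpha channel; the integers here simply mirror the patent's example preset values.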
In this embodiment, the target wearable item may be glasses, a wig, or another facial wearable item. By tracking the face in real time, glasses models can be replaced automatically and a real-time, lifelike wearing effect achieved, unconstrained by facial movement or position changes. Using face tracking technology, the glasses model automatically follows the face and fits to the eye positions whenever the pose of the face changes, without any action from the user, producing a lifelike wearing effect.
As shown in Fig. 4, an embodiment of the present invention provides a try-on method for a target wearable item, comprising:
S31: acquire a facial image containing the face of a target subject.
For details of step S31, see the description of step S11, which is not repeated here.
S32: map the facial image into a 3D scene, and build a 3D model of the target wearable item in the 3D scene.
For details of step S32, see the description of step S12, which is not repeated here.
S33: confirm, from the facial image, whether the pose of the target subject's face has changed.
For details of step S33, see the description of step S13, which is not repeated here.
S34: when the pose of the target subject's face changes, obtain the position information and facial parameter information of the face from the facial image.
For details of step S34, see the description of step S14, which is not repeated here.
S35: rotate and scale the 3D model according to the face's rotation angle and the face's width, based on a 2D-to-3D conversion algorithm.
In this embodiment, structure reconstruction is an important research direction in computer vision, widely applied in heritage reconstruction, film production, city modeling, and other fields. Because this technique can recover depth cues of a scene from pictures captured by a camera, whether the scene is static or in motion, it fits one case of 2D/3D conversion: the 3D model is rotated and scaled according to the face's rotation angle and the face's width to match the face angle of the current user.
S36: convert the center point of the preset position into 3D space coordinates, and move the rotated and scaled 3D model to the position corresponding to those coordinates.
In this embodiment, the 3D model, whose posture has been adjusted by the rotation and scaling, is moved to the position corresponding to the 3D space coordinates, completing the try-on of the target wearable item. Alternatively, the model may first be moved to the position corresponding to the 3D space coordinates and then rotated and scaled to complete the try-on.
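One common way to convert a 2D center point into 3D space coordinates, as step S36 requires, is pinhole back-projection. The patent only names a "standard 3D algorithm", so the camera intrinsics and the assumed depth below are illustrative:

```python
def screen_to_world(u: float, v: float, depth: float,
                    fx: float, fy: float, cx: float, cy: float) -> tuple:
    """Back-project pixel (u, v) at a known depth into camera-space
    (x, y, z) using pinhole camera intrinsics: focal lengths fx, fy
    and principal point (cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the principal point projects straight onto the optical axis.
center = screen_to_world(320.0, 240.0, 2.0, 500.0, 500.0, 320.0, 240.0)
```

The rotated and scaled 3D model would then be moved to the returned coordinates to finish the placement.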
As shown in Fig. 5, in a specific embodiment in which the target wearable item is glasses, an embodiment of the present invention provides the following implementation steps of the try-on method:
S41: aim the camera at a preset area.
In this embodiment, the camera is aimed at a preset area so that a facial image of a user standing in the area can be acquired.
S42: build a 3D scene, and create a glasses 3D model in the 3D scene.
In this embodiment, the 3D scene may be built with VR, AR, or MR technology, and the glasses 3D model is created from the parameters of the glasses the user selects to try on. The glasses 3D model loaded in this step may be set to a hidden state, so that the user does not perceive the model before it reaches the preset position, which would degrade the user experience.
S43: capture color image data containing the user's face in real time through the camera, and process it.
As shown in Fig. 6, in this embodiment, the color image data is processed as follows:
S51: map the captured color image data onto the background of the 3D scene.
Mapping the color image data onto the background of the 3D scene displays the current user's facial image in the scene.
S52: apply a general face algorithm to the color image data to obtain three items of data: the center point coordinates of the two eyeballs, the face's rotation angle, and the face's width.
S53: using a standard 3D algorithm, convert the eyeball center coordinates into 3D space coordinates, and adjust the glasses 3D model to those coordinates.
S54: rotate the glasses 3D model according to the face's rotation angle, so that the glasses 3D model is parallel to the user's face in the color image data.
S55: using a standard 3D algorithm, convert the face's width into a 3D space length, and scale the glasses 3D model accordingly.
For example, in this embodiment, after a face is recognized, the face algorithm yields the position P and rotation angle A of the user's face. In the subsequent steps, the glasses 3D model is moved to position P and rotated by angle A, completing its adjustment. Because the deflection of the head in the 3D scene affects how the glasses model is displayed, the model's rotation must be converted from 2D to 3D according to the face's width, which can be realized with a 2D-to-3D conversion algorithm.
In this embodiment, the above adjustments of the position and posture of the glasses 3D model may be performed in any order: the posture may be adjusted first and the position afterwards. In addition, since the glasses 3D model was set to a hidden state in the steps above, it can be set to a non-transparent state once the position and posture adjustments are complete, letting the user see a glasses 3D model fitted to the face and completing the try-on.
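Steps S51–S55 together can be sketched as a single update routine on a model object. The `GlassesModel` class, its field layout, and the width-to-scale rule are assumptions for illustration; the patent specifies only the three inputs and the move/rotate/scale/reveal sequence:

```python
class GlassesModel:
    def __init__(self, natural_width: float):
        self.natural_width = natural_width  # width the model was authored at
        self.position = None
        self.angle = 0.0
        self.scale = 1.0
        self.hidden = True  # loaded hidden, per step S42

    def update(self, eye_center_3d: tuple, face_angle: float,
               face_width_3d: float) -> None:
        """S53: move to the eye-center 3D coordinates; S54: rotate so the
        model is parallel to the face; S55: scale to the face's 3D width;
        then reveal the model once adjustment is complete."""
        self.position = eye_center_3d
        self.angle = face_angle
        self.scale = face_width_3d / self.natural_width
        self.hidden = False  # adjustments done: set non-transparent

# Example: a face exactly as wide as the model's natural width.
glasses = GlassesModel(natural_width=0.14)
glasses.update((0.0, 0.0, 2.0), 10.0, 0.14)
```

Calling `update` on every processed frame is what makes the model follow the face, as described in the next paragraph.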
Through the above steps, this solution lets the user try on glasses in a 3D scene. The try-on method may further include: when the position and posture of the user's face in the color image data change, re-acquiring the position and posture information of the face and adjusting the glasses model's posture accordingly, so that the glasses model follows the movement of the user's face.
In this embodiment, the try-on method further includes: when the user selects a new pair of glasses, receiving the user's operation instruction, obtaining the corresponding glasses parameters from the instruction, generating a new glasses 3D model from those parameters, and replacing the glasses 3D model of the above embodiment so that the model in the scene is switched to the new one in real time. The face is unaffected during replacement; the user need not return to any particular state and sees the result immediately after the switch.
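The replacement flow deletes the old model and installs a new one while leaving the face background of the scene untouched. The scene dictionary and the `build_glasses` helper below are hypothetical, sketched only to show this separation:

```python
def build_glasses(params: dict) -> dict:
    """Hypothetical loader: build a glasses 3D model from its parameters."""
    return {"sku": params["sku"], "position": None, "hidden": True}

def replace_glasses(scene: dict, new_params: dict) -> dict:
    """Delete the current glasses model and install the new one; the face
    background in the scene is left untouched, so the user need not reset."""
    scene.pop("glasses", None)                   # delete the old model
    scene["glasses"] = build_glasses(new_params) # build and load the new one
    return scene

demo_scene = {"background": "face", "glasses": {"sku": "old"}}
demo_scene = replace_glasses(demo_scene, {"sku": "new-001"})
```

Because only the `glasses` entry changes, the tracked face and scene background persist across the swap, matching the "result seen immediately" behavior above.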
In this embodiment, tracking the face in real time enables automatic replacement of the glasses model and a real-time, lifelike wearing effect unconstrained by facial movement or position changes. Using face tracking technology, the glasses 3D model automatically follows the face and fits to the eye positions without any action from the user, producing a lifelike wearing effect.
In a specific embodiment, the target wearable item may also be a wig, earrings, eyelashes, or another facial wearable item; the try-on of such items may follow the try-on method for a target wearable item provided by the above embodiments. More preferably, a facial makeup model may also be built and tried on, letting the user confirm the makeup effect and avoiding repeatedly applying and removing makeup and accessories.
As shown in Fig. 7, an embodiment of the present invention provides a try-on system for a target wearable item, comprising: a camera unit, a modeling unit, a change monitoring unit, an acquiring unit, and a processing unit.
In this embodiment, the camera unit acquires a facial image containing the face of a target subject.
In this embodiment, the modeling unit maps the facial image into a 3D scene and builds a 3D model of the target wearable item in the 3D scene.
In this embodiment, the change monitoring unit judges, from the facial image, whether the pose of the target subject's face has changed.
In this embodiment, the acquiring unit obtains, when the pose of the target subject's face changes, the position information and facial parameter information of the face from the facial image.
In this embodiment, the processing unit adjusts the 3D model to the preset position on the facial image according to the position information and the facial parameter information.
In a specific embodiment, the acquiring unit is specifically configured to process the facial image with a face tracking algorithm, and to obtain position information including the center point of the preset region of the face, and facial parameter information including the face's rotation angle and the face's width.
In a specific embodiment, the try-on system further comprises an operation instruction receiving unit for receiving an instruction to replace the target wearable item.
In this embodiment, the modeling unit is further configured to delete the 3D model and construct a new 3D model corresponding to the new target wearable item when the replacement instruction is received.
In this embodiment, the processing unit is further configured to adjust the new 3D model to the preset position on the facial image according to the position information and the facial parameter information.
In a specific embodiment, the processing unit is further configured to reduce the 3D model's transparency to a first preset value when the model is at the preset position on the facial image, and to raise it to a second preset value when it is not.
An embodiment of the present invention provides a try-on system for a target wearable item that differs from the system shown in Fig. 7 in that the processing unit is specifically configured to rotate and scale the 3D model according to the face's rotation angle and the face's width based on a 2D-to-3D conversion algorithm, to convert the center point of the preset position into 3D space coordinates, and to move the rotated and scaled 3D model to the position corresponding to those coordinates.
As shown in Fig. 8, an embodiment of the present invention provides a try-on system for a target wearable item, comprising a memory and a processor; at least one computer program configured to be executed by the processor is stored in the memory, and the computer program is configured to implement the following steps:
acquiring a facial image containing the face of a target subject;
mapping the facial image into a 3D scene, and building a 3D model of the target wearable item in the 3D scene;
confirming, from the facial image, whether the pose of the target subject's face has changed;
when the pose of the target subject's face changes, obtaining the position information and facial parameter information of the face from the facial image;
adjusting the 3D model to the preset position on the facial image according to the position information and the facial parameter information.
An embodiment of the present invention provides a computer-readable storage medium storing a computer program that can be executed by a processor to implement the try-on method for a target wearable item of any of the above embodiments.
The above embodiments may be implemented wholly or partly in software, hardware, firmware, or any combination thereof. When implemented in software, they may be realized wholly or partly as a computer program product comprising one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one web site, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a solid-state disk (SSD)).
Finally, it should be noted that the above embodiments are merely illustrative of the technical solutions of the present invention and are not limiting. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features; such modifications or replacements do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (11)
1. A target-wearable try-on method, characterized in that the try-on method comprises:
obtaining a facial image including the face of a target object;
mapping the facial image into a stereoscopic scene, and building a three-dimensional model of a target wearable in the stereoscopic scene;
judging, according to the facial image, whether the pose of the target object's face has changed;
when the pose of the target object's face changes, obtaining position information and facial parameter information of the target object's face from the facial image;
adjusting the three-dimensional model to a preset position of the facial image according to the position information and the facial parameter information.
2. The target-wearable try-on method according to claim 1, characterized in that obtaining the position information and facial parameter information of the target object's face from the facial image comprises:
processing the facial image based on a face tracking algorithm; and obtaining the position information, which includes the center point position of the preset position of the target object's face, and the facial parameter information, which includes a face rotation angle and a face width.
3. The target-wearable try-on method according to claim 2, characterized in that adjusting the three-dimensional model to the preset position of the facial image according to the position information and the facial parameter information comprises:
rotating and scaling the three-dimensional model by the face rotation angle and the face width based on a 2D-to-3D algorithm;
converting the center point position of the preset position into three-dimensional space coordinates, and moving the rotated and scaled three-dimensional model to the position corresponding to the three-dimensional space coordinates.
4. The target-wearable try-on method according to claim 1, characterized in that the target-wearable try-on method further comprises:
when the three-dimensional model is at the preset position of the facial image, reducing the transparency of the three-dimensional model to a first preset value;
when the three-dimensional model is not at the preset position of the facial image, increasing the transparency of the three-dimensional model to a second preset value.
5. The target-wearable try-on method according to any one of claims 1 to 4, characterized in that the target-wearable try-on method further comprises:
when an operation instruction to replace the target wearable is received, deleting the three-dimensional model and constructing a new three-dimensional model corresponding to a new target wearable;
adjusting the new three-dimensional model to the preset position of the facial image according to the position information and the facial parameter information.
6. The target-wearable try-on method according to claim 5, characterized in that the target wearable is glasses or a wig.
7. A target-wearable try-on system, characterized in that the target-wearable try-on system comprises:
a camera unit, configured to obtain a facial image including the face of a target object;
a modeling unit, configured to map the facial image into a stereoscopic scene and build a three-dimensional model of a target wearable in the stereoscopic scene;
a change monitoring unit, configured to judge, according to the facial image, whether the pose of the target object's face has changed;
an acquiring unit, configured to obtain the position information and facial parameter information of the target object's face from the facial image when the pose of the target object's face changes;
a processing unit, configured to adjust the three-dimensional model to a preset position of the facial image according to the position information and the facial parameter information.
8. The target-wearable try-on system according to claim 7, characterized in that the acquiring unit is specifically configured to: process the facial image based on a face tracking algorithm; and obtain the position information, which includes the center point position of the preset position of the target object's face, and the facial parameter information, which includes a face rotation angle and a face width.
9. The target-wearable try-on system according to claim 8, characterized in that the processing unit is specifically configured to: rotate and scale the three-dimensional model by the face rotation angle and the face width based on a 2D-to-3D algorithm; convert the center point position of the preset position into three-dimensional space coordinates; and move the rotated and scaled three-dimensional model to the position corresponding to the three-dimensional space coordinates.
10. The target-wearable try-on system according to claim 7, characterized in that the processing unit is further configured to: when the three-dimensional model is at the preset position of the facial image, reduce the transparency of the three-dimensional model to a first preset value; and when the three-dimensional model is not at the preset position of the facial image, increase the transparency of the three-dimensional model to a second preset value.
11. The target-wearable try-on system according to any one of claims 7 to 10, characterized in that the target-wearable try-on system further comprises: an operation instruction receiving unit, configured to receive an operation instruction to replace the target wearable;
the modeling unit is further configured to delete the three-dimensional model when the operation instruction to replace the target wearable is received, and to construct a new three-dimensional model corresponding to a new target wearable;
the processing unit is further configured to adjust the new three-dimensional model to the preset position of the facial image according to the position information and the facial parameter information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910425132.1A CN110349269A (en) | 2019-05-21 | 2019-05-21 | A kind of target wear try-in method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110349269A true CN110349269A (en) | 2019-10-18 |
Family
ID=68174271
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910425132.1A Pending CN110349269A (en) | 2019-05-21 | 2019-05-21 | A kind of target wear try-in method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110349269A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111862338A (en) * | 2020-06-23 | 2020-10-30 | 深圳市新镜介网络有限公司 | Display method and device for simulating glasses wearing image |
CN112037143A (en) * | 2020-08-27 | 2020-12-04 | 腾讯音乐娱乐科技(深圳)有限公司 | Image processing method and device |
CN112562063A (en) * | 2020-12-08 | 2021-03-26 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for carrying out three-dimensional attempt on object |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103413118A (en) * | 2013-07-18 | 2013-11-27 | 毕胜 | On-line glasses try-on method |
US20140201023A1 (en) * | 2013-01-11 | 2014-07-17 | Xiaofan Tang | System and Method for Virtual Fitting and Consumer Interaction |
CN104408764A (en) * | 2014-11-07 | 2015-03-11 | 成都好视界眼镜有限公司 | Method, device and system for trying on glasses in virtual mode |
CN104881526A (en) * | 2015-05-13 | 2015-09-02 | 深圳彼爱其视觉科技有限公司 | Article wearing method and glasses try wearing method based on 3D (three dimensional) technology |
CN107103513A (en) * | 2017-04-23 | 2017-08-29 | 广州帕克西软件开发有限公司 | A kind of virtual try-in method of glasses |
CN107111370A (en) * | 2014-12-30 | 2017-08-29 | 微软技术许可有限责任公司 | The virtual representation of real world objects |
US10008039B1 (en) * | 2015-12-02 | 2018-06-26 | A9.Com, Inc. | Augmented reality fitting approach |
CN109345337A (en) * | 2018-09-14 | 2019-02-15 | 广州多维魔镜高新科技有限公司 | A kind of online shopping examination method of wearing, virtual mirror, system and storage medium |
CN109508128A (en) * | 2018-11-09 | 2019-03-22 | 北京微播视界科技有限公司 | Search for control display methods, device, equipment and computer readable storage medium |
CN109671159A (en) * | 2018-12-26 | 2019-04-23 | 贵州锦微科技信息有限公司 | The virtual try-in method of ethnic group's hairdressing based on 3D VR technology |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI755671B (en) | Virtual try-on systems and methods for spectacles | |
US9990780B2 (en) | Using computed facial feature points to position a product model relative to a model of a face | |
US11164381B2 (en) | Clothing model generation and display system | |
GB2564745B (en) | Methods for generating a 3D garment image, and related devices, systems and computer program products | |
US9842246B2 (en) | Fitting glasses frames to a user | |
TWI659335B (en) | Graphic processing method and device, virtual reality system, computer storage medium | |
US11733769B2 (en) | Presenting avatars in three-dimensional environments | |
CN109671141B (en) | Image rendering method and device, storage medium and electronic device | |
CN104881526B (en) | Article wearing method based on 3D and glasses try-on method | |
CN110378914A (en) | Rendering method and device, system, display equipment based on blinkpunkt information | |
CN110349269A (en) | A kind of target wear try-in method and system | |
CN107944420A (en) | The photo-irradiation treatment method and apparatus of facial image | |
CN109117779A (en) | One kind, which is worn, takes recommended method, device and electronic equipment | |
JP2022545598A (en) | Virtual object adjustment method, device, electronic device, computer storage medium and program | |
WO2023226454A1 (en) | Product information processing method and apparatus, and terminal device and storage medium | |
CN110348936A (en) | A kind of glasses recommended method, device, system and storage medium | |
CN116012564B (en) | Equipment and method for intelligent fusion of three-dimensional model and live-action photo | |
CN110941974B (en) | Control method and device of virtual object | |
CN114863005A (en) | Rendering method and device for limb special effect, storage medium and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20191018 ||