CN105354792A - Method for trying virtual glasses and mobile terminal - Google Patents

Method for trying virtual glasses and mobile terminal

Info

Publication number
CN105354792A
CN105354792A (application CN201510707664.6A)
Authority
CN
China
Prior art keywords
face
picture
glasses
angle value
central point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510707664.6A
Other languages
Chinese (zh)
Other versions
CN105354792B (en)
Inventor
刘岱昕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Landsky Network Technology Co Ltd
Original Assignee
Shenzhen Landsky Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Landsky Network Technology Co Ltd filed Critical Shenzhen Landsky Network Technology Co Ltd
Priority to CN201510707664.6A priority Critical patent/CN105354792B/en
Publication of CN105354792A publication Critical patent/CN105354792A/en
Application granted granted Critical
Publication of CN105354792B publication Critical patent/CN105354792B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/14 - Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the present invention provides a method for virtually trying on glasses. The method comprises: acquiring an image input by a user; performing face detection on the image; when a human face is detected, determining an angle value and a face shape of the human face and the central points of the two eyes of the human face; determining a target model picture corresponding to the angle value and the face shape; acquiring a glasses style selected by the user; determining, according to the angle value, a glasses picture corresponding to the glasses style; scaling the glasses picture according to the proportional relationship between a second distance in the target model picture and a first distance between the central points of the two eyes of the human face, where the second distance is the distance between the central points of the two eyes of the model in the target model picture; and superimposing the scaled glasses picture on the human face. The method enables a user to view the wearing effect of the glasses more directly, thereby improving the user experience.

Description

Virtual glasses try-on method and mobile terminal
Technical field
The embodiments of the present invention relate to the technical field where computer vision meets virtual try-on, and in particular to a virtual glasses try-on method and a mobile terminal.
Background art
With the rapid development of mobile Internet technology, mobile terminals (such as mobile phones and tablet computers) have become an indispensable part of daily life. Compared with shopping in conventional brick-and-mortar stores, online shopping is not constrained by time or place and offers a wide variety of goods at low prices. However, in online shopping, goods are often presented only through simple pictures or short videos, which makes it difficult for users to select the goods they actually want.
Glasses are one of the many categories of goods sold online and are widely used. When selecting glasses online, however, the wearing effect can only be judged from existing pictures of models, so the presentation is not intuitive. Users therefore find it hard to get the feeling of trying the glasses on themselves during online shopping, which degrades the user experience.
Summary of the invention
The embodiments of the present invention provide a virtual glasses try-on method and a mobile terminal, so that a user can see the wearing effect of the selected glasses more intuitively, thereby improving the user experience.
A first aspect of the embodiments of the present invention provides a virtual glasses try-on method, comprising:
Acquiring an image input by a user;
Performing face detection on the image;
When a face is detected, determining an angle value and a face shape of the face and the central points of the two eyes of the face;
Determining a target model picture corresponding to the angle value and the face shape;
Acquiring a glasses style selected by the user;
Determining, according to the angle value, a glasses picture corresponding to the glasses style;
Scaling the glasses picture according to the proportional relationship between a second distance in the target model picture and a first distance between the central points of the two eyes of the face, where the second distance is the distance between the central points of the two eyes of the model in the target model picture;
Superimposing the scaled glasses picture on the face.
With reference to the first aspect of the embodiments of the present invention, in a first possible implementation of the first aspect, the determining, when a face is detected, an angle value and a face shape of the face, the central points of the two eyes of the face, and a first distance between the two central points comprises:
When multiple faces are detected, selecting a face i from the multiple faces, where the face i is any one of the multiple faces;
Determining an angle value and a face shape of the face i and the central points of the two eyes of the face i.
With reference to the first aspect of the embodiments of the present invention, in a second possible implementation of the first aspect, the determining, according to the angle value, a glasses picture corresponding to the glasses style comprises:
Searching a picture library, according to the angle value, for the glasses picture corresponding to the glasses style.
With reference to the first aspect of the embodiments of the present invention, in a third possible implementation of the first aspect, the superimposing the scaled glasses picture on the face comprises:
Covering the face with the scaled glasses picture according to the positional relationship of the central points of the two eyes of the face.
With reference to the first aspect of the embodiments of the present invention or any one of the first to third possible implementations, in a fourth possible implementation of the first aspect, after the scaled glasses picture is superimposed on the face, the method further comprises:
Acquiring a scene mode selected by the user;
Selecting, from a preset picture library, a glasses picture corresponding to the scene mode and superimposing it on the face.
A second aspect of the embodiments of the present invention provides a mobile terminal, comprising:
An acquiring unit, configured to acquire an image input by a user;
A detecting unit, configured to perform face detection on the image acquired by the acquiring unit;
A first determining unit, configured to, when the detecting unit detects a face, determine an angle value and a face shape of the face and the central points of the two eyes of the face;
A second determining unit, configured to determine a target model picture corresponding to the angle value and the face shape determined by the first determining unit;
The acquiring unit being further configured to:
Acquire a glasses style selected by the user;
A third determining unit, configured to determine, according to the angle value determined by the first determining unit, a glasses picture corresponding to the glasses style acquired by the acquiring unit;
A scaling unit, configured to scale the glasses picture determined by the third determining unit according to the proportional relationship between a second distance in the target model picture determined by the second determining unit and a first distance between the central points of the two eyes of the face determined by the first determining unit, where the second distance is the distance between the central points of the two eyes of the model in the target model picture;
A first superimposing unit, configured to superimpose the scaled glasses picture processed by the scaling unit on the face detected by the detecting unit.
With reference to the second aspect of the embodiments of the present invention, in a first possible implementation of the second aspect, the first determining unit comprises:
A selecting unit, configured to, when the detecting unit detects multiple faces, select a face i from the multiple faces, where the face i is any one of the multiple faces;
A first determining subunit, configured to determine an angle value and a face shape of the face i selected by the selecting unit and the central points of the two eyes of the face i.
With reference to the second aspect of the embodiments of the present invention, in a second possible implementation of the second aspect, the second determining unit is specifically configured to:
Search a picture library, according to the angle value, for the glasses picture corresponding to the glasses style.
With reference to the second aspect of the embodiments of the present invention, in a third possible implementation of the second aspect, the superimposing unit is specifically configured to:
Cover the face with the scaled glasses picture according to the positional relationship of the central points of the two eyes of the face.
With reference to the second aspect of the embodiments of the present invention or any one of the first to third possible implementations, in a fourth possible implementation of the second aspect, the acquiring unit is further configured to:
Acquire a scene mode selected by the user;
The mobile terminal further comprises:
A second superimposing unit, configured to select, from a preset picture library, a glasses picture corresponding to the scene mode acquired by the acquiring unit and superimpose it on the face recognized by the recognition unit.
The embodiments of the present invention have the following beneficial effects:
According to the embodiments of the present invention, an image input by a user is acquired; face detection is performed on the image; when a face is detected, an angle value and a face shape of the face and the central points of the two eyes of the face are determined; a target model picture corresponding to the angle value and the face shape is determined; a glasses style selected by the user is acquired; a glasses picture corresponding to the glasses style is determined according to the angle value; the glasses picture is scaled according to the proportional relationship between a second distance in the target model picture and a first distance between the central points of the two eyes of the face, where the second distance is the distance between the central points of the two eyes of the model in the target model picture; and the scaled glasses picture is superimposed on the face. The user can thus observe the wearing effect of the glasses more directly, which improves the user experience.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed for describing the embodiments are briefly introduced below. Obviously, the accompanying drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a first embodiment of a virtual glasses try-on method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a second embodiment of a virtual glasses try-on method according to an embodiment of the present invention;
Fig. 3a is a schematic structural diagram of a first embodiment of a mobile terminal according to an embodiment of the present invention;
Fig. 3b is another schematic structural diagram of the first embodiment of a mobile terminal according to an embodiment of the present invention;
Fig. 3c is another schematic structural diagram of the first embodiment of a mobile terminal according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a second embodiment of a mobile terminal according to an embodiment of the present invention.
Detailed description of embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
These are described in detail below.
The terms "first", "second", "third", "fourth" and the like in the specification, the claims and the accompanying drawings of the present invention are used to distinguish different objects rather than to describe a particular order. In addition, the terms "comprise" and "have" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, product or device that includes a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units that are not listed, or optionally also includes other steps or units inherent to the process, method, product or device.
The mobile terminal described in the embodiments of the present invention may include a smartphone (such as an Android phone, an iOS phone or a Windows Phone), a tablet computer, a palmtop computer, a notebook computer, a mobile Internet device (MID, Mobile Internet Device), a wearable device, or the like. The above mobile terminals are only examples and are not exhaustive; the embodiments include but are not limited to the listed mobile terminals.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a first embodiment of a virtual glasses try-on method according to an embodiment of the present invention. The virtual glasses try-on method described in this embodiment comprises the following steps:
S101. Acquire an image input by the user.
In this embodiment, the image input by the user may contain at least one face. The image may be a picture taken by the camera of the mobile terminal, a photo stored in the mobile terminal, or a photo from the network. The face in the image may be a frontal face or a profile face at any angle. Preferably, in this embodiment, the image contains one and only one face.
S102. Perform face detection on the image.
In a specific implementation, the mobile terminal may perform face detection on the image input by the user. The purpose of the face detection is to determine whether the picture contains a face, and whether it contains one face or multiple faces.
Optionally, when the mobile terminal detects that the image does not contain a face, it may remind the user to retake the photo or choose another picture.
Optionally, when the mobile terminal detects a face but does not detect human eyes, it may remind the user to retake the photo or choose another picture; further, when the user has one eye closed, the user may be prompted to open the other eye.
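The patent does not name a particular detector. Purely as an illustration of the detection and prompting behaviour described above, a minimal sketch using OpenCV's bundled Haar cascades might look as follows; the library choice, function names and prompt texts are assumptions, not part of the disclosure.

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_face(image_bgr):
    # Returns a face rectangle, or None plus a prompt for the user.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None, "No face detected: please retake the photo or choose another picture."
    x, y, w, h = faces[0]                      # take the first detected face for simplicity
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
    if len(eyes) < 2:
        return None, "Eyes not detected: please open your eyes or choose another picture."
    return (x, y, w, h), None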
S103. When a face is detected, determine an angle value and a face shape of the face and the central points of the two eyes of the face.
In a specific implementation, when the mobile terminal detects only one face, it may determine an angle value and a face shape of that face and the central points of its two eyes, and may also determine the distance between the two central points.
Further, when the mobile terminal detects multiple faces, it may select a face i from the multiple faces, where the face i is any one of the multiple faces, and then determine an angle value and a face shape of the face i and the central points of the two eyes of the face i.
Further, when the mobile terminal detects multiple faces, it may simultaneously determine the angle values and face shapes of at least two of the faces and the central points of the two eyes of each of these faces.
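The disclosure does not specify how the angle value, the face shape or the eye central points are computed. As one hedged illustration only, the two eye central points and the first distance between them can be derived from the detected eye rectangles, and the in-plane tilt of the line joining the eye centers can serve as a simple stand-in for an angle value; the names and the angle definition below are assumptions.

import math

def eye_centers_and_angle(eye_boxes):
    # eye_boxes: two (x, y, w, h) rectangles in face coordinates, e.g. from the sketch above
    (x1, y1, w1, h1), (x2, y2, w2, h2) = sorted(eye_boxes, key=lambda b: b[0])[:2]
    left = (x1 + w1 / 2.0, y1 + h1 / 2.0)      # central point of the left eye
    right = (x2 + w2 / 2.0, y2 + h2 / 2.0)     # central point of the right eye
    angle = math.degrees(math.atan2(right[1] - left[1], right[0] - left[0]))
    first_distance = math.hypot(right[0] - left[0], right[1] - left[1])
    return left, right, angle, first_distance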
S104. Determine a target model picture corresponding to the angle value and the face shape.
In a specific implementation, the mobile terminal may search a picture library according to the angle value and the face shape, and take the picture that matches the angle value and the face shape as the target model picture. The picture library may be located in the mobile terminal or on a server.
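A possible, purely illustrative way to organize this lookup is to key the picture library by face shape and shooting angle and pick the entry whose angle is closest to the detected angle value; the dictionary structure, face-shape labels and file names below are assumptions.

MODEL_LIBRARY = {
    ("round", 0): "models/round_000.png",
    ("round", 15): "models/round_015.png",
    ("oval", 0): "models/oval_000.png",
    # ... one entry per face shape and shooting angle
}

def find_target_model(face_shape, angle_value):
    # Keep only pictures of the detected face shape, then match on the nearest angle.
    candidates = [key for key in MODEL_LIBRARY if key[0] == face_shape]
    if not candidates:
        return None
    best = min(candidates, key=lambda key: abs(key[1] - angle_value))
    return MODEL_LIBRARY[best]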
S105. Acquire a glasses style selected by the user.
In a specific implementation, glasses may be classified as full-rim, half-rim and rimless. A glasses style is then a particular pair of glasses under any one of these categories.
Further, glasses may be classified by price range, including but not limited to: fashion (500-800 yuan), light luxury (800-2,000 yuan), luxury (2,000-5,000 yuan) and premium (above 5,000 yuan). A glasses style is then a particular pair of glasses under any one of these categories.
Further, glasses may also be classified by material, including but not limited to: acetate, metal, titanium and mixed materials. A glasses style is then a particular pair of glasses under any one of these categories.
Further, glasses may be classified by brand, including but not limited to brands such as Armani, Polaroid, Porsche, Victoria, Chanel and others. A glasses style is then a particular pair of glasses under any one of these categories.
Glasses may also be classified by type, including but not limited to: men's sunglasses, women's sunglasses, men's optical frames and women's optical frames. A glasses style is then a particular pair of glasses under any one of these categories.
S106. Determine, according to the angle value, a glasses picture corresponding to the glasses style.
In a specific implementation, the mobile terminal may search a picture library, according to the angle value, for the glasses picture corresponding to the glasses style. It should be noted that the picture library may contain multiple model pictures, and the models may wear glasses of various styles. For example, a given pair of glasses can be photographed at various angles to obtain a series of glasses pictures for that pair, and each glasses picture can be named by its angle value, so that the pictures in the series can be distinguished effectively. Optionally, the glasses picture corresponding to the glasses style is searched for in the picture library; that is, the picture library stores series of glasses pictures of different styles, and each series contains pictures of a given pair of glasses at various angles.
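Assuming the naming-by-angle convention described above, the lookup could be sketched as follows; the directory layout and the file-name pattern are illustrative, not mandated by the patent.

import glob
import os
import re

def find_glasses_picture(style_dir, angle_value):
    # Pick the picture in the style's series whose angle (encoded in the file name)
    # is closest to the detected angle value.
    best_path, best_diff = None, float("inf")
    for path in glob.glob(os.path.join(style_dir, "*.png")):
        match = re.search(r"(-?\d+)", os.path.basename(path))
        if not match:
            continue
        diff = abs(int(match.group(1)) - angle_value)
        if diff < best_diff:
            best_path, best_diff = path, diff
    return best_path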
S107. Scale the glasses picture according to the proportional relationship between a second distance in the target model picture and a first distance between the central points of the two eyes of the face, where the second distance is the distance between the central points of the two eyes of the model in the target model picture.
In a specific implementation, the distance between the central points of the two eyes of the face detected during face detection may be taken as the first distance, where the central point of each eye is the center of that eye; the second distance is the distance between the central points of the two eyes of the model in the target model picture, and the second distance matches the glasses picture. The scaling ratio of the glasses picture can be determined from the proportional relationship between the first distance and the second distance. For example, if the first distance is 1 and the second distance is 2, the second distance is twice the first distance, so the glasses picture is scaled down by half.
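The scaling step reduces to simple arithmetic on the two distances. A short sketch of S107 under these definitions; the resize call is an implementation assumption.

import cv2

def scale_glasses(glasses_bgra, first_distance, second_distance):
    # Ratio of the detected-face eye distance to the model's eye distance,
    # e.g. first_distance 1 and second_distance 2 give 0.5, i.e. shrink by half.
    ratio = first_distance / second_distance
    h, w = glasses_bgra.shape[:2]
    return cv2.resize(glasses_bgra, (max(1, int(w * ratio)), max(1, int(h * ratio))))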
S108. Superimpose the scaled glasses picture on the face.
In a specific implementation, the mobile terminal may superimpose the scaled glasses picture on the face region: the placement position of the glasses is determined from the central points of the two eyes of the face, and the glasses are superimposed on the face region according to this placement position.
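A minimal sketch of the superposition, assuming the scaled glasses picture carries an alpha channel and is centered on the midpoint between the two eye central points; the alpha blending itself is an assumption, since the patent only requires covering the face according to the eye-center positions.

import numpy as np

def overlay_glasses(face_bgr, glasses_bgra, left_eye, right_eye):
    # Placement position: midpoint between the two eye central points.
    cx = int((left_eye[0] + right_eye[0]) / 2)
    cy = int((left_eye[1] + right_eye[1]) / 2)
    gh, gw = glasses_bgra.shape[:2]
    x0, y0 = cx - gw // 2, cy - gh // 2        # assumes the glasses fit inside the image
    roi = face_bgr[y0:y0 + gh, x0:x0 + gw]
    alpha = glasses_bgra[:, :, 3:4].astype(np.float32) / 255.0
    roi[:] = (alpha * glasses_bgra[:, :, :3] + (1 - alpha) * roi).astype(np.uint8)
    return face_bgr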
According to the embodiment of the present invention, an image input by a user is acquired; face detection is performed on the image; when a face is detected, an angle value and a face shape of the face and the central points of the two eyes of the face are determined; a target model picture corresponding to the angle value and the face shape is determined; a glasses style selected by the user is acquired; a glasses picture corresponding to the glasses style is determined according to the angle value; the glasses picture is scaled according to the proportional relationship between the second distance in the target model picture and the first distance between the central points of the two eyes of the face, where the second distance is the distance between the central points of the two eyes of the model in the target model picture; and the scaled glasses picture is superimposed on the face. The user can thus observe the wearing effect of the glasses more directly, which improves the user experience.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of a second embodiment of a virtual glasses try-on method according to an embodiment of the present invention. The virtual glasses try-on method described in this embodiment comprises the following steps:
S201. Acquire an image input by the user.
In this embodiment, the image input by the user may contain at least one face. The image may be a picture taken by the camera of the mobile terminal, a photo stored in the mobile terminal, or a photo from the network. The face in the image may be a frontal face or a profile face at any angle. Preferably, in this embodiment, the image contains one and only one face.
S202. Perform face detection on the image.
In a specific implementation, the mobile terminal may perform face detection on the image input by the user. The purpose of the face detection is to determine whether the picture contains a face, and whether it contains one face or multiple faces.
Optionally, when the mobile terminal detects that the image does not contain a face, it may remind the user to retake the photo or choose another picture.
Optionally, when the mobile terminal detects a face but does not detect human eyes, it may remind the user to retake the photo or choose another picture; further, when the user has one eye closed, the user may be prompted to open the other eye.
S203. When a face is detected, determine an angle value and a face shape of the face and the central points of the two eyes of the face.
In a specific implementation, when the mobile terminal detects only one face, it may determine an angle value and a face shape of that face and the central points of its two eyes, and may also determine the distance between the two central points.
Further, when the mobile terminal detects multiple faces, it may select a face i from the multiple faces, where the face i is any one of the multiple faces, and then determine an angle value and a face shape of the face i and the central points of the two eyes of the face i. This arbitrary face may include but is not limited to: the face with the largest area among the multiple faces, the face with the smallest area, the face shown most completely, the face detected first during face detection, and so on.
Further, when the mobile terminal detects multiple faces, it may simultaneously determine the angle values and face shapes of at least two of the faces and the central points of the two eyes of each of these faces.
S204. Determine a target model picture corresponding to the angle value and the face shape.
In a specific implementation, the mobile terminal may search a picture library according to the angle value and the face shape, and take the picture that matches the angle value and the face shape as the target model picture. The picture library may be located in the mobile terminal or on a server.
S205. Acquire a glasses style selected by the user.
In a specific implementation, glasses may be classified as full-rim, half-rim and rimless. A glasses style is then a particular pair of glasses under any one of these categories.
Further, glasses may be classified by price range, including but not limited to: fashion (500-800 yuan), light luxury (800-2,000 yuan), luxury (2,000-5,000 yuan) and premium (above 5,000 yuan). A glasses style is then a particular pair of glasses under any one of these categories.
Further, glasses may also be classified by material, including but not limited to: acetate, metal, titanium and mixed materials. A glasses style is then a particular pair of glasses under any one of these categories.
Further, glasses may be classified by brand, including but not limited to brands such as Armani, Polaroid, Porsche, Victoria, Chanel and others. A glasses style is then a particular pair of glasses under any one of these categories.
Glasses may also be classified by type, including but not limited to: men's sunglasses, women's sunglasses, men's optical frames and women's optical frames. A glasses style is then a particular pair of glasses under any one of these categories.
S206. Determine, according to the angle value, a glasses picture corresponding to the glasses style.
In a specific implementation, the mobile terminal may search a picture library, according to the angle value, for the glasses picture corresponding to the glasses style. It should be noted that the picture library may contain multiple model pictures, and the models may wear glasses of various styles. For example, a given pair of glasses can be photographed at various angles to obtain a series of glasses pictures for that pair, and each glasses picture can be named by its angle value, so that the pictures in the series can be distinguished effectively. Optionally, the glasses picture corresponding to the glasses style is searched for in the picture library; that is, the picture library stores series of glasses pictures of different styles, and each series contains pictures of a given pair of glasses at various angles.
S207. Scale the glasses picture according to the proportional relationship between a second distance in the target model picture and a first distance between the central points of the two eyes of the face, where the second distance is the distance between the central points of the two eyes of the model in the target model picture.
In a specific implementation, the distance between the central points of the two eyes of the face detected during face detection may be taken as the first distance, where the central point of each eye is the center of that eye; the second distance is the distance between the central points of the two eyes of the model in the target model picture, and the second distance matches the glasses picture. The scaling ratio of the glasses picture can be determined from the proportional relationship between the first distance and the second distance. For example, if the first distance is 1 and the second distance is 2, the second distance is twice the first distance, so the glasses picture is scaled down by half.
S208. Superimpose the scaled glasses picture on the face.
In a specific implementation, the mobile terminal may superimpose the scaled glasses picture on the face region: the placement position of the glasses is determined from the central points of the two eyes of the face, and the glasses are superimposed on the face region according to this placement position.
S209. Acquire a scene mode selected by the user.
Optionally, the mobile terminal may acquire a scene mode selected by the user. The scene mode may include but is not limited to: coffee shop, park, square, club, seaside, sunset and sunset glow.
S210. Select, from a preset picture library, a glasses picture corresponding to the scene mode and superimpose it on the face.
In a specific implementation, the mobile terminal may then select, from a preset picture library, the glasses picture corresponding to the scene mode and superimpose it on the face image. The preset picture library may include one or more groups of picture libraries; for example, a picture library may be set up for each scene.
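One hedged way to read S209 and S210 is a preset library that maps each scene mode to a scene-specific glasses picture, which is then overlaid with the same routine as in S208. The scene names, file paths and helper functions (scale_glasses and overlay_glasses from the sketches above) are assumptions, not part of the disclosure.

import cv2

SCENE_LIBRARY = {
    "seaside": "scenes/glasses_seaside.png",
    "sunset": "scenes/glasses_sunset.png",
    "coffee_shop": "scenes/glasses_coffee_shop.png",
}

def apply_scene_mode(face_bgr, scene_mode, left_eye, right_eye, first_distance, second_distance):
    path = SCENE_LIBRARY.get(scene_mode)
    if path is None:
        return face_bgr                                # unknown scene: keep the current try-on result
    glasses = cv2.imread(path, cv2.IMREAD_UNCHANGED)   # keep the alpha channel
    glasses = scale_glasses(glasses, first_distance, second_distance)
    return overlay_glasses(face_bgr, glasses, left_eye, right_eye)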
According to the embodiment of the present invention, an image input by a user is acquired; face detection is performed on the image; when a face is detected, an angle value and a face shape of the face and the central points of the two eyes of the face are determined; a target model picture corresponding to the angle value and the face shape is determined; a glasses style selected by the user is acquired; a glasses picture corresponding to the glasses style is determined according to the angle value; the glasses picture is scaled according to the proportional relationship between the second distance in the target model picture and the first distance between the central points of the two eyes of the face, where the second distance is the distance between the central points of the two eyes of the model in the target model picture; the scaled glasses picture is superimposed on the face; a scene mode selected by the user is acquired; and a glasses picture corresponding to the scene mode is selected from a preset picture library and superimposed on the face image. The user can thus observe the wearing effect of the glasses more directly, which improves the user experience.
Refer to Fig. 3 a-Fig. 3 c, wherein, the first example structure schematic diagram of a kind of mobile terminal that Fig. 3 a provides for the embodiment of the present invention.Mobile terminal in Fig. 3 a described in the present embodiment, comprising: acquiring unit 301, detecting unit 302, first determining unit 303, second determining unit 304, the 3rd determining unit 305, unit for scaling 306 and superpositing unit 307, specific as follows:
Acquiring unit 301, for obtaining the image of user's input
Detecting unit 302, carries out Face datection for the image got described acquiring unit 301;
First determining unit 303, for when described detecting unit 302 detects face, determines the central point of the angle value of described face, shape of face and described face two;
Second determining unit 304, for determining the target model picture corresponding with the angle value that described first determining unit 303 is determined and shape of face.
Acquiring unit 301 also specifically for:
Obtain the eyewear style that user selects;
3rd determining unit 305, the angle value for determining according to described first determining unit 303 determines the glasses picture that eyewear style that described acquiring unit 301 gets is corresponding;
Unit for scaling 306, for in the face that the second distance in the target model picture determined according to described second determining unit 304 and described first determining unit are determined two central point the first distance between proportionate relationship convergent-divergent process is carried out to the glasses picture that described 3rd determining unit 305 is determined, wherein, described second distance is the spacing of the central point of two of model in described target model picture;
First superpositing unit 307, superposes for the face glasses picture after the process of described unit for scaling 306 convergent-divergent and described detecting unit 302 detected.
As a kind of possible embodiment, the second determining unit 304 specifically for:
From picture library, the glasses picture corresponding with described eyewear style is searched for according to described angle value.
As a kind of possible embodiment, the first superpositing unit 307 specifically for:
Glasses picture after described convergent-divergent process covers above described face by the position relationship according to the central point of two of described face.
As a kind of possible embodiment, as shown in Figure 3 b, in Fig. 3 a, the first determining unit 303 comprises: selection unit 3031 and first determines subelement 3032, specific as follows:
Selection unit 3031, for when described detecting unit 302 detects multiple face, selects the face i in described multiple face, and wherein, described face i is the arbitrary face in described multiple face;
First determines subelement 3032, for determining the central point of in the angle value of the face i that described selection unit 3031 is selected, shape of face and described face i two.
As a kind of possible embodiment, as shown in Figure 3 c, the acquiring unit of Fig. 3 a or Fig. 3 b also specifically for:
Obtain the scene mode that user selects;
Mobile terminal also comprises:
Second superpositing unit, for being superimposed upon the lens area of the glasses after described first superpositing unit 307 superposes by the reflective scene image corresponding with the scene mode that described acquiring unit 301 gets.
The image of user's input can be obtained by the mobile terminal described by the embodiment of the present invention; Face datection is carried out to described image; When face being detected, determine the central point of the angle value of described face, shape of face and described face two; Determine the target model picture corresponding with described angle value and described shape of face; Obtain the eyewear style that user selects; The glasses picture that described eyewear style is corresponding is determined according to described angle value; According in the second distance in described target model picture and described face two central point first distance between proportionate relationship convergent-divergent process is carried out to described glasses picture, wherein, described second distance is the spacing of the central point of two of model in described target model picture; Glasses picture after described convergent-divergent process is superposed with described face.Thus user more directly can observe the wearing effect of glasses, improves Consumer's Experience.
Referring to Fig. 4, Fig. 4 is a schematic structural diagram of a second embodiment of a mobile terminal according to an embodiment of the present invention. The mobile terminal described in this embodiment comprises: at least one input device 1000, at least one output device 2000, at least one processor 3000 (for example a CPU), and a memory 4000, where the input device 1000, the output device 2000, the processor 3000 and the memory 4000 are connected through a bus 5000.
The input device 1000 may specifically be a camera, a physical button, a touch screen, a touchpad or a stylus.
The output device 2000 may specifically be a display screen.
The memory 4000 may be a high-speed RAM memory, or may be a non-volatile memory, such as a magnetic disk memory. The memory 4000 is configured to store a set of program code, and the input device 1000, the output device 2000 and the processor 3000 are configured to call the program code stored in the memory 4000 to perform the following operations:
The processor 3000 is configured to:
Acquire an image input by a user;
Perform face detection on the image;
When a face is detected, determine an angle value and a face shape of the face and the central points of the two eyes of the face;
Determine a target model picture corresponding to the angle value and the face shape;
Acquire a glasses style selected by the user;
Determine, according to the angle value, a glasses picture corresponding to the glasses style;
Scale the glasses picture according to the proportional relationship between a second distance in the target model picture and a first distance between the central points of the two eyes of the face, where the second distance is the distance between the central points of the two eyes of the model in the target model picture;
Superimpose the scaled glasses picture on the face.
In a possible implementation, the determining, by the processor 3000 when a face is detected, of an angle value and a face shape of the face, the central points of the two eyes of the face and a first distance between the two central points comprises:
When multiple faces are detected, selecting a face i from the multiple faces, where the face i is any one of the multiple faces;
Determining an angle value and a face shape of the face i and the central points of the two eyes of the face i.
In a possible implementation, the determining, by the processor 3000 according to the angle value, of a glasses picture corresponding to the glasses style comprises:
Searching a picture library, according to the angle value, for the glasses picture corresponding to the glasses style.
In a possible implementation, the superimposing, by the processor 3000, of the scaled glasses picture on the face comprises:
Covering the face with the scaled glasses picture according to the positional relationship of the central points of the two eyes of the face.
In a possible implementation, after the processor 3000 superimposes the scaled glasses picture on the face, the method further comprises:
Acquiring a scene mode selected by the user;
Selecting, from a preset picture library, a glasses picture corresponding to the scene mode and superimposing it on the face.
In a specific implementation, the input device 1000, the output device 2000 and the processor 3000 described in this embodiment of the present invention may carry out the implementations described in the first and second embodiments of the virtual glasses try-on method provided by the embodiments of the present invention, and may also carry out the implementation of the terminal described in the first embodiment of the mobile terminal provided by the embodiments of the present invention; details are not repeated here.
The mobile terminal described in the embodiment of the present invention can acquire an image input by a user; perform face detection on the image; when a face is detected, determine an angle value and a face shape of the face and the central points of the two eyes of the face; determine a target model picture corresponding to the angle value and the face shape; acquire a glasses style selected by the user; determine, according to the angle value, a glasses picture corresponding to the glasses style; scale the glasses picture according to the proportional relationship between a second distance in the target model picture and a first distance between the central points of the two eyes of the face, where the second distance is the distance between the central points of the two eyes of the model in the target model picture; and superimpose the scaled glasses picture on the face. The user can thus observe the wearing effect of the glasses more directly, which improves the user experience.
The embodiment of the present invention also provides a computer storage medium. The computer storage medium may store a program, and when the program is executed, part or all of the steps of any of the signal processing methods recorded in the foregoing method embodiments are performed.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For a part that is not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
It should be noted that, for brevity, the foregoing method embodiments are described as a series of action combinations. However, a person skilled in the art should understand that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in other orders or simultaneously. In addition, a person skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of the units is only a division by logical function, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed on multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the part of the technical solutions of the present invention that in essence contributes to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like, and may specifically be a processor in a computer device) to perform all or part of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium may include: a USB flash drive, a portable hard disk, a magnetic disk, an optical disc, a read-only memory (ROM) or a random access memory (RAM), and other media capable of storing program code.
The foregoing embodiments are only intended to describe the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some of the technical features therein, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A virtual glasses try-on method, characterized by comprising:
acquiring an image input by a user;
performing face detection on the image;
when a face is detected, determining an angle value and a face shape of the face and the central points of the two eyes of the face;
determining a target model picture corresponding to the angle value and the face shape;
acquiring a glasses style selected by the user;
determining, according to the angle value, a glasses picture corresponding to the glasses style;
scaling the glasses picture according to the proportional relationship between a second distance in the target model picture and a first distance between the central points of the two eyes of the face, wherein the second distance is the distance between the central points of the two eyes of the model in the target model picture;
superimposing the scaled glasses picture on the face.
2. The method according to claim 1, characterized in that the determining, when a face is detected, an angle value and a face shape of the face, the central points of the two eyes of the face and a first distance between the two central points comprises:
when multiple faces are detected, selecting a face i from the multiple faces, wherein the face i is any one of the multiple faces;
determining an angle value and a face shape of the face i and the central points of the two eyes of the face i.
3. The method according to claim 1, characterized in that the determining, according to the angle value, a glasses picture corresponding to the glasses style comprises:
searching a picture library, according to the angle value, for the glasses picture corresponding to the glasses style.
4. The method according to claim 1, characterized in that the superimposing the scaled glasses picture on the face comprises:
covering the face with the scaled glasses picture according to the positional relationship of the central points of the two eyes of the face.
5. The method according to any one of claims 1 to 4, characterized in that after the superimposing the scaled glasses picture on the face, the method further comprises:
acquiring a scene mode selected by the user;
selecting, from a preset picture library, a glasses picture corresponding to the scene mode and superimposing it on the face.
6. A mobile terminal, characterized by comprising:
an acquiring unit, configured to acquire an image input by a user;
a detecting unit, configured to perform face detection on the image acquired by the acquiring unit;
a first determining unit, configured to, when the detecting unit detects a face, determine an angle value and a face shape of the face and the central points of the two eyes of the face;
a second determining unit, configured to determine a target model picture corresponding to the angle value and the face shape determined by the first determining unit;
the acquiring unit being further configured to:
acquire a glasses style selected by the user;
a third determining unit, configured to determine, according to the angle value determined by the first determining unit, a glasses picture corresponding to the glasses style acquired by the acquiring unit;
a scaling unit, configured to scale the glasses picture determined by the third determining unit according to the proportional relationship between a second distance in the target model picture determined by the second determining unit and a first distance between the central points of the two eyes of the face determined by the first determining unit, wherein the second distance is the distance between the central points of the two eyes of the model in the target model picture;
a first superimposing unit, configured to superimpose the scaled glasses picture processed by the scaling unit on the face detected by the detecting unit.
7. The mobile terminal according to claim 6, characterized in that the first determining unit comprises:
a selecting unit, configured to, when the detecting unit detects multiple faces, select a face i from the multiple faces, wherein the face i is any one of the multiple faces;
a first determining subunit, configured to determine an angle value and a face shape of the face i selected by the selecting unit and the central points of the two eyes of the face i.
8. The mobile terminal according to claim 6, characterized in that the second determining unit is specifically configured to:
search a picture library, according to the angle value, for the glasses picture corresponding to the glasses style.
9. The mobile terminal according to claim 6, characterized in that the superimposing unit is specifically configured to:
cover the face with the scaled glasses picture according to the positional relationship of the central points of the two eyes of the face.
10. The mobile terminal according to any one of claims 6 to 9, characterized in that the acquiring unit is further configured to:
acquire a scene mode selected by the user;
and the mobile terminal further comprises:
a second superimposing unit, configured to select, from a preset picture library, a glasses picture corresponding to the scene mode acquired by the acquiring unit and superimpose it on the face.
CN201510707664.6A 2015-10-27 2015-10-27 A kind of virtual glasses try-on method and mobile terminal Active CN105354792B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510707664.6A CN105354792B (en) 2015-10-27 2015-10-27 A kind of virtual glasses try-on method and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510707664.6A CN105354792B (en) 2015-10-27 2015-10-27 A kind of virtual glasses try-on method and mobile terminal

Publications (2)

Publication Number Publication Date
CN105354792A true CN105354792A (en) 2016-02-24
CN105354792B CN105354792B (en) 2019-05-28

Family

ID=55330759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510707664.6A Active CN105354792B (en) 2015-10-27 2015-10-27 A kind of virtual glasses try-on method and mobile terminal

Country Status (1)

Country Link
CN (1) CN105354792B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203364A (en) * 2016-07-14 2016-12-07 广州帕克西软件开发有限公司 System and method is tried in a kind of 3D glasses interaction on
CN106843677A (en) * 2016-12-29 2017-06-13 华勤通讯技术有限公司 A kind of method for displaying image of Virtual Reality glasses, equipment and terminal
CN107563874A (en) * 2017-09-14 2018-01-09 广州便雅悯视光网络科技有限公司 The virtual try-in method and device of a kind of sunglasses
CN107945102A (en) * 2017-10-23 2018-04-20 深圳市朗形网络科技有限公司 A kind of picture synthetic method and device
CN108319943A (en) * 2018-04-25 2018-07-24 北京优创新港科技股份有限公司 A method of human face recognition model performance under the conditions of raising is worn glasses
CN108376252A (en) * 2018-02-27 2018-08-07 广东欧珀移动通信有限公司 Control method, control device, terminal, computer equipment and storage medium
CN109660717A (en) * 2018-11-26 2019-04-19 深圳艺达文化传媒有限公司 From the stacking method and Related product of the earphone image that shoots the video
CN109978655A (en) * 2019-01-14 2019-07-05 明灏科技(北京)有限公司 A kind of virtual frame matching method and system
CN114489326A (en) * 2021-12-30 2022-05-13 南京七奇智能科技有限公司 Crowd-oriented gesture control device and method driven by virtual human interaction attention
US11335028B2 (en) 2018-02-27 2022-05-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Control method based on facial image, related control device, terminal and computer device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1971607A (en) * 2006-12-04 2007-05-30 谢亦玲 Picture database processing method for computerized glasses fitting system and glasses fitting device
CN103413118A (en) * 2013-07-18 2013-11-27 毕胜 On-line glasses try-on method
CN103456008A (en) * 2013-08-26 2013-12-18 刘晓英 Method for matching face and glasses
US20140240354A1 (en) * 2013-02-28 2014-08-28 Samsung Electronics Co., Ltd. Augmented reality apparatus and method
CN104299143A (en) * 2014-10-20 2015-01-21 上海电机学院 Virtual try-in method and device
CN104992464A (en) * 2015-06-19 2015-10-21 上海卓易科技股份有限公司 Virtual garment try-on system and garment try-on method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1971607A (en) * 2006-12-04 2007-05-30 谢亦玲 Picture database processing method for computerized glasses fitting system and glasses fitting device
US20140240354A1 (en) * 2013-02-28 2014-08-28 Samsung Electronics Co., Ltd. Augmented reality apparatus and method
CN103413118A (en) * 2013-07-18 2013-11-27 毕胜 On-line glasses try-on method
CN103456008A (en) * 2013-08-26 2013-12-18 刘晓英 Method for matching face and glasses
CN104299143A (en) * 2014-10-20 2015-01-21 上海电机学院 Virtual try-in method and device
CN104992464A (en) * 2015-06-19 2015-10-21 上海卓易科技股份有限公司 Virtual garment try-on system and garment try-on method

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203364B (en) * 2016-07-14 2019-05-24 广州帕克西软件开发有限公司 System and method is tried in a kind of interaction of 3D glasses on
CN106203364A (en) * 2016-07-14 2016-12-07 广州帕克西软件开发有限公司 System and method is tried in a kind of 3D glasses interaction on
CN106843677A (en) * 2016-12-29 2017-06-13 华勤通讯技术有限公司 A kind of method for displaying image of Virtual Reality glasses, equipment and terminal
CN107563874A (en) * 2017-09-14 2018-01-09 广州便雅悯视光网络科技有限公司 The virtual try-in method and device of a kind of sunglasses
CN107945102A (en) * 2017-10-23 2018-04-20 深圳市朗形网络科技有限公司 A kind of picture synthetic method and device
US11335028B2 (en) 2018-02-27 2022-05-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Control method based on facial image, related control device, terminal and computer device
CN108376252A (en) * 2018-02-27 2018-08-07 广东欧珀移动通信有限公司 Control method, control device, terminal, computer equipment and storage medium
CN108319943B (en) * 2018-04-25 2021-10-12 北京优创新港科技股份有限公司 Method for improving face recognition model performance under wearing condition
CN108319943A (en) * 2018-04-25 2018-07-24 北京优创新港科技股份有限公司 A method of human face recognition model performance under the conditions of raising is worn glasses
CN109660717A (en) * 2018-11-26 2019-04-19 深圳艺达文化传媒有限公司 From the stacking method and Related product of the earphone image that shoots the video
CN109978655A (en) * 2019-01-14 2019-07-05 明灏科技(北京)有限公司 A kind of virtual frame matching method and system
CN114489326A (en) * 2021-12-30 2022-05-13 南京七奇智能科技有限公司 Crowd-oriented gesture control device and method driven by virtual human interaction attention
CN114489326B (en) * 2021-12-30 2023-12-15 南京七奇智能科技有限公司 Crowd-oriented virtual human interaction attention driven gesture control device and method

Also Published As

Publication number Publication date
CN105354792B (en) 2019-05-28

Similar Documents

Publication Publication Date Title
CN105354792A (en) Method for trying virtual glasses and mobile terminal
KR102596920B1 (en) Camera platform and object inventory control
KR102173123B1 (en) Method and apparatus for recognizing object of image in electronic device
RU2654145C2 (en) Information search method and device and computer readable recording medium thereof
US9342930B1 (en) Information aggregation for recognized locations
EP2894634B1 (en) Electronic device and image compostition method thereof
CN110716645A (en) Augmented reality data presentation method and device, electronic equipment and storage medium
CN110113534B (en) Image processing method, image processing device and mobile terminal
CN107766349B (en) Method, device, equipment and client for generating text
CN104345886A (en) Intelligent glasses system for fashion experience and personalized fashion experience method
US20230274340A1 (en) Automating the creation of listings using augmented reality computer technology
CN112991555B (en) Data display method, device, equipment and storage medium
CN112527165A (en) Method, device and equipment for adjusting interface display state and storage medium
CN109544262A (en) Item recommendation method, device, electronic equipment, system and readable storage medium storing program for executing
CN106096043A (en) A kind of photographic method and mobile terminal
CN111767817A (en) Clothing matching method and device, electronic equipment and storage medium
US20160110909A1 (en) Method and apparatus for creating texture map and method of creating database
WO2019085519A1 (en) Method and device for facial tracking
KR102303206B1 (en) Method and apparatus for recognizing object of image in electronic device
US10304120B2 (en) Merchandise sales service device based on dynamic scene change, merchandise sales system based on dynamic scene change, method for selling merchandise based on dynamic scene change and non-transitory computer readable storage medium having computer program recorded thereon
EP3088991B1 (en) Wearable device and method for enabling user interaction
CN113034213B (en) Cartoon content display method, device, equipment and readable storage medium
KR102605451B1 (en) Electronic device and method for providing multiple services respectively corresponding to multiple external objects included in image
CN114067084A (en) Image display method and device
CN105303619B (en) A kind of sequence image generation method and mobile terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant