CN105809507A - Virtualized wearing method and virtualized wearing apparatus - Google Patents

Virtualized wearing method and virtualized wearing apparatus Download PDF

Info

Publication number
CN105809507A
CN105809507A CN201610113132.4A
Authority
CN
China
Prior art keywords
face
glasses
picture
user
nose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610113132.4A
Other languages
Chinese (zh)
Inventor
郭祖坤
干宏江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kuparts Technology Co Ltd
Original Assignee
Beijing Kuparts Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kuparts Technology Co Ltd filed Critical Beijing Kuparts Technology Co Ltd
Priority to CN201610113132.4A priority Critical patent/CN105809507A/en
Publication of CN105809507A publication Critical patent/CN105809507A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0641Shopping interfaces
    • G06Q30/0643Graphical representation of items or shoppers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/14Transformations for image registration, e.g. adjusting or mapping for alignment of images

Landscapes

  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Theoretical Computer Science (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a virtual try-on method and a virtual try-on apparatus. The method comprises the following steps: acquiring at least one group of user face images; performing recognition and detection on the face images to obtain the coordinates of the eyes and of the nose bridge; obtaining a glasses image by proportionally scaling it according to the interpupillary distance between the two eyes given by the eye coordinates; calculating the fitting position between the face image and the glasses image according to the nose-bridge coordinate, and obtaining a face image wearing the glasses from the fitting position and the glasses image; and outputting the resulting face image to complete the virtual try-on. Because the user's face images are detected, a more accurate and better-looking try-on effect is achieved. Because the glasses image is obtained by proportionally scaling it according to the interpupillary distance of the user's eyes, the apparatus can accommodate different face shapes and different types of glasses. In addition, the method can be carried out simply on a computer or a smartphone while giving real-time feedback on the try-on.

Description

Virtual try-on method and virtual try-on apparatus
Technical field
The present invention relates to the field of computer technology, and in particular to a virtual try-on method and a virtual try-on apparatus.
Background art
Nowadays, more and more people are "lazy" because they are pressed by the fast pace of life: after work they hope to trade money for more time to read, exercise, socialize and rest. As this group keeps growing, "lazy people" have become a new target consumer group for more and more merchants, who try every means to satisfy their demand for convenience, hoping to profit from them, and the "lazy economy" has been born accordingly.
In the prior art, glasses are mostly tried on physically: the user tries them on in an optician's shop or an ophthalmic hospital. What a real O2O (online-to-offline) service should provide is a personalized, customized service rather than a simple connection between online and offline, as only this embodies the value of the model. Door-to-door massage, beauty care, haircuts, medical care and other deeply experiential services are examples of the real O2O model. At present, the eyewear industry lacks a personalized try-on and purchase mode that spares the user a trip, is simple to operate, and achieves accurate fitting and glasses selection.
Summary of the invention
The technical problem to be solved by the present invention is to provide a method that is simple to operate and presents the try-on effect online in real time.
To solve the above technical problem, the invention provides a virtual try-on method, comprising:
acquiring at least one group of user face images;
performing recognition and detection on the user face images to obtain the eye coordinates and the nose-bridge coordinate of the face;
obtaining a glasses image by proportionally scaling it according to the interpupillary distance between the two eyes given by the eye coordinates;
calculating the fitting position between the face image and the glasses image according to the nose-bridge coordinate, and obtaining a face image wearing the glasses from the fitting position and the glasses image;
outputting the resulting face image to complete the virtual try-on.
Further, when the user face images are acquired, the user first rotates 90 degrees to the left from the original position, then 90 degrees to the right, and then returns to the original position; or the user first rotates 90 degrees to the right from the original position, then 90 degrees to the left, and then returns to the original position. The original position is the position in which the user's line of sight is horizontal and facing forward.
Further, the AAM algorithm is adopted to locate the feature points of the user's eyes and nose bridge, obtaining the eye coordinates and the nose-bridge coordinate.
Further, the scaling is performed by calculating the glasses image according to the distance from the left-eye center to the right-eye center of the user's face and the distance from the left-lens center to the right-lens center of the glasses.
Further, the fitting position is the position at which the center point of the nose pads in the glasses image coincides with the nose-bridge point of the face.
Based on the above method, the present invention also provides a virtual try-on apparatus, comprising:
a client, the client comprising an acquisition unit, a recognition unit, a forming unit and a fitting unit,
the acquisition unit being configured to acquire at least one group of user face images;
the recognition unit being configured to perform recognition and detection on the user face images to obtain the eye coordinates and the nose-bridge coordinate of the face;
the forming unit being configured to obtain a glasses image by proportionally scaling it according to the interpupillary distance between the two eyes given by the eye coordinates;
the fitting unit being configured to calculate the fitting position between the face image and the glasses image according to the nose-bridge coordinate, and to obtain a face image wearing the glasses from the fitting position and the glasses image;
an application server configured to receive request information input by the client and to respond to it;
an output unit configured to output the resulting face images as a sequence of consecutive images obtained by rotation in a fixed direction.
Further, the client is a desktop computer, a notebook computer, a smartphone or a tablet.
Further, the acquisition unit is a PC camera or a mobile-phone camera.
Further, the application server initiates requests to a local database or to the cloud in order to perform data retrieval and synchronization.
Further, the output unit outputs 2D picture formats, 3D picture formats, and film formats.
Beneficial effects of the present invention:
1) Because the method of the present invention performs recognition and detection on the user face images to obtain the eye coordinates and the nose-bridge coordinate, the try-on result is more accurate and better-looking.
2) Because the method obtains the glasses image by proportionally scaling it according to the interpupillary distance between the two eyes, it can accommodate different face shapes and different types of glasses.
3) The method realizes online try-on via a computer or mobile phone, with simple operation and real-time feedback of the try-on effect.
4) The system can output images in 2D picture format, 3D picture format, film format, etc., letting the user see the fit and appearance of the try-on in real time.
5) The system includes clients such as desktop computers, notebook computers, smartphones and tablets, providing the user with a convenient point of entry.
Brief description of the drawings
Fig. 1 is a flow diagram of the virtual try-on method in one embodiment of the invention.
Fig. 2 is a flow diagram of the specific procedure for acquiring the user face images in Fig. 1.
Fig. 3 is a schematic diagram of obtaining the glasses image in Fig. 1.
Fig. 4 is a schematic diagram of the fitting-position determination method in Fig. 1.
Fig. 5 is a structural diagram of the virtual try-on apparatus in one embodiment of the invention.
Detailed description of the invention
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in more detail below in conjunction with specific embodiments and with reference to the accompanying drawings.
Fig. 1 is a flow diagram of the virtual try-on method in one embodiment of the invention.
In step S101, at least one group of user face images is acquired. In this embodiment, one group of user face images includes, but is not limited to, a complete set of face images of the user, including the front, the left side, the right side, etc. The acquisition means includes, but is not limited to, capturing pictures or recording video with a camera. Preferably in this embodiment, multiple groups of user face images can be acquired and used to correct the face image.
In step S102, recognition and detection are performed on the user face images to obtain the eye coordinates and the nose-bridge coordinate. Those skilled in the art will understand that, when performing recognition and detection on the user's face image, a face recognition method based on geometric features can be adopted, including but not limited to: representing the face by a geometric feature vector and recognizing it with a classifier designed using the hierarchical-clustering idea of pattern recognition. The feature vector is required to have a certain uniqueness, so that it can reflect the differences between the facial features of different people. Because this method is very sensitive to changes in face orientation, a certain elasticity is required to eliminate the influence of time span and illumination. The recognition workflow is roughly as follows: first detect the facial feature points, then obtain a feature vector describing each face by measuring the relative distances between these key points, for example the positions and widths of the eyes, nose and mouth, the thickness and curvature of the eyebrows, and the relations between these features. From the relative distances between the key points of the eyes, nose, etc. in the feature vector of each face, the eye coordinates and the nose-bridge coordinate are then obtained.
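As an illustration of the geometric-feature approach just described, the sketch below (a hypothetical helper, not part of the patent) builds a feature vector from the relative distances between a few facial key points, normalized by the interocular distance so that the vector is insensitive to image scale and position:

```python
import math

def geometric_feature_vector(keypoints):
    """keypoints: dict of name -> (x, y) for eyes, nose bridge, mouth, etc.
    Returns all pairwise distances normalized by the interocular distance."""
    names = sorted(keypoints)
    ipd = math.dist(keypoints["left_eye"], keypoints["right_eye"])
    vec = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            vec.append(math.dist(keypoints[names[i]], keypoints[names[j]]) / ipd)
    return vec

pts = {"left_eye": (30.0, 40.0), "right_eye": (90.0, 40.0),
       "nose_bridge": (60.0, 45.0), "mouth": (60.0, 90.0)}
fv = geometric_feature_vector(pts)
# scaling or translating all coordinates uniformly leaves the vector unchanged
fv_scaled = geometric_feature_vector({k: (2 * x + 5, 2 * y - 3)
                                      for k, (x, y) in pts.items()})
```

The eye-to-eye entry is 1.0 by construction, which is why such normalized vectors tolerate differing camera distances.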
Preferably in this embodiment, the AAM algorithm or the ASM algorithm can be adopted to locate the feature points of the user's eyes and nose bridge, obtaining the eye coordinates and the nose-bridge coordinate.
Based on the ASM algorithm, a shape model is first established:
a) search n training samples, and obtain sample pictures of the face region based on the at least one group of acquired user face images;
b) record the k key feature points in each training sample;
c) concatenate the coordinates of the key feature points in the training set into feature vectors;
d) normalize and align the shapes (the alignment adopts the Procrustes method);
e) perform PCA on the aligned shape features;
f) build a local feature for each feature point according to the positional relationship of the eyes and the nose bridge.
In each iteration of the search, each feature point finds a new position. The local features generally use gradient features to guard against illumination variation; for example, they can be extracted along the normal direction of the edge, or from a rectangular region near the feature point.
Next, the search is carried out:
a) compute the positions of the eyes (or of the eyes and nose), perform a simple scale and rotation change, and align the face;
b) match each local feature point (for example, the Mahalanobis distance is frequently used) and compute the new positions;
c) obtain the parameters of the affine transformation, and iterate until convergence.
Preferably, a multi-scale method can be adopted for acceleration.
The above search process finally converges on the high-resolution original image.
Because the ASM algorithm only uses the shape constraint (plus the features near the feature points), it is preferable in this embodiment to also add the texture features of the whole face region based on the AAM algorithm.
Therefore, during the search, because the texture features have a high dimensionality, a linear model for texture prediction is learned in advance and the parameters are adjusted according to this model, improving search efficiency.
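Once an ASM/AAM-style model has located the landmarks, deriving the eye and nose-bridge coordinates the method needs is simple averaging. The sketch below assumes the widely used 68-point iBUG landmark layout (indices 36-41 around the left eye, 42-47 around the right eye, 27-30 along the nose bridge); the patent itself does not fix a particular landmark scheme:

```python
def eye_and_bridge_coords(landmarks):
    """landmarks: list of 68 (x, y) points in the iBUG layout (an assumption;
    the patent does not specify one). Returns the left-eye center, the
    right-eye center, and a nose-bridge point."""
    def mean(pts):
        return (sum(p[0] for p in pts) / len(pts),
                sum(p[1] for p in pts) / len(pts))
    left_eye = mean(landmarks[36:42])   # six points around the left eye
    right_eye = mean(landmarks[42:48])  # six points around the right eye
    bridge = mean(landmarks[27:31])     # four points along the nose bridge
    return left_eye, right_eye, bridge

# synthetic landmarks: placeholder zeros except the regions we read
pts = [(0.0, 0.0)] * 68
for i in range(36, 42):
    pts[i] = (40.0, 50.0)
for i in range(42, 48):
    pts[i] = (80.0, 50.0)
for i in range(27, 31):
    pts[i] = (60.0, 55.0)
le, re, br = eye_and_bridge_coords(pts)
```

In practice the landmark list would come from a fitted shape model rather than the synthetic points used here.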
In step S103, the glasses image is obtained by proportionally scaling it according to the interpupillary distance between the two eyes given by the eye coordinates. Faces differ in width and length, and both affect the scaling of the glasses image. From the interpupillary distance of the two eyes, the size of the glasses can be determined. The interpupillary distance itself is determined from the eye coordinates obtained by the face recognition in the previous step, which give the positions of the pupils. Preferably in this embodiment, the interpupillary distance is measured as it would be when fitting glasses, and is divided into: distance interpupillary distance, near interpupillary distance, and common interpupillary distance; during measurement, these three values can be measured at the corresponding viewing distances.
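The proportional scaling of step S103 reduces to a single ratio: the face's measured interpupillary distance divided by the lens-center distance of the glasses image. A minimal sketch under that reading (function and parameter names are illustrative, not from the patent):

```python
import math

def glasses_scale(face_left_eye, face_right_eye, lens_left, lens_right):
    """Scale factor that makes the glasses image's lens-center distance
    match the face's interpupillary distance."""
    face_ipd = math.dist(face_left_eye, face_right_eye)
    lens_dist = math.dist(lens_left, lens_right)
    return face_ipd / lens_dist

def scaled_size(width, height, scale):
    # round to whole pixels for the resized glasses image
    return round(width * scale), round(height * scale)

# face IPD is 40 px, glasses lens-center distance is 80 px
s = glasses_scale((40.0, 50.0), (80.0, 50.0), (30.0, 20.0), (110.0, 20.0))
w, h = scaled_size(200, 70, s)
```

Scaling both dimensions by the same factor preserves the aspect ratio of the glasses picture, which matches the "proportional scaling" wording of the method.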
In step S104, the fitting position between the face image and the glasses image is calculated according to the nose-bridge coordinate, and the face image wearing the glasses is obtained from the fitting position and the glasses image. The fitting position denotes where the face image and the glasses image overlap; because the glasses image lies over the upper part of the face image, the nose-bridge coordinate must be calculated so as to determine the position of the glasses image on the nose bridge. Furthermore, because the nose-bridge coordinate is uniquely determined and is not affected by eyeball rotation, the fitting position is obtained from the nose-bridge coordinate. Preferably in this embodiment, the face image wearing the glasses is obtained from the fitting position in combination with the glasses image.
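Aligning the nose-pad center of the glasses image with the face's nose-bridge point, as step S104 describes, amounts to one coordinate subtraction. A sketch (names are illustrative):

```python
def paste_offset(nose_bridge, nosepad_center):
    """Top-left offset at which the (already scaled) glasses image should be
    pasted so that its nose-pad center lands on the face's nose-bridge point.
    nose_bridge: (x, y) in face-image coordinates;
    nosepad_center: (x, y) in glasses-image coordinates."""
    return (nose_bridge[0] - nosepad_center[0],
            nose_bridge[1] - nosepad_center[1])

off = paste_offset((60, 55), (50, 12))
# sanity: adding the nose-pad coordinates back recovers the nose-bridge point
check = (off[0] + 50, off[1] + 12)
```

Because the offset is computed from the nose bridge rather than the pupils, it stays stable even when the eyes move, as the paragraph above notes.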
In step S105, the resulting face image is output, completing the virtual try-on. When the face image is output, 2D picture formats, 3D picture formats and film formats can be produced, displayed by a PC or a handheld mobile device.
Fig. 2 is a flow diagram of the specific procedure for acquiring the user face images in Fig. 1.
In step S201, the user face images are acquired; this denotes the time period from when the user starts the acquisition function until the acquisition function ends.
In step S202, the user first rotates 90 degrees to the left from the original position, then 90 degrees to the right, and then returns to the original position. Rotating 90 degrees to the left serves to acquire the complete image information of the right side of the face, namely the 45-degree right view; likewise, rotating 90 degrees to the right serves to acquire the complete image information of the left side of the face, namely the 45-degree left view. The acquisition requires the user to rotate in the horizontal direction.
Preferably in this embodiment, a group of facial photographs of the user can be acquired by a PC camera or the front camera of a mobile phone; during acquisition the user can be prompted to first rotate 90 degrees to the left, return to the front, then rotate 90 degrees to the right, and return to the front again. While the user rotates, the camera captures several face pictures per second and transmits them to the back-end server. After receiving a face picture, the server detects it, finds the position coordinates of the eyes and the nose bridge, calculates the scaling of the glasses picture according to the interpupillary distance of the two eyes, calculates the fitting position of the glasses and the face picture according to the nose-bridge position, and uses the scaling and the fitting position to generate the face picture wearing the glasses.
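The server-side compositing step just described, pasting the scaled glasses picture onto the face picture at the fitting position, can be sketched with a plain alpha blend. This is a generic technique chosen for illustration; the patent does not specify a blending method, and the sketch assumes the glasses image fits entirely within the face image:

```python
import numpy as np

def overlay(face, glasses_rgba, offset):
    """Alpha-blend an RGBA glasses image onto an RGB face image.
    offset: (x, y) top-left position of the glasses image on the face.
    Assumes the glasses region lies fully inside the face image."""
    out = face.astype(np.float64).copy()
    x, y = offset
    h, w = glasses_rgba.shape[:2]
    alpha = glasses_rgba[:, :, 3:4] / 255.0
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = alpha * glasses_rgba[:, :, :3] + (1 - alpha) * region
    return out.astype(np.uint8)

face = np.full((10, 10, 3), 200, dtype=np.uint8)   # uniform gray face
glasses = np.zeros((2, 4, 4), dtype=np.uint8)      # black frame, RGBA
glasses[:, :, 3] = 255                              # fully opaque
result = overlay(face, glasses, (3, 4))
```

A production version would clip the paste region at the image borders and could feather the alpha channel for softer frame edges.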
Alternatively, in step S203 the user first rotates 90 degrees to the right from the original position, then 90 degrees to the left, and then returns to the original position; during image acquisition there is no restriction on the rotation direction, angle or order.
In step S204, the original position is the position in which the user's line of sight is horizontal and facing forward.
Fig. 3 is a schematic diagram of obtaining the glasses image in Fig. 1.
Preferably in this embodiment, the virtual try-on method acquires at least one group of user face images; performs recognition and detection on them to obtain the eye coordinates and the nose-bridge coordinate; obtains the glasses image by proportionally scaling it according to the interpupillary distance between the two eyes; calculates the fitting position between the face image and the glasses image according to the nose-bridge coordinate, and obtains the face image wearing the glasses from the fitting position and the glasses image; and outputs the resulting face image, completing the virtual try-on. Specifically, the scaling is performed by calculating the glasses image according to the distance from the left-eye center to the right-eye center of the user's face and the distance from the left-lens center to the right-lens center of the glasses. In Fig. 3, point a denotes the center of the left lens and point b denotes the center of the right lens; the proportional scaling is carried out using the distance from the left-lens center to the right-lens center and the distance between the centers of the left and right eyes.
Fig. 4 is a schematic diagram of the fitting-position determination method in Fig. 1.
Preferably in this embodiment, the virtual try-on method acquires at least one group of user face images; performs recognition and detection on them to obtain the eye coordinates and the nose-bridge coordinate; obtains the glasses image by proportionally scaling it according to the interpupillary distance between the two eyes; calculates the fitting position between the face image and the glasses image according to the nose-bridge coordinate, and obtains the face image wearing the glasses from the fitting position and the glasses image; and outputs the resulting face image, completing the virtual try-on. Specifically, the fitting position is the position at which the center point of the nose pads in the glasses image coincides with the nose-bridge point of the face; as shown in Fig. 4, point c is an overlap position of the nose-pad center point and the nose-bridge point.
Fig. 5 is a structural diagram of the virtual try-on apparatus in one embodiment of the invention.
The virtual try-on apparatus in this embodiment includes the following parts:
Client 101, which includes: acquisition unit 1011, recognition unit 1012, forming unit 1013 and fitting unit 1014. Preferably in this embodiment, the client 101 is a desktop computer, a notebook computer, a smartphone or a tablet.
The acquisition unit 1011 is configured to acquire at least one group of user face images. Preferably in this embodiment, the acquisition unit is a PC camera or a mobile-phone camera, or any desktop or handheld device with an image-acquisition function.
The recognition unit 1012 is configured to perform recognition and detection on the user face images to obtain the eye coordinates and the nose-bridge coordinate of the face.
The forming unit 1013 is configured to obtain the glasses image by proportionally scaling it according to the interpupillary distance between the two eyes given by the eye coordinates.
The fitting unit 1014 is configured to calculate the fitting position between the face image and the glasses image according to the nose-bridge coordinate, and to obtain the face image wearing the glasses from the fitting position and the glasses image.
The application server 102 is configured to receive request information input by the client and to respond to it. Preferably in this embodiment, the application server initiates requests to a local database or to the cloud in order to perform data retrieval and synchronization.
The output unit 103 is configured to output the resulting face images as a sequence of consecutive images obtained by rotation in a fixed direction, including but not limited to web, Android and iOS application modes. Outputting the consecutive rotation images means that when the user slides to the left with a mouse or finger on the front end, the front-end interface shows the continuous effect of the bespectacled user turning to the left (in essence, the computer or phone interface shows a group of static pictures in which the face angle changes gradually; because the change in angle between pictures is small, the human eye cannot distinguish them and perceives a dynamically changing effect); when the user slides to the right, the interface shows the bespectacled user turning to the right. Preferably, the output unit outputs 2D picture formats, 3D picture formats, and film formats.
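The front-end behaviour described above, a horizontal drag stepping through the pre-rendered rotation frames, amounts to mapping the drag distance to a frame index. A minimal sketch (the pixels-per-frame value is illustrative, not from the patent):

```python
def frame_for_drag(start_index, drag_pixels, n_frames, pixels_per_frame=10):
    """Map a horizontal drag (positive = rightward) to an index into the
    pre-rendered rotation sequence, clamped to the available frames."""
    step = drag_pixels // pixels_per_frame
    return max(0, min(n_frames - 1, start_index + step))

# drag left by 85 px from the middle frame of a 61-frame sequence
i = frame_for_drag(30, -85, 61)
```

Because consecutive frames differ by only a small angle, stepping one frame per few pixels of drag produces the smooth turning effect the paragraph describes.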
Those of ordinary skill in the art will understand that the above are merely specific embodiments of the invention and do not limit it; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A virtual try-on method, characterized by comprising:
acquiring at least one group of user face images;
performing recognition and detection on the user face images to obtain the eye coordinates and the nose-bridge coordinate of the face;
obtaining a glasses image by proportionally scaling it according to the interpupillary distance between the two eyes given by the eye coordinates;
calculating the fitting position between the face image and the glasses image according to the nose-bridge coordinate, and obtaining a face image wearing the glasses from the fitting position and the glasses image;
outputting the resulting face image to complete the virtual try-on.
2. The virtual try-on method according to claim 1, characterized in that when the user face images are acquired, the user first rotates 90 degrees to the left from the original position, then 90 degrees to the right, and then returns to the original position; or the user first rotates 90 degrees to the right from the original position, then 90 degrees to the left, and then returns to the original position; the original position being the position in which the user's line of sight is horizontal and facing forward.
3. The virtual try-on method according to claim 1, characterized in that the AAM algorithm is adopted to locate the feature points of the user's eyes and nose bridge, obtaining the eye coordinates and the nose-bridge coordinate.
4. The virtual try-on method according to claim 1, characterized in that the scaling is performed by calculating the glasses image according to the distance from the left-eye center to the right-eye center of the user's face and the distance from the left-lens center to the right-lens center of the glasses.
5. The virtual try-on method according to claim 1, characterized in that the fitting position is the position at which the center point of the nose pads in the glasses image coincides with the nose-bridge point of the face.
6. A virtual try-on apparatus, characterized by comprising:
a client, the client comprising an acquisition unit, a recognition unit, a forming unit and a fitting unit,
the acquisition unit being configured to acquire at least one group of user face images;
the recognition unit being configured to perform recognition and detection on the user face images to obtain the eye coordinates and the nose-bridge coordinate of the face;
the forming unit being configured to obtain a glasses image by proportionally scaling it according to the interpupillary distance between the two eyes given by the eye coordinates;
the fitting unit being configured to calculate the fitting position between the face image and the glasses image according to the nose-bridge coordinate, and to obtain a face image wearing the glasses from the fitting position and the glasses image;
an application server configured to receive request information input by the client and to respond to it;
an output unit configured to output the resulting face images as a sequence of consecutive images obtained by rotation in a fixed direction.
7. The virtual try-on apparatus according to claim 6, characterized in that the client is a desktop computer, a notebook computer, a smartphone or a tablet.
8. The virtual try-on apparatus according to claim 6, characterized in that the acquisition unit is a PC camera or a mobile-phone camera.
9. The virtual try-on apparatus according to claim 6, characterized in that the application server initiates requests to a local database or to the cloud in order to perform data retrieval and synchronization.
10. The virtual try-on apparatus according to claim 6, characterized in that the output unit outputs 2D picture formats, 3D picture formats, and film formats.
CN201610113132.4A 2016-02-29 2016-02-29 Virtualized wearing method and virtualized wearing apparatus Pending CN105809507A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610113132.4A CN105809507A (en) 2016-02-29 2016-02-29 Virtualized wearing method and virtualized wearing apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610113132.4A CN105809507A (en) 2016-02-29 2016-02-29 Virtualized wearing method and virtualized wearing apparatus

Publications (1)

Publication Number Publication Date
CN105809507A true CN105809507A (en) 2016-07-27

Family

ID=56465992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610113132.4A Pending CN105809507A (en) 2016-02-29 2016-02-29 Virtualized wearing method and virtualized wearing apparatus

Country Status (1)

Country Link
CN (1) CN105809507A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570747A (en) * 2016-11-03 2017-04-19 济南博图信息技术有限公司 Glasses online adaption method and system combining hand gesture recognition
CN107025628A (en) * 2017-04-26 2017-08-08 广州帕克西软件开发有限公司 A kind of virtual try-in method of 2.5D glasses and device
CN107507050A (en) * 2017-07-13 2017-12-22 李考亮 People near being based upon provides visit online glasses marketing method and system with mirror
CN107578319A (en) * 2017-09-19 2018-01-12 无锡宏治视光科技有限公司 Multifunctional management system for glasses sale
CN109063539A (en) * 2018-06-08 2018-12-21 平安科技(深圳)有限公司 The virtual usual method of glasses, device, computer equipment and storage medium
TWI663561B (en) * 2017-06-02 2019-06-21 視鏡科技股份有限公司 Virtual glasses matching method and system
CN109978655A (en) * 2019-01-14 2019-07-05 明灏科技(北京)有限公司 A kind of virtual frame matching method and system
CN110348936A (en) * 2019-05-23 2019-10-18 珠海随变科技有限公司 A kind of glasses recommended method, device, system and storage medium
CN110477858A (en) * 2018-05-15 2019-11-22 深圳市斯尔顿科技有限公司 Tested eye alignment methods, device and Ophthalmologic apparatus
CN112328084A (en) * 2020-11-12 2021-02-05 北京态璞信息科技有限公司 Positioning method and device of three-dimensional virtual glasses and electronic equipment
CN112733570A (en) * 2019-10-14 2021-04-30 北京眼神智能科技有限公司 Glasses detection method and device, electronic equipment and storage medium
CN113673461A (en) * 2021-08-26 2021-11-19 深圳随锐云网科技有限公司 Method and device for realizing selection of human face and human figure region based on 4K + AI

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103400119A (en) * 2013-07-31 2013-11-20 南京融图创斯信息科技有限公司 Face recognition technology-based mixed reality spectacle interactive display method
CN103413118A (en) * 2013-07-18 2013-11-27 毕胜 On-line glasses try-on method

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570747A (en) * 2016-11-03 2017-04-19 济南博图信息技术有限公司 Glasses online adaptation method and system combining hand gesture recognition
CN107025628A (en) * 2017-04-26 2017-08-08 广州帕克西软件开发有限公司 2.5D glasses virtual try-on method and device
TWI663561B (en) * 2017-06-02 2019-06-21 視鏡科技股份有限公司 Virtual glasses matching method and system
CN107507050A (en) * 2017-07-13 2017-12-22 李考亮 Online glasses marketing method and system based on providing try-on to nearby people
CN107578319A (en) * 2017-09-19 2018-01-12 无锡宏治视光科技有限公司 Multifunctional management system for glasses sale
CN110477858A (en) * 2018-05-15 2019-11-22 深圳市斯尔顿科技有限公司 Eye to be inspected alignment method, device and ophthalmic equipment
CN110477858B (en) * 2018-05-15 2023-11-28 深圳莫廷医疗科技有限公司 Eye to be inspected alignment method, device and ophthalmic equipment
CN109063539A (en) * 2018-06-08 2018-12-21 平安科技(深圳)有限公司 Virtual glasses wearing method and device, computer equipment and storage medium
WO2019232871A1 (en) * 2018-06-08 2019-12-12 平安科技(深圳)有限公司 Glasses virtual wearing method and apparatus, and computer device and storage medium
CN109063539B (en) * 2018-06-08 2023-04-18 平安科技(深圳)有限公司 Virtual glasses wearing method and device, computer equipment and storage medium
CN109978655A (en) * 2019-01-14 2019-07-05 明灏科技(北京)有限公司 Virtual spectacle frame matching method and system
CN110348936A (en) * 2019-05-23 2019-10-18 珠海随变科技有限公司 Glasses recommendation method, device, system and storage medium
CN112733570A (en) * 2019-10-14 2021-04-30 北京眼神智能科技有限公司 Glasses detection method and device, electronic equipment and storage medium
CN112733570B (en) * 2019-10-14 2024-04-30 北京眼神智能科技有限公司 Glasses detection method and device, electronic equipment and storage medium
CN112328084A (en) * 2020-11-12 2021-02-05 北京态璞信息科技有限公司 Positioning method and device of three-dimensional virtual glasses and electronic equipment
CN113673461A (en) * 2021-08-26 2021-11-19 深圳随锐云网科技有限公司 Method and device for realizing face and human shape area selection based on 4K+AI
CN113673461B (en) * 2021-08-26 2024-03-26 深圳随锐云网科技有限公司 Method and device for realizing face and human shape area selection based on 4K+AI

Similar Documents

Publication Publication Date Title
CN105809507A (en) Virtualized wearing method and virtualized wearing apparatus
US11215845B2 (en) Method, device, and computer program for virtually adjusting a spectacle frame
US10691927B2 (en) Image deformation processing method and apparatus, and computer storage medium
US9262671B2 (en) Systems, methods, and software for detecting an object in an image
EP3745352B1 (en) Methods and systems for determining body measurements and providing clothing size recommendations
CN103413118B (en) Online glasses try-on method
CN109801380A (en) Virtual fitting method and apparatus, storage medium and computer equipment
CN109063695A (en) Face key point detection method and apparatus, and computer storage medium
JP2001268594A (en) Client server system for three-dimensional beauty simulation
CN103489107B (en) Method and apparatus for making a virtual fitting model image
CN103559489A (en) Method for extracting features of palm in non-contact imaging mode
US11908150B2 (en) System and method for mobile 3D scanning and measurement
KR20170016578A (en) Clothes fitting system and operation method thereof
CN106570747A (en) Glasses online adaptation method and system combining hand gesture recognition
Boukamcha et al. Automatic landmark detection and 3D Face data extraction
Galantucci et al. Coded targets and hybrid grids for photogrammetric 3D digitisation of human faces
TW202016881A (en) Program, information processing device, quantification method, and information processing system
CN110603508B (en) Media content tracking
Xu et al. Object restoration based on extrinsic reflective symmetry plane detection
KR20200020342A (en) A method for inputting body shape information on a terminal and a method for wearing virtual clothing based on inputted body shape information and a system therefor
KR20080055622A (en) Ergonomic human computer interface
Fang et al. Automatic head and facial feature extraction based on geometry variations
Mizuchi et al. Monocular 3d palm posture estimation based on feature-points robust against finger motion
CN106462726A (en) Frame recognition system and method
CN109393614A (en) Software tool for dimensional measurement for made-to-measure tailoring

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160727