CN106570747A - Glasses online adaption method and system combining hand gesture recognition - Google Patents

Glasses online adaption method and system combining hand gesture recognition Download PDF

Info

Publication number
CN106570747A
Authority
CN
China
Prior art keywords
glasses
models
face
human face
online
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610976517.3A
Other languages
Chinese (zh)
Inventor
刘治
宿方琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan Boyto Information Technology Co Ltd
Original Assignee
Jinan Boyto Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan Boyto Information Technology Co Ltd filed Critical Jinan Boyto Information Technology Co Ltd
Priority to CN201610976517.3A priority Critical patent/CN106570747A/en
Publication of CN106570747A publication Critical patent/CN106570747A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0621Item configuration or customization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0641Shopping interfaces
    • G06Q30/0643Graphical representation of items or shoppers

Abstract

The present invention discloses a glasses online fitting method and system combining gesture recognition. An image acquisition device captures face image information and transmits it to a server, in which an association between gesture information and pre-stored 3D glasses models is preset. The method comprises the steps of: receiving the face image information, locating the key facial landmark positions, obtaining the interpupillary distance and facial feature points, and constructing a 3D face model; establishing, from the facial feature points and the 3D face model, a head pose estimation model that describes the deflection of the head relative to the image acquisition device in three-dimensional space; receiving and recognizing gesture information, and automatically loading the 3D glasses model corresponding to the gesture information onto the eye region of the head pose estimation model; and adjusting the frame width of the 3D glasses model according to the interpupillary distance, completing the online fitting, and outputting the width-adjusted 3D glasses model.

Description

Glasses online fitting method and system combining gesture recognition
Technical field
The invention belongs to the field of image processing and pattern recognition, and in particular relates to a glasses online fitting method and system combining gesture recognition.
Background technology
In recent years, with the continuous development of Internet technology, online shopping has become a popular consumption pattern. In the traditional Internet glasses sales model, the seller posts photos of the glasses from various angles, and the user selects glasses by browsing these photos. However, photos alone cannot give the consumer a physical sense of wearing the glasses or convey the detailed wearing effect. Meanwhile, under the traditional sales model, merchants of different shop scales cannot stock a full range of eyewear products, nor can they offer personalized, made-to-order glasses according to the user's preferences. Virtual try-on technology based on augmented reality can make up for these deficiencies of conventional Internet sales, allowing consumers to try on glasses without leaving home.
At present there are mainly three online glasses try-on technologies: 1) Try-on based on static glasses pictures. A 2D glasses picture is loaded directly onto the eyes of the face. This method is simple and effective, but it can only handle frontal faces, has no generality for cases such as face deflection, cannot give feedback in response to the user's head movement, and therefore gives a poor user experience. 2) 3D glasses try-on. A 3D glasses model is loaded onto a 2D face while the face motion is tracked, providing the user with a multi-angle wearing effect. 3) 3D glasses try-on based on a 3D face. Here not only the glasses but also the face is three-dimensional, which solves well the occlusion problem caused by face deflection; however, 3D face reconstruction remains difficult at present and requires relatively costly 3D capture equipment.
Summary of the invention
In order to overcome the shortcomings of the prior art, the first object of the present invention is to provide a glasses online fitting method combining gesture recognition.
To achieve the above object, the present invention adopts the following technical solution:
A glasses online fitting method combining gesture recognition, in which an image acquisition device captures face image information and sends it to a server, the fitting is completed in the server, and an association between gesture information and pre-stored 3D glasses models is preset in the server, specifically comprising the following steps:
receiving the face image information, locating the key facial landmark positions, obtaining the interpupillary distance and facial feature points, and constructing a 3D face model, the key facial landmarks including the eye corners, mouth corners and nose tip;
establishing, from the facial feature points and the 3D face model, a head pose estimation model that describes the deflection of the head relative to the image acquisition device in three-dimensional space;
receiving and recognizing gesture information and, according to the association between gesture information and the pre-stored 3D glasses models, automatically loading the 3D glasses model corresponding to the gesture information onto the eye region of the head pose estimation model;
adjusting the frame width of the 3D glasses model according to the interpupillary distance, completing the online fitting, and outputting the width-adjusted 3D glasses model.
The present invention first presets in the server the association between gesture information and the pre-stored 3D glasses models; builds, from the acquired face image information, a 3D face model and a head pose estimation model describing the deflection of the head relative to the image acquisition device in three-dimensional space; then, according to the gesture information, automatically loads the corresponding 3D glasses model onto the eye region of the head pose estimation model, so that the 3D glasses model moves with the head; and finally adjusts the frame width of the 3D glasses model according to the interpupillary distance and outputs the width-adjusted model, truly presenting the effect of the user trying on the glasses.
After the 3D glasses model corresponding to the gesture information has been loaded onto the eye region of the head pose estimation model, the method further comprises rendering the 3D glasses model, so that the model can be rendered in the color the user prefers and the user can try on glasses in that color.
After the face image information is captured, a facial landmark localization algorithm locates the key facial landmark positions and then obtains the facial feature points.
Facial feature point fitting further locates the positions of the key facial landmarks on the basis of face detection. Mainstream facial landmark localization algorithms include the active shape model (ASM), the active appearance model (AAM) and the constrained local model (CLM); these are typical face localization algorithms and can locate the key facial landmark positions accurately.
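On top of such landmark output, the pupil centers and their pixel distance can be approximated from the eye-contour points. A minimal sketch, assuming the common iBUG 68-point indexing (indices 36-41 for one eye and 42-47 for the other; this indexing is an assumption to verify against the landmark model actually used, and is not specified by the patent):

```python
def pupil_pixel_distance(landmarks):
    """Approximate the pixel distance between the two pupil centers
    from a 68-point landmark list of (x, y) tuples, taking each pupil
    center as the centroid of its eye-contour points."""
    def centroid(points):
        xs, ys = zip(*points)
        return sum(xs) / len(xs), sum(ys) / len(ys)

    left = centroid(landmarks[36:42])    # left-eye contour (assumed iBUG indices)
    right = centroid(landmarks[42:48])   # right-eye contour (assumed iBUG indices)
    return ((left[0] - right[0]) ** 2 + (left[1] - right[1]) ** 2) ** 0.5

# Synthetic landmarks: eye contours centered at x=100 and x=300, same height.
pts = [(0.0, 0.0)] * 36 \
    + [(100.0 + dx, 50.0) for dx in (-2, -1, 0, 0, 1, 2)] \
    + [(300.0 + dx, 50.0) for dx in (-2, -1, 0, 0, 1, 2)] \
    + [(0.0, 0.0)] * 20
print(pupil_pixel_distance(pts))  # 200.0
```

Using the centroid of the six contour points is a cheap stand-in for an explicit pupil detector and is robust to small per-point localization noise.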
The points on the 3D face model correspond one-to-one with the located facial landmark points, so that the 3D face model displays the face and its feature point information more accurately.
The deflection of the head relative to the image acquisition device in three-dimensional space is estimated with the POSIT algorithm, which can compute this deflection accurately.
The second object of the present invention is to provide a glasses online fitting system combining gesture recognition, which can truly present the effect of the user trying on glasses.
The glasses online fitting system combining gesture recognition of the present invention comprises:
a 3D face model building module, which is used to receive face image information, locate the key facial landmark positions, obtain the interpupillary distance and facial feature points, and construct a 3D face model, the key facial landmarks including the eye corners, mouth corners and nose tip;
a head pose estimation model building module, which is used to establish, from the facial feature points and the 3D face model, a head pose estimation model describing the deflection of the head relative to the image acquisition device in three-dimensional space;
a 3D glasses model automatic loading module, which is used to receive and recognize gesture information and, according to the association between gesture information and the pre-stored 3D glasses models, automatically load the 3D glasses model corresponding to the gesture information onto the eye region of the head pose estimation model;
a 3D glasses model adaptation module, which is used to adjust the frame width of the 3D glasses model according to the interpupillary distance, complete the online fitting, and output the width-adjusted 3D glasses model.
The glasses online fitting system combining gesture recognition of the present invention presets the association between gesture information and the pre-stored 3D glasses models; builds, from the acquired face image information, a 3D face model and a head pose estimation model describing the deflection of the head relative to the image acquisition device in three-dimensional space; then, according to the gesture information, automatically loads the corresponding 3D glasses model onto the eye region of the head pose estimation model, so that the 3D glasses model moves with the head; and finally adjusts the frame width of the 3D glasses model according to the interpupillary distance and outputs the width-adjusted model, truly presenting the effect of the user trying on the glasses.
The system further comprises a rendering module, which is used to render the 3D glasses models.
The 3D face model building module is further configured to locate the key facial landmark positions with a facial landmark localization algorithm and then obtain the facial feature points.
The points on the 3D face model correspond one-to-one with the located facial landmark points.
The head pose estimation model building module is further configured to estimate, with the POSIT algorithm, the deflection of the head relative to the image acquisition device in three-dimensional space.
The third object of the present invention is to provide another glasses online fitting system combining gesture recognition.
This further system provided by the present invention comprises:
an image acquisition device, configured to capture face image information;
a server, configured to:
preset the association between gesture information and the pre-stored 3D glasses models;
receive the face image information, locate the key facial landmark positions, obtain the interpupillary distance and facial feature points, and construct a 3D face model, the key facial landmarks including the eye corners, mouth corners and nose tip;
establish, from the facial feature points and the 3D face model, a head pose estimation model describing the deflection of the head relative to the image acquisition device in three-dimensional space;
receive and recognize gesture information and, according to the association between gesture information and the pre-stored 3D glasses models, automatically load the 3D glasses model corresponding to the gesture information onto the eye region of the head pose estimation model;
adjust the frame width of the 3D glasses model according to the interpupillary distance, complete the online fitting, and output the width-adjusted 3D glasses model.
The server is further configured to render the 3D glasses models.
The system further comprises a display device, which is used to display the output width-adjusted 3D glasses model.
The server is further configured to estimate, with the POSIT algorithm, the deflection of the head relative to the image acquisition device in three-dimensional space.
The image acquisition device is a video camera.
The beneficial effects of the present invention are:
(1) The method of the present invention first presets in the server the association between gesture information and the pre-stored 3D glasses models; builds, from the acquired face image information, a 3D face model and a head pose estimation model describing the deflection of the head relative to the image acquisition device in three-dimensional space; then, according to the gesture information, automatically loads the corresponding 3D glasses model onto the eye region of the head pose estimation model, so that the 3D glasses model moves with the head; and finally adjusts the frame width of the 3D glasses model according to the interpupillary distance and outputs the width-adjusted model, truly presenting the effect of the user trying on the glasses.
(2) The glasses online fitting system combining gesture recognition of the present invention likewise presets the association between gesture information and the pre-stored 3D glasses models, builds the 3D face model and the head pose estimation model from the acquired face image information, loads the 3D glasses model corresponding to the recognized gesture onto the eye region so that it moves with the head, and finally adjusts the frame width according to the interpupillary distance and outputs the width-adjusted model, truly presenting the try-on effect.
(3) The present invention also renders the 3D glasses models, so that a model can be rendered in the color the user prefers and the user can try on glasses in that color.
Description of the drawings
Fig. 1 is a schematic flow chart of a glasses online fitting method combining gesture recognition according to the present invention;
Fig. 2 is a schematic diagram of the interpupillary distance measurement reference object and measuring method designed by the present invention;
Fig. 3 is a schematic structural diagram of a glasses online fitting system combining gesture recognition according to the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, and not all, of the embodiments of the invention.
Embodiment one
Fig. 1 is a schematic flow chart of a glasses online fitting method combining gesture recognition according to the present invention. As shown in Fig. 1, in this method an image acquisition device captures face image information and sends it to a server, where the fitting is completed; an association between gesture information and pre-stored 3D glasses models is preset in the server. The method specifically comprises the following steps:
Step 1: receive the face image information, locate the key facial landmark positions, obtain the interpupillary distance and facial feature points, and construct a 3D face model; the key facial landmarks include the eye corners, mouth corners and nose tip.
In a specific implementation, after the face image information is captured, a facial landmark localization algorithm locates the key facial landmark positions and then obtains the facial feature points.
Facial feature point fitting further locates the positions of the key facial landmarks on the basis of face detection. Mainstream facial landmark localization algorithms include the active shape model (ASM), the active appearance model (AAM) and the constrained local model (CLM); these are typical face localization algorithms and can locate the key facial landmark positions accurately.
For the interpupillary distance, the present invention estimates the actual distance between the two pupils from the image obtained by the image acquisition device. A specific embodiment is as follows:
(1) The user obtains a marker and attaches it to the forehead, as shown in Fig. 2.
(2) The user faces the image acquisition device, which captures the user's face image and detects the sides of the white square in the reference object, as shown in Fig. 2. Let the actual side length of the white square be d, the pixel length of that side in the image be D, and the pixel distance between the two pupils in the image be EyePixelLength; then the interpupillary distance EyeDist is:
EyeDist = (d / D) × EyePixelLength
The image acquisition device can be implemented with a camera or video camera. With the online interpupillary distance measuring method adopted by the present invention, the user can easily obtain the reference object and accurately estimate the interpupillary distance of the two eyes; the measuring process is simple and effective.
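The pixel-to-metric conversion above is a simple proportionality through the reference square. A minimal sketch in Python (function and parameter names are illustrative, not from the patent):

```python
def interpupillary_distance(d_mm: float, D_px: float, eye_pixel_length: float) -> float:
    """Estimate the actual interpupillary distance (EyeDist).

    d_mm: actual side length of the white reference square (e.g. in mm)
    D_px: pixel length of that side in the captured image
    eye_pixel_length: pixel distance between the two pupil centers
    Returns the interpupillary distance in the same unit as d_mm.
    """
    if D_px <= 0:
        raise ValueError("reference square not detected")
    mm_per_pixel = d_mm / D_px            # metric scale of the image plane
    return mm_per_pixel * eye_pixel_length

# Example: a 20 mm square spans 80 px and the pupils are 248 px apart -> 62 mm.
print(interpupillary_distance(20.0, 80.0, 248.0))  # 62.0
```

The estimate assumes the square and the eyes lie at roughly the same depth from the camera, which the forehead placement in Fig. 2 is designed to ensure.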
Step 2: according to the facial feature points and the 3D face model, establish a head pose estimation model describing the deflection of the head relative to the image acquisition device in three-dimensional space.
The points on the 3D face model correspond one-to-one with the located facial landmark points, so that the 3D face model displays the face and its feature point information more accurately.
The deflection of the head relative to the image acquisition device in three-dimensional space is estimated with the POSIT algorithm, which can compute this deflection accurately.
The present invention adopts a head pose estimation method based on POSIT and a 3D face model. First, a 3D face model is established whose points correspond one-to-one with the located facial landmark points; then the POSIT algorithm estimates the deflection matrix R of the face relative to the camera (formula 1), and the head attitude parameters roll, yaw and pitch are computed from it (formula 2).
Let the unit vector basis of the camera coordinate system be (i, j, k). Expressed in the 3D face model coordinate system, the unit vector of the i axis is (r11, r12, r13); likewise the unit vector of the j axis is (r21, r22, r23) and the unit vector of the k axis is (r31, r32, r33). The deflection matrix of the face relative to the camera is then
R = [[r11, r12, r13], [r21, r22, r23], [r31, r32, r33]]    (formula 1)
where the elements rij (i = 1, 2, 3; j = 1, 2, 3) are estimated by the POSIT algorithm.
The attitude angles are computed as
roll = atan2(r21, r11), yaw = atan2(-r31, sqrt(r32^2 + r33^2)), pitch = atan2(r32, r33)    (formula 2)
where roll, yaw and pitch are the rotation angles of the head about the z, y and x axes respectively, and atan2 is the two-argument arctangent function.
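Formula 2 is the standard ZYX Euler-angle extraction from a rotation matrix; a small sketch as a check (names are illustrative, and the exact angle convention of the original patent is an assumption):

```python
import math

def euler_from_R(R):
    """Extract (roll, yaw, pitch) about the z, y, x axes from a 3x3
    rotation matrix R given as nested lists, per formula 2 (radians)."""
    r11, r21, r31 = R[0][0], R[1][0], R[2][0]
    r32, r33 = R[2][1], R[2][2]
    roll = math.atan2(r21, r11)                   # rotation about z
    yaw = math.atan2(-r31, math.hypot(r32, r33))  # rotation about y
    pitch = math.atan2(r32, r33)                  # rotation about x
    return roll, yaw, pitch

# A head facing the camera (identity rotation) has zero attitude angles.
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(euler_from_R(identity))
```

A pure in-plane head tilt of angle a gives roll = a with yaw = pitch = 0, which is a quick sanity check on the decomposition.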
The head pose estimation model can estimate the deflection matrix and the translation matrix of the face relative to the camera. The deflection matrix is the rotation of the head relative to the image acquisition device coordinate system; from it the rotation angles of the head relative to the image acquisition device are obtained, so that the loaded glasses can follow the head movement. The translation matrix gives the position of the face in the image acquisition device coordinate system, i.e. the coordinates of the face in the captured image.
Step 3: receive and recognize gesture information and, according to the association between gesture information and the pre-stored 3D glasses models, automatically load the 3D glasses model corresponding to the gesture information onto the eye region of the head pose estimation model.
The gesture information comprises three parts: hand detection, fingertip detection and gesture control signal detection. The server captures the position of the hand region through the image acquisition device, locates the fingertip, and then tracks the fingertip's motion in the glasses selection region. If the fingertip stays over a candidate pair of glasses for more than 1 second, that pair is automatically loaded onto the eye region of the face.
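The dwell-time selection described above can be sketched as a small state machine. A minimal Python sketch (the region layout, class name and firing behavior are illustrative assumptions, not details from the patent):

```python
class DwellSelector:
    """Select a glasses model when the tracked fingertip stays inside
    that model's selection region longer than a dwell threshold (1 s here)."""

    def __init__(self, regions, dwell_s=1.0):
        self.regions = regions      # {model_id: (x, y, w, h)} selection boxes
        self.dwell_s = dwell_s
        self._current = None        # region currently hovered
        self._since = 0.0           # time the current hover started

    def update(self, fingertip, t):
        """Feed one fingertip position (x, y) at time t (seconds).
        Returns a model_id once its dwell threshold is reached, else None."""
        hit = None
        for model_id, (x, y, w, h) in self.regions.items():
            if x <= fingertip[0] < x + w and y <= fingertip[1] < y + h:
                hit = model_id
                break
        if hit != self._current:    # entered a new region (or left all regions)
            self._current, self._since = hit, t
            return None
        if hit is not None and t - self._since >= self.dwell_s:
            self._current = None    # fire once, then reset the timer
            return hit
        return None

# Fingertip hovers over the region of model "A" for 1.2 s -> "A" is selected.
sel = DwellSelector({"A": (0, 0, 100, 100), "B": (120, 0, 100, 100)})
print(sel.update((50, 50), 0.0))   # None (hover just started)
print(sel.update((55, 52), 1.2))   # A
```

Resetting the timer whenever the fingertip leaves a region prevents a quick sweep across the selection area from triggering an unwanted load.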
Step 4: adjust the frame width of the 3D glasses model according to the interpupillary distance, complete the online fitting, and output the width-adjusted 3D glasses model.
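Step 4 amounts to scaling the loaded glasses model so that its lens centers match the measured interpupillary distance. A minimal sketch under that assumption (the model dimensions and function names are hypothetical, not from the patent):

```python
def frame_scale(eye_dist_mm: float, model_lens_center_dist_mm: float) -> float:
    """Uniform scale factor that makes the distance between the model's
    lens centers equal the user's measured interpupillary distance."""
    return eye_dist_mm / model_lens_center_dist_mm

def fit_frame_width(eye_dist_mm: float, model_frame_width_mm: float,
                    model_lens_center_dist_mm: float) -> float:
    """Frame width of the glasses model after applying the uniform scale."""
    return model_frame_width_mm * frame_scale(eye_dist_mm, model_lens_center_dist_mm)

# A 140 mm-wide frame whose lens centers sit 64 mm apart,
# fitted to a user with a 62 mm interpupillary distance:
print(fit_frame_width(62.0, 140.0, 64.0))  # 135.625
```

A uniform scale keeps the frame's proportions intact; a production fitter might instead stretch only the bridge, but the patent only specifies width adjustment from the interpupillary distance.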
The method of this embodiment first presets in the server the association between gesture information and the pre-stored 3D glasses models; builds, from the acquired face image information, a 3D face model and a head pose estimation model describing the deflection of the head relative to the image acquisition device in three-dimensional space; then, according to the gesture information, automatically loads the corresponding 3D glasses model onto the eye region of the head pose estimation model, so that the 3D glasses model moves with the head; and finally adjusts the frame width of the 3D glasses model according to the interpupillary distance and outputs the width-adjusted model, truly presenting the effect of the user trying on the glasses.
Further, after the 3D glasses model corresponding to the gesture information has been loaded onto the eye region of the head pose estimation model, the method also comprises rendering the 3D glasses model, so that it can be rendered in the color the user prefers and the user can try on glasses in that color.
Embodiment two
Fig. 3 is a schematic structural diagram of a glasses online fitting system combining gesture recognition according to the present invention. As shown in Fig. 3, the system comprises:
a 3D face model building module, which is used to receive face image information, locate the key facial landmark positions, obtain the interpupillary distance and facial feature points, and construct a 3D face model, the key facial landmarks including the eye corners, mouth corners and nose tip;
a head pose estimation model building module, which is used to establish, from the facial feature points and the 3D face model, a head pose estimation model describing the deflection of the head relative to the image acquisition device in three-dimensional space;
a 3D glasses model automatic loading module, which is used to receive and recognize gesture information and, according to the association between gesture information and the pre-stored 3D glasses models, automatically load the 3D glasses model corresponding to the gesture information onto the eye region of the head pose estimation model;
a 3D glasses model adaptation module, which is used to adjust the frame width of the 3D glasses model according to the interpupillary distance, complete the online fitting, and output the width-adjusted 3D glasses model.
The system further comprises a rendering module, which is used to render the 3D glasses models.
The 3D face model building module is further configured to locate the key facial landmark positions with a facial landmark localization algorithm and then obtain the facial feature points.
The points on the 3D face model correspond one-to-one with the located facial landmark points.
The head pose estimation model building module is further configured to estimate, with the POSIT algorithm, the deflection of the head relative to the image acquisition device in three-dimensional space.
The glasses online fitting system combining gesture recognition of this embodiment presets the association between gesture information and the pre-stored 3D glasses models; builds, from the acquired face image information, a 3D face model and a head pose estimation model describing the deflection of the head relative to the image acquisition device in three-dimensional space; then, according to the gesture information, automatically loads the corresponding 3D glasses model onto the eye region of the head pose estimation model, so that it moves with the head; and finally adjusts the frame width according to the interpupillary distance and outputs the width-adjusted 3D glasses model, truly presenting the effect of the user trying on the glasses.
Embodiment three
The present invention also provides a kind of online adaption system of glasses of combination gesture identification to be included:Image collecting device and service Device.
Wherein:
(1) image collecting device, which is configured to capture human face image information.
Image collecting device can be achieved using photography/videography.
(2) server, which is configured to:
Preset the incidence relation between gesture information and the 3D glasses models that prestore;
Human face image information is received, the key position point position of locating human face obtains pupil of both eyes spacing and face characteristic Point, and construct 3D faceforms;Wherein, the key position point of face includes canthus position, corners of the mouth position and nose position;
According to the human face characteristic point and 3D faceforms, set up for describing head in three dimensions relative to image The head pose estimation model of harvester amount of deflection;
Gesture information is received and identified, according to the incidence relation of gesture information and the 3D glasses models for prestoring, will be with gesture The corresponding 3D glasses models of information are automatically loaded into the face eye areas of head pose estimation module;
According to pupil of both eyes spacing, the mirror holder width of 3D glasses models is adjusted, glasses is finally completed and is adapted to online and exports 3D glasses models after mirror holder width adjustment.
The server is further configured to estimate the deflection of the head in three-dimensional space relative to the image acquisition device using the POSIT algorithm.
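POSIT (DeMenthon and Davis's "Pose from Orthography and Scaling with ITerations") recovers a rotation and translation from at least four non-coplanar 3D model points and their image projections, without needing an initial pose guess. The patent does not disclose its implementation, so the following is a generic numpy sketch of the classic algorithm, using normalized image coordinates (focal length 1) and the first model point as the reference point:

```python
import numpy as np

def posit(model_pts, image_pts, iterations=20):
    """Classic POSIT: estimate rotation R and translation T such that
    image_pts ~ perspective projection of (R @ p + T) for model points p.

    model_pts: (N, 3) non-coplanar 3D points; point 0 is the reference point.
    image_pts: (N, 2) normalized image coordinates of the same points.
    """
    model_pts = np.asarray(model_pts, dtype=float)
    image_pts = np.asarray(image_pts, dtype=float)
    A = model_pts[1:] - model_pts[0]        # model vectors from the reference point
    B = np.linalg.pinv(A)                   # (3, N-1) pseudo-inverse for the POS step
    w = np.ones(len(model_pts) - 1)         # w_i = 1 + eps_i; start with pure SOP
    for _ in range(iterations):
        # POS step: solve the scaled orthographic projection equations
        xi = image_pts[1:, 0] * w - image_pts[0, 0]
        yi = image_pts[1:, 1] * w - image_pts[0, 1]
        I, J = B @ xi, B @ yi
        s = np.sqrt(np.linalg.norm(I) * np.linalg.norm(J))  # scale = 1 / Tz
        i_vec = I / np.linalg.norm(I)
        j_vec = J / np.linalg.norm(J)
        k_vec = np.cross(i_vec, j_vec)      # camera optical axis in model coordinates
        Tz = 1.0 / s
        # update the perspective correction terms eps_i from the new pose
        w = 1.0 + (A @ k_vec) / Tz
    R = np.vstack([i_vec, j_vec, k_vec])
    T = np.array([image_pts[0, 0] * Tz, image_pts[0, 1] * Tz, Tz])
    return R, T
```

Applied to this system, `model_pts` would be the landmark points of the 3D face model and `image_pts` the corresponding detected facial feature points, with the returned rotation giving the head's deflection relative to the camera.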
Further, the server is also configured to render the 3D glasses model.
The system also includes a display device, which is used to display the output 3D glasses model after the frame width adjustment.
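The association the server presets between gesture information and the pre-stored 3D glasses models amounts to a lookup table keyed by the recognized gesture label. The gesture names and model identifiers below are illustrative assumptions, not part of the patent:

```python
# Hypothetical mapping: recognized gesture label -> pre-stored 3D glasses model id.
GESTURE_TO_MODEL = {
    "swipe_left": "frame_round",
    "swipe_right": "frame_square",
    "thumbs_up": "frame_aviator",
}

def select_glasses_model(gesture_label, current_model=None):
    """Return the 3D glasses model to load for a recognized gesture.

    If the gesture has no associated model, keep whatever model is
    currently loaded onto the eye region (None if nothing is loaded yet).
    """
    return GESTURE_TO_MODEL.get(gesture_label, current_model)
```

On a recognized gesture, the selected model would then be loaded onto the eye region of the head pose estimation model so that it follows the head's motion.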
In the present invention, the association between gesture information and the pre-stored 3D glasses models is first preset in the server. From the acquired face image information, the server builds a 3D face model and a head pose estimation model describing the deflection of the head in three-dimensional space relative to the image acquisition device. Then, according to the gesture information, the 3D glasses model corresponding to the gesture is automatically loaded onto the eye region of the face in the head pose estimation model, so that the 3D glasses model moves with the head. Finally, the frame width of the 3D glasses model is adjusted according to the interpupillary distance, the online fitting is completed, and the adjusted 3D glasses model is output, realistically presenting the effect of the user trying on the glasses.
The server of the present invention also renders the 3D glasses model, so the model can be rendered in a colour the user likes, letting the user try on glasses of that colour.
The glasses online fitting system combined with gesture recognition of this embodiment therefore presets the gesture-to-model association, builds the 3D face model and head pose estimation model from the acquired face image information, loads the gesture-selected 3D glasses model onto the eye region so that it follows the head's motion, and finally adjusts the frame width according to the interpupillary distance and outputs the adjusted model, realistically presenting the try-on effect.
Those of ordinary skill in the art will appreciate that all or part of the processes of the above embodiment methods may be implemented by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Although the specific embodiments of the present invention have been described above with reference to the accompanying drawings, they do not limit the scope of protection of the present invention. Those skilled in the art should understand that various modifications or variations that can be made without creative effort on the basis of the technical solutions of the present invention still fall within the scope of protection of the present invention.

Claims (10)

1. A glasses online fitting method combined with gesture recognition, characterised in that the fitting method is performed in a server in which the association between gesture information and pre-stored 3D glasses models is preset, and that it specifically comprises the following steps:
receiving face image information, locating the positions of the key facial landmarks, obtaining the interpupillary distance and the facial feature points, and constructing a 3D face model, wherein the key facial landmarks include the eye corners, the mouth corners and the nose tip;
building, from the facial feature points and the 3D face model, a head pose estimation model describing the deflection of the head in three-dimensional space relative to an image acquisition device;
receiving and recognizing gesture information and, according to the association between gesture information and the pre-stored 3D glasses models, automatically loading the 3D glasses model corresponding to the gesture information onto the eye region of the face in the head pose estimation model;
adjusting the frame width of the 3D glasses model according to the interpupillary distance, completing the online fitting, and outputting the 3D glasses model after the frame width adjustment.
2. The glasses online fitting method combined with gesture recognition according to claim 1, characterised in that, after the 3D glasses model corresponding to the gesture information is automatically loaded onto the eye region of the face in the head pose estimation model, the method further comprises rendering the 3D glasses model.
3. The glasses online fitting method combined with gesture recognition according to claim 1, characterised in that, after the face image information is captured, a facial landmark localization algorithm is used to locate the positions of the key facial landmarks and further obtain the facial feature points.
4. The glasses online fitting method combined with gesture recognition according to claim 1, characterised in that the points of the 3D face model correspond one-to-one with the localized facial landmark points.
5. The glasses online fitting method combined with gesture recognition according to claim 1, characterised in that the POSIT algorithm is used to estimate the deflection of the head in three-dimensional space relative to the image acquisition device.
6. A glasses online fitting system combined with gesture recognition, characterised in that it comprises:
a 3D face model construction module, used to receive face image information, locate the positions of the key facial landmarks, obtain the interpupillary distance and the facial feature points, and construct a 3D face model, wherein the key facial landmarks include the eye corners, the mouth corners and the nose tip;
a head pose estimation model construction module, used to build, from the facial feature points and the 3D face model, a head pose estimation model describing the deflection of the head in three-dimensional space relative to an image acquisition device;
a 3D glasses model automatic loading module, used to receive and recognize gesture information and, according to the association between gesture information and the pre-stored 3D glasses models, automatically load the 3D glasses model corresponding to the gesture information onto the eye region of the face in the head pose estimation model;
a 3D glasses model fitting module, used to adjust the frame width of the 3D glasses model according to the interpupillary distance, complete the online fitting, and output the 3D glasses model after the frame width adjustment.
7. The glasses online fitting system combined with gesture recognition according to claim 6, characterised in that it further comprises a rendering module, used to render the 3D glasses model.
8. The glasses online fitting system combined with gesture recognition according to claim 6, characterised in that the head pose estimation model construction module is further used to estimate, using the POSIT algorithm, the deflection of the head in three-dimensional space relative to the image acquisition device.
9. A glasses online fitting system combined with gesture recognition, characterised in that it comprises:
an image acquisition device, configured to capture face image information and send it to a server; and
the server, which is configured to:
preset the association between gesture information and pre-stored 3D glasses models;
receive the face image information, locate the positions of the key facial landmarks, obtain the interpupillary distance and the facial feature points, and construct a 3D face model, wherein the key facial landmarks include the eye corners, the mouth corners and the nose tip;
build, from the facial feature points and the 3D face model, a head pose estimation model describing the deflection of the head in three-dimensional space relative to the image acquisition device;
receive and recognize gesture information and, according to the association between gesture information and the pre-stored 3D glasses models, automatically load the 3D glasses model corresponding to the gesture information onto the eye region of the face in the head pose estimation model;
adjust the frame width of the 3D glasses model according to the interpupillary distance, complete the online fitting, and output the 3D glasses model after the frame width adjustment.
10. The glasses online fitting system combined with gesture recognition according to claim 9, characterised in that the server is further configured to render the 3D glasses model.
CN201610976517.3A 2016-11-03 2016-11-03 Glasses online adaption method and system combining hand gesture recognition Pending CN106570747A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610976517.3A CN106570747A (en) 2016-11-03 2016-11-03 Glasses online adaption method and system combining hand gesture recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610976517.3A CN106570747A (en) 2016-11-03 2016-11-03 Glasses online adaption method and system combining hand gesture recognition

Publications (1)

Publication Number Publication Date
CN106570747A true CN106570747A (en) 2017-04-19

Family

ID=58540465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610976517.3A Pending CN106570747A (en) 2016-11-03 2016-11-03 Glasses online adaption method and system combining hand gesture recognition

Country Status (1)

Country Link
CN (1) CN106570747A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103177269A (en) * 2011-12-23 2013-06-26 北京三星通信技术研究有限公司 Equipment and method used for estimating object posture
CN105404861A (en) * 2015-11-13 2016-03-16 中国科学院重庆绿色智能技术研究院 Training and detecting methods and systems for key human facial feature point detection model
CN105809507A (en) * 2016-02-29 2016-07-27 北京酷配科技有限公司 Virtualized wearing method and virtualized wearing apparatus

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107103513A (en) * 2017-04-23 2017-08-29 广州帕克西软件开发有限公司 A kind of virtual try-in method of glasses
CN107103513B (en) * 2017-04-23 2020-12-29 广州帕克西软件开发有限公司 Virtual try-on method for glasses
CN107085864A (en) * 2017-06-01 2017-08-22 北京大学第三医院 The glasses model building device and method of distinguished point based, manufacturing glasses method and glasses
CN107085864B (en) * 2017-06-01 2023-07-25 北京大学第三医院 Glasses modeling device and method based on feature points, glasses manufacturing method and glasses
CN110533775A (en) * 2019-09-18 2019-12-03 广州智美科技有限公司 A kind of glasses matching process, device and terminal based on 3D face
CN110533775B (en) * 2019-09-18 2023-04-18 广州智美科技有限公司 Glasses matching method and device based on 3D face and terminal
WO2021115298A1 (en) * 2019-12-12 2021-06-17 左忠斌 Glasses matching design device
CN112258280A (en) * 2020-10-22 2021-01-22 恒信东方文化股份有限公司 Method and system for extracting multi-angle head portrait to generate display video

Similar Documents

Publication Publication Date Title
AU2018214005B2 (en) Systems and methods for generating a 3-D model of a virtual try-on product
US9990780B2 (en) Using computed facial feature points to position a product model relative to a model of a face
CN104809638B (en) A kind of virtual try-in method of glasses based on mobile terminal and system
US9697635B2 (en) Generating an avatar from real time image data
CN104978548B (en) A kind of gaze estimation method and device based on three-dimensional active shape model
US10665022B2 (en) Augmented reality display system for overlaying apparel and fitness information
US9842246B2 (en) Fitting glasses frames to a user
CN106570747A (en) Glasses online adaption method and system combining hand gesture recognition
CN105210093B (en) Apparatus, system and method for capturing and displaying appearance
CN106529409B (en) A kind of eye gaze visual angle measuring method based on head pose
Alnajar et al. Calibration-free gaze estimation using human gaze patterns
US9342877B2 (en) Scaling a three dimensional model using a reflection of a mobile device
TW202040348A (en) Virtual try-on systems and methods for spectacles
CN107392159A (en) A kind of facial focus detecting system and method
KR20180112756A (en) A head-mounted display having facial expression detection capability
US11170521B1 (en) Position estimation based on eye gaze
KR20160070744A (en) Method and system to create custom products
WO2001032074A1 (en) System for selecting and designing eyeglass frames
JP2023515517A (en) Fitting eyeglass frames including live fitting
Chen et al. 3D face reconstruction and gaze tracking in the HMD for virtual interaction
WO2015172229A1 (en) Virtual mirror systems and methods
CN110349269A (en) A kind of target wear try-in method and system
Huang et al. Vision-based virtual eyeglasses fitting system
CN113744411A (en) Image processing method and device, equipment and storage medium
CN107025628B (en) Virtual try-on method and device for 2.5D glasses

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170419