WO2024037722A1 - Devices, methods and computer programs for virtual eyeglasses try-on - Google Patents

Devices, methods and computer programs for virtual eyeglasses try-on

Info

Publication number
WO2024037722A1
Authority
WO
WIPO (PCT)
Prior art keywords
eyeglasses
face
virtual
user
lens
Prior art date
Application number
PCT/EP2022/073166
Other languages
English (en)
Inventor
Longchuan NIU
Mohammed-En-Nadhir ZIGHEM
Salah Eddine BEKHOUCHE
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/EP2022/073166
Publication of WO2024037722A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0475Generative networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Definitions

  • the present disclosure relates to the field of object detection, and, more particularly, to virtual eyeglasses try-on, and related devices, methods and computer programs.
  • virtual try-on applications of eyeglasses attempt to align eyeglasses to human faces using face detection and alignment algorithms.
  • existing virtual try-on applications of eyeglasses assume that the user is wearing zero-degree eyeglasses.
  • users often wear prescription eyeglasses which are eyeglasses with a prescribed lens degree.
  • a device for virtual eyeglasses try-on is provided.
  • the device is configured to obtain an input image comprising a face of a user.
  • the device is further configured to perform face detection for the obtained input image.
  • the device is further configured to obtain input information related to virtual eyeglasses.
  • the input information comprises a lens degree indication for the virtual eyeglasses.
  • the device is further configured to generate an output image comprising the face of the user detected by the performed face detection, the detected face wearing the virtual eyeglasses corresponding with the lens degree indication, such that at least an area including eyes of the detected face is modified in accordance with the lens degree indication.
  • the device is further configured to display the generated output image.
  • the present disclosure allows virtual eyeglasses try-on with an adjustable lens degree.
  • the lens degree indication included in the input information, based on which the virtual eyeglasses in the output image are generated, may be adjusted by the user. Furthermore, at least the eye area of the detected face is modifiable based on the lens degree indication, thus providing users with intuitive information about how they look when wearing various prescription eyeglasses.
  • the device is further configured to obtain the input information by detecting physical eyeglasses worn by the detected face in the obtained input image and estimating the lens degree indication based at least on an eye shape of the detected face.
  • This implementation form allows automatically obtaining the lens degree indication from the input image.
  • the device is further configured to obtain an auxiliary image comprising the face of the user without the physical eyeglasses, wherein the estimating of the lens degree indication is further based on an eye shape difference between the input image and the auxiliary image.
  • This implementation form allows estimating the lens degree indication with enhanced accuracy.
  • the device is further configured to perform the estimating of the lens degree indication via applying a neural network.
  • This implementation form allows estimating the lens degree indication with enhanced efficiency.
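As an illustration of this implementation form, below is a minimal sketch of a CNN-based lens degree regressor in PyTorch. The architecture, the 64x64 eye-crop input, and the single signed-diopter output are assumptions made for illustration; the disclosure does not specify a network topology.

```python
import torch
import torch.nn as nn

class LensDegreeCNN(nn.Module):
    """Toy CNN that regresses a signed diopter value from an eye-region crop."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # single signed lens degree (diopters)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Inference on a stand-in eye-region tensor (batch of 1, 3x64x64, normalized).
model = LensDegreeCNN().eval()
eye_crop = torch.rand(1, 3, 64, 64)
with torch.no_grad():
    degree = model(eye_crop).item()
print(f"estimated lens degree: {degree:+.2f} D")
```

In practice such a network would be trained on eye crops labeled with known prescriptions; untrained weights, as here, produce meaningless output.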
  • the device is further configured to obtain a confirmation or a correction for the estimated lens degree indication before the generation of the output image.
  • This implementation form allows estimating the lens degree indication with enhanced accuracy.
  • the device is further configured to remove the detected physical eyeglasses from the detected face before the generation of the output image.
  • This implementation form allows an enhanced user experience during virtual eyeglasses try-on, as the generated virtual eyeglasses rather than the physical eyeglasses are visible.
  • the device is further configured to remove the detected physical eyeglasses via applying an eyeglasses removal generative adversarial network.
  • This implementation form allows removing the detected physical eyeglasses with enhanced efficiency.
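A hedged sketch of how a pre-trained eyeglasses-removal generator could be invoked at inference time follows. The checkpoint file name, the TorchScript packaging, and the 256x256 input convention are assumptions; only the idea of mapping a bespectacled face crop to a glasses-free one follows the ERGAN approach named above.

```python
import torch

# Assumption: the eyeglasses-removal generator (e.g., an ERGAN generator)
# was exported to TorchScript beforehand; the file name is hypothetical.
generator = torch.jit.load("ergan_generator.pt").eval()

def remove_eyeglasses(face_tensor: torch.Tensor) -> torch.Tensor:
    """Map a 1x3x256x256 face crop with glasses (values in [-1, 1])
    to the same face without glasses."""
    with torch.no_grad():
        return generator(face_tensor)
```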
  • the device is further configured to obtain the input information comprising the lens degree indication from user input.
  • This implementation form allows the user to manually enter the lens degree indication.
  • the device is further configured to obtain at least one visual virtual eyeglasses parameter.
  • the output image is further generated such that the virtual eyeglasses worn by the detected face are modified in accordance with the obtained at least one visual virtual eyeglasses parameter.
  • This implementation form allows an enhanced user experience during virtual eyeglasses try-on, as the user is able to change various parameters affecting the visual appearance of the virtual eyeglasses.
  • the obtained at least one visual virtual eyeglasses parameter comprises at least one of an eyeglasses type, a color of a lens, a reflective index of the lens, or a light-sensitivity of the lens.
  • This implementation form allows an enhanced user experience during virtual eyeglasses try-on, as the user is able to change various parameters affecting the visual appearance of the virtual eyeglasses.
  • the device is further configured to obtain at least one visual face parameter.
  • the output image is further generated such that the detected face is modified in accordance with the obtained at least one visual face parameter.
  • the obtained input information further comprises at least one of a distance between pupils or a degree of astigmatism.
  • This implementation form allows estimating the lens degree indication with enhanced accuracy and generating an output image with a more realistic visual appearance of the face of the user.
  • the device is further configured to generate and display a comparison image in addition to the generated output image.
  • the comparison image comprises the detected face without the virtual eyeglasses.
  • a method for virtual eyeglasses try-on comprises obtaining, by a device for virtual eyeglasses try-on, an input image comprising a face of a user.
  • the method further comprises performing, by the device, face detection for the obtained input image.
  • the method further comprises obtaining, by the device, input information related to virtual eyeglasses.
  • the input information comprises a lens degree indication for the virtual eyeglasses.
  • the method further comprises generating, by the device, an output image comprising the face of the user detected by the performed face detection, the detected face wearing the virtual eyeglasses corresponding with the lens degree indication, such that at least an area including eyes of the detected face is modified in accordance with the lens degree indication.
  • the method further comprises displaying, by the device, the generated output image.
  • the present disclosure allows virtual eyeglasses try-on with an adjustable lens degree.
  • the lens degree indication included in the input information based on which the virtual eyeglasses in the output image are generated may be adjusted by the user.
  • at least the eye area of the detected face is modifiable based on the lens degree indication, thus providing users with intuitive information about how they look when wearing various prescription eyeglasses.
  • the obtaining of the input information comprises detecting, by the device, physical eyeglasses worn by the detected face in the obtained input image and estimating, by the device, the lens degree indication based at least on an eye shape of the detected face.
  • This implementation form allows automatically obtaining the lens degree indication from the input image.
  • the method further comprises obtaining, by the device, an auxiliary image comprising the face of the user without the physical eyeglasses.
  • the estimating of the lens degree indication is further based on an eye shape difference between the input image and the auxiliary image. This implementation form allows estimating the lens degree indication with enhanced accuracy.
  • the estimating of the lens degree indication is performed, by the device, via applying a neural network.
  • This implementation form allows estimating the lens degree indication with enhanced efficiency.
  • the method further comprises obtaining, by the device, a confirmation or a correction for the estimated lens degree indication before the generation of the output image.
  • This implementation form allows estimating the lens degree indication with enhanced accuracy.
  • the method further comprises removing, by the device, the detected physical eyeglasses from the detected face before the generation of the output image.
  • This implementation form allows an enhanced user experience during virtual eyeglasses try-on, as the generated virtual eyeglasses rather than the physical eyeglasses are visible.
  • the removing of the detected physical eyeglasses is performed, by the device, via applying an eyeglasses removal generative adversarial network.
  • This implementation form allows removing the detected physical eyeglasses with enhanced efficiency.
  • the input information comprising the lens degree indication is obtained, by the device, from user input. This implementation form allows the user to manually enter the lens degree indication.
  • the method further comprises obtaining, by the device, at least one visual virtual eyeglasses parameter.
  • the output image is further generated such that the virtual eyeglasses worn by the detected face are modified in accordance with the obtained at least one visual virtual eyeglasses parameter.
  • the obtained at least one visual virtual eyeglasses parameter comprises at least one of an eyeglasses type, a color of a lens, a reflective index of the lens, or a light-sensitivity of the lens.
  • the method further comprises obtaining, by the device, at least one visual face parameter.
  • the output image is further generated such that the detected face is modified in accordance with the obtained at least one visual face parameter.
  • the obtained input information further comprises at least one of a distance between pupils or a degree of astigmatism.
  • This implementation form allows estimating the lens degree indication with enhanced accuracy and generating an output image with a more realistic visual appearance of the face of the user.
  • the method further comprises generating and displaying, by the device, a comparison image in addition to the generated output image.
  • the comparison image comprises the detected face without the virtual eyeglasses.
  • a computer program product comprises program code configured to perform a method according to the second aspect, when the program code is executed on a computer.
  • the present disclosure allows virtual eyeglasses try-on with an adjustable lens degree.
  • the lens degree indication included in the input information based on which the virtual eyeglasses in the output image are generated may be adjusted by the user.
  • at least the eye area of the detected face is modifiable based on the lens degree indication, thus providing users with intuitive information about how they look when wearing various prescription eyeglasses.
  • Fig. 1 includes diagrams illustrating eyeglasses with a variety of lens degrees;
  • Fig. 2 is a block diagram illustrating a device for virtual eyeglasses try-on;
  • Fig. 3 is a diagram illustrating a general overview of virtual eyeglasses try-on with an adjustable lens degree;
  • Fig. 4 is a diagram illustrating an augmented reality-based eyeglasses try-on block of Fig. 3 in more detail;
  • Fig. 5 is a diagram illustrating a first embodiment;
  • Fig. 6 is a diagram further illustrating the first embodiment;
  • Fig. 7 is a diagram illustrating a second embodiment;
  • Fig. 8 is a diagram further illustrating the second embodiment;
  • Fig. 9A is a flow chart illustrating a method; and
  • Fig. 9B is a flow chart illustrating another method.
  • a disclosure in connection with a described method may also hold true for a corresponding device or system configured to perform the method and vice versa.
  • a corresponding device may include a unit to perform the described method step, even if such unit is not explicitly described or illustrated in the figures.
  • a corresponding method may include a step performing the described functionality, even if such step is not explicitly described or illustrated in the figures.
  • Eyeglasses may change the appearance of a user wearing them depending on their lens degree. In particular, the eye shape and/or an area including the eyes may be changed.
  • Fig. 1 includes diagrams illustrating this.
  • Diagram 100A illustrates a user wearing eyeglasses with farsighted lenses
  • diagram 100B illustrates a user wearing eyeglasses with zero-degree lenses
  • diagram 100C illustrates a user wearing eyeglasses with nearsighted lenses. As can be seen, the eye shape differs in each of diagrams 100A to 100C.
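The eye-size change visible in Fig. 1 follows from basic spectacle optics: to a first approximation, a lens of power P diopters worn at vertex distance d meters magnifies the eye region by M = 1 / (1 - d * P), so minus (nearsighted) lenses make the eyes look smaller and plus (farsighted) lenses make them look larger. The sketch below applies this approximation; the default 12 mm vertex distance is an assumption.

```python
def spectacle_magnification(power_diopters: float,
                            vertex_distance_m: float = 0.012) -> float:
    """Power-factor approximation M = 1 / (1 - d * P) for the apparent
    magnification of the eye region seen through a thin spectacle lens."""
    return 1.0 / (1.0 - vertex_distance_m * power_diopters)

print(spectacle_magnification(-4.0))  # ~0.95: eyes appear ~5% smaller
print(spectacle_magnification(+4.0))  # ~1.05: eyes appear ~5% larger
```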
  • At least some embodiments of the present disclosure may provide virtual try-on of eyeglasses for users with prescription lenses.
  • Lens degree may be specified, e.g., via user input (e.g., via a slide bar), or via estimation performed, e.g., by a pre-trained neural network, such as a convolutional neural network (CNN)-based pre-trained neural network.
  • At least some embodiments of the present disclosure may allow easy integration, e.g., into a website or a mobile application.
  • At least some embodiments of the present disclosure may be executed in real time, thereby functioning as a virtual mirror showing the user’s face wearing virtual eyeglasses.
  • At least some embodiments of the present disclosure may allow removal of physical eyeglasses from an output image. That is, the user may wear the physical eyeglasses, which are replaced with selected virtual eyeglasses in the output image, without the user having to actually take off the physical eyeglasses.
  • At least some embodiments of the present disclosure may allow photorealistic eye shape generation in accordance with lens degrees, from minus to plus, with or without eyeglasses.
  • Diagram 300 of Fig. 3 illustrates a general overview of the disclosed virtual eyeglasses try-on with an adjustable lens degree.
  • the lens degree may be given by a user at block 304 (using, e.g., a slide bar provided by a suitable user interface) or evaluated by, e.g., an algorithm at blocks 302, 303 from an input image obtained at block 301.
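A minimal sketch of such a slide-bar input is shown below using tkinter; the +/-10 D range and 0.25 D step are assumptions, not values from the disclosure.

```python
import tkinter as tk

root = tk.Tk()
root.title("Virtual try-on: lens degree")
degree = tk.DoubleVar(value=0.0)

# Slide bar standing in for block 304: the user drags it to pick a degree.
tk.Scale(root, variable=degree, from_=-10.0, to=10.0, resolution=0.25,
         orient=tk.HORIZONTAL, length=300,
         label="Lens degree (diopters)").pack(padx=10, pady=5)
tk.Button(root, text="Try on",
          command=lambda: print(f"selected: {degree.get():+.2f} D")).pack(pady=5)
root.mainloop()
```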
  • the physical eyeglasses may be removed to be replaced with virtual eyeglasses.
  • the lens degree may be estimated, e.g., by comparing the user’s eye shape with and without the physical eyeglasses.
  • the user may provide additional input, such as visual virtual eyeglasses parameters and/or visual face parameters described in more detail below.
  • an output image may be generated.
  • the generation of the output image is described in more detail below in connection with Fig. 2.
  • an implementation example of block 307 is described in detail below in connection with Fig. 4.
  • the generated output image may be displayed to the user.
  • Fig. 2 is a block diagram illustrating the device 200 for virtual eyeglasses try-on, according to an embodiment of the present disclosure.
  • the device 200 may comprise any of various types of client devices used directly by an end user entity and capable of communication in a wireless network, such as user equipment (UE).
  • client devices include but are not limited to smartphones, tablet computers, smart watches, laptop computers, internet-of-things (IoT) devices, etc.
  • the device 200 may comprise a server device (e.g. a server device providing one or more cloud services) communicatively connected (e.g., wirelessly) to a client device.
  • the device 200 comprises at least one processor or a processing unit 202 and at least one memory 204 coupled to the at least one processor 202, which may be used to implement the functionalities described later in more detail.
  • the device 200 may further comprise a digital camera 206.
  • the camera 206 may be configured, e.g., to capture input images and/or auxiliary images described later in more detail.
  • the camera 206 may comprise, e.g., a front-facing camera or a rear-facing camera.
  • the device 200 may further comprise a display 208 configured to, e.g., display output images and/or prompts to a user, described later in more detail.
  • the display 208 may comprise a touch screen.
  • the device 200 may also include other elements, such as a transceiver configured to enable the device 200 to transmit and/or receive information to/from other devices, and/or input means (such as a physical or virtual keyboard), as well as other elements not shown in Fig. 2.
  • the device 200 may use the transceiver to transmit or receive signalling information and data in accordance with at least one cellular communication protocol.
  • the transceiver may be configured to provide at least one wireless radio connection, such as for example a 3GPP mobile broadband connection (e.g., 5G).
  • the transceiver may comprise, or be configured to be coupled to, at least one antenna to transmit and/or receive radio frequency signals.
  • the at least one processor 202 may include, e.g., one or more of various processing devices, such as a co-processor, a microprocessor, a controller, a digital signal processor (DSP), a processing circuitry with or without an accompanying DSP, or various other processing devices including integrated circuits such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like.
  • the memory 204 may be configured to store e.g. computer programs and the like.
  • the memory may include one or more volatile memory devices, one or more non-volatile memory devices, and/or a combination of one or more volatile memory devices and non-volatile memory devices.
  • the memory 204 may be embodied as semiconductor memories (such as mask ROM, PROM (programmable ROM), EPROM (erasable PROM), flash ROM, RAM (random access memory), etc.).
  • the device 200 is configured to obtain an input image comprising a face of a user.
  • the input image may comprise, for example, a still image or an image frame of a video stream, e.g. captured by the camera 206.
  • the device 200 is further configured to perform face detection for the obtained input image.
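As a concrete illustration of the image-acquisition and face-detection steps just described, the sketch below captures one frame from a camera and runs face detection with OpenCV's bundled Haar cascade. The disclosure does not prescribe a particular detector; the cascade is simply a readily available stand-in.

```python
import cv2

# Capture a single input frame (index 0 is typically the default camera).
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("could not read a frame from the camera")

# Detect faces with the Haar cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"detected {len(faces)} face(s)")
```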
  • the device 200 is further configured to obtain input information related to virtual eyeglasses.
  • the input information comprises a lens degree indication for the virtual eyeglasses.
  • the obtained input information may further comprise a distance between pupils and/or a degree of astigmatism.
  • the lens degree indication may include one or more values in an eyeglass prescription of the user, such as one or more diopter related values.
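One possible shape for such input information is sketched below as a plain data container; the field set (sphere, cylinder, axis, pupil distance) mirrors a typical eyeglass prescription and is an illustrative assumption rather than the disclosure's data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LensDegreeIndication:
    """Illustrative container for prescription-derived input information."""
    sphere_d: float                      # signed diopters, e.g. -2.25
    cylinder_d: Optional[float] = None   # astigmatism correction, if any
    axis_deg: Optional[int] = None       # astigmatism axis, 0-180 degrees
    pupil_distance_mm: Optional[float] = None

rx = LensDegreeIndication(sphere_d=-2.25, cylinder_d=-0.5,
                          axis_deg=170, pupil_distance_mm=63.0)
print(rx)
```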
  • the device 200 is further configured to generate an output image.
  • the output image comprises the face of the user detected by the performed face detection, the detected face wearing the virtual eyeglasses corresponding with the lens degree indication, such that at least an area including eyes of the detected face is modified in accordance with the lens degree indication.
  • the device 200 is further configured to display the generated output image, e.g., on the display 208.
  • the device 200 may be further configured to obtain the input information by detecting physical eyeglasses worn by the detected face in the obtained input image and estimating the lens degree indication based at least on an eye shape of the detected face.
  • the device 200 may be further configured to perform the estimating of the lens degree indication via applying a neural network.
  • the device 200 may be further configured to display guidance information in response to failing to detect the physical eyeglasses.
  • the guidance information may comprise a prompt to make sure the device 200 is stable and/or to make sure the face is in view of the camera 206.
  • the device 200 may be further configured to obtain an auxiliary image comprising the face of the user without the physical eyeglasses.
  • the estimating of the lens degree indication may further be based on an eye shape difference between the input image and the auxiliary image.
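A crude sketch of such an eye-shape comparison follows: detected eye-box widths serve as a proxy for apparent eye size, and the magnification approximation M = 1 / (1 - d * P) from earlier is inverted to recover a degree estimate. Both the proxy and the fixed vertex distance are strong simplifying assumptions.

```python
import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def mean_eye_width(image_bgr) -> float:
    """Average detected eye-box width: a rough proxy for apparent eye size."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) == 0:
        raise ValueError("no eyes detected")
    return sum(w for (_x, _y, w, _h) in eyes) / len(eyes)

def estimate_lens_degree(with_glasses, without_glasses,
                         vertex_distance_m: float = 0.012) -> float:
    """Invert M = 1 / (1 - d * P): P = (1 - 1/M) / d."""
    m = mean_eye_width(with_glasses) / mean_eye_width(without_glasses)
    return (1.0 - 1.0 / m) / vertex_distance_m
```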
  • the device 200 may be further configured to obtain a confirmation or a correction for the estimated lens degree indication before the generation of the output image.
  • the device 200 may be further configured to remove the detected physical eyeglasses from the detected face before the generation of the output image.
  • the device 200 may be further configured to remove the detected physical eyeglasses via applying an eyeglasses removal generative adversarial network (ERGAN).
  • the device 200 may be further configured to obtain the input information comprising the lens degree indication from user input (e.g., via a text entry box or a slide bar).
  • the user may be prompted to take off the physical eyeglasses (e.g., to allow the display of the virtual eyeglasses on the user’s face without the physical eyeglasses), and/or to wear the physical eyeglasses (e.g., to allow the lens degree estimation of the first embodiment).
  • the device 200 may be further configured to obtain at least one visual virtual eyeglasses parameter.
  • the output image may further be generated such that the virtual eyeglasses worn by the detected face are modified in accordance with the obtained at least one visual virtual eyeglasses parameter.
  • the obtained at least one visual virtual eyeglasses parameter may comprise an eyeglasses type, a color of a lens, a reflective index of the lens, and/or a light-sensitivity of the lens.
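For instance, a lens color parameter could be applied by alpha-blending a tint over the lens region, as in the sketch below; the parameter names and the simple blend are illustrative stand-ins for the visual parameters listed above.

```python
import cv2
import numpy as np

def apply_lens_tint(frame, lens_box, tint_bgr=(180, 120, 60), opacity=0.25):
    """Blend a colored overlay over one lens region of the output image."""
    x, y, w, h = lens_box
    roi = frame[y:y + h, x:x + w]
    overlay = np.full_like(roi, tint_bgr)
    frame[y:y + h, x:x + w] = cv2.addWeighted(roi, 1.0 - opacity,
                                              overlay, opacity, 0)
    return frame
```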
  • the device 200 may be further configured to obtain at least one visual face parameter.
  • the output image may further be generated such that the detected face is modified in accordance with the obtained at least one visual face parameter.
  • the obtained at least one visual face parameter may comprise eye color, hair color, and/or skin tone.
  • the device 200 may be further configured to generate and display a comparison image in addition to the generated output image, e.g., simultaneously or successively.
  • the comparison image may comprise the detected face without the virtual eyeglasses.
  • Diagram 500 of Fig. 5 and diagram 600 of Fig. 6 further illustrate the first embodiment.
  • the first embodiment allows a virtual eyeglasses try-on with an adjustable lens degree, e.g., when the user is wearing physical eyeglasses, as shown at 200A, and wants to find out how he/she looks wearing various virtual eyeglasses, as shown at 200B.
  • the lens degree will be estimated and used to generate the user’s face with the virtual eyeglasses.
  • various virtual eyeglasses may be selected/switched, block 501, and the lens degree estimate provided by the lens degree estimation may be edited, block 502.
  • a face detector may be applied to an input image or video of the user.
  • the device 200 may be further configured to prompt the user to enter the lens degree, thereby triggering a switch to the second embodiment.
  • the device 200 may be further configured to display guidance information in response to failing to detect the physical eyeglasses.
  • the guidance information may comprise a prompt to make sure the device 200 is stable and/or to make sure the face is in view of the camera 206.
  • the eyeglasses detector 601 may be launched (e.g., for each detected face in the input image/video) to check if the user(s) is/are already wearing eyeglasses or not.
  • the lens degree estimation 602 may be performed, e.g., using a trained CNN architecture.
  • the estimated lens degree may be confirmed by the user. Additionally, the user may edit the estimated lens degree value, e.g., to correspond with an eyeglasses prescription, block 603.
  • eyeglasses removal 604 may be performed, e.g., by using a trained ERGAN to generate the user’s face without glasses.
  • the ERGAN output, the lens degree value(s), and/or the eyeglasses style or the like may be input to the AR glasses degree block 606 to generate the user’s face with the virtual eyeglasses.
  • Diagram 700 of Fig. 7 and diagram 800 of Fig. 8 further illustrate the second embodiment.
  • the second embodiment allows a virtual eyeglasses try-on with an adjustable lens degree, e.g., when the user is not wearing physical eyeglasses, as shown at 200C, and wants to find out how he/she looks wearing various virtual eyeglasses, as shown at 200D.
  • the lens degree will be obtained from user input and used to generate the user’s face with the virtual eyeglasses.
  • various virtual eyeglasses may be selected/switched, block 701.
  • An objective of the second embodiment is to provide a good virtual eyeglasses try-on experience with manual lens degree input.
  • a face detector may be applied to an input image or video of the user.
  • the device 200 may be further configured to display guidance information in response to failing to detect the physical eyeglasses.
  • the guidance information may comprise a prompt to make sure the device 200 is stable and/or to make sure the face is in view of the camera 206.
  • the eyeglasses detector 801 may be launched (e.g., for each detected face in the input image/video) to check if the user(s) is/are already wearing eyeglasses or not.
  • the user may be prompted to manually enter the lens degree, block 802.
  • the input image or video, the entered lens degree value(s), and/or the eyeglasses style or the like (block 803) may be input to the AR glasses degree block 804 to generate the user’s face with the virtual eyeglasses.
  • Fig. 4 illustrates the AR-based eyeglasses try-on block 307 of Fig. 3 in more detail.
  • the block 307 may take three inputs: the user’s face image 401, the lens degree 402 of the user, and the eyeglasses model 403.
  • face and facial key point detection may be performed for the images with faces.
  • the face region inside the eyeglasses frame may be scaled in accordance with the lens degree (either plus or minus) for each detected face.
  • the face region within the eyeglasses frame inner contour may be cropped.
  • estimation of the eye pose may be performed.
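Putting the scaling and cropping steps above together, the sketch below rescales the region inside a lens box by the magnification corresponding to the lens degree and pastes it back at the original size. The axis-aligned box is a simplification of the frame's inner contour.

```python
import cv2

def rescale_eye_region(frame, eye_box, magnification: float):
    """Scale the crop inside `eye_box` by `magnification` and paste it back,
    keeping the output the same size as the original box."""
    x, y, w, h = eye_box
    crop = frame[y:y + h, x:x + w]
    nw = max(1, int(round(w * magnification)))
    nh = max(1, int(round(h * magnification)))
    scaled = cv2.resize(crop, (nw, nh), interpolation=cv2.INTER_LINEAR)
    if magnification >= 1.0:
        # Plus lens: magnified view, center-crop back to the box size.
        ox, oy = (nw - w) // 2, (nh - h) // 2
        out = scaled[oy:oy + h, ox:ox + w]
    else:
        # Minus lens: minified view, pad edges with replicated pixels.
        px, py = (w - nw + 1) // 2, (h - nh + 1) // 2
        out = cv2.copyMakeBorder(scaled, py, h - nh - py, px, w - nw - px,
                                 cv2.BORDER_REPLICATE)
    frame[y:y + h, x:x + w] = out
    return frame
```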
  • a three-dimensional (3D) face transform module may handle face pose estimation, eye point alignment, and 3D eyeglasses rendering, e.g., using physically based rendering (PBR) with image-based lighting (IBL).
  • Fig. 9A is a flow chart illustrating a method 900A for virtual eyeglasses try-on, according to an embodiment of the present disclosure.
  • the device 200 for virtual eyeglasses try-on obtains the input image comprising the face of the user.
  • the user may be wearing physical eyeglasses in the obtained input image.
  • the user may be prompted to wear the physical eyeglasses before operation 901A, e.g., via a user interface of the device 200.
  • the device 200 may obtain the at least one visual virtual eyeglasses parameter.
  • the device 200 may obtain the at least one visual face parameter.
  • the device 200 performs the face detection for the obtained input image.
  • the device 200 obtains the input information related to the virtual eyeglasses.
  • the input information comprises a lens degree indication for the virtual eyeglasses.
  • the device 200 may obtain the auxiliary image comprising the face of the user without the physical eyeglasses. At least in some embodiments, the user may be prompted to take off the physical eyeglasses before operation 905A, e.g., via the user interface of the device 200.
  • the device 200 may detect the physical eyeglasses worn by the detected face in the obtained input image and estimate, at optional operation 905A3, the lens degree indication based at least on the eye shape of the detected face.
  • the device 200 may obtain the confirmation or correction for the estimated lens degree indication.
  • the device 200 may remove the detected physical eyeglasses from the detected face.
  • the device 200 generates the output image comprising the face of the user detected by the performed face detection, the detected face wearing the virtual eyeglasses corresponding with the lens degree indication, such that at least the area including eyes of the detected face is modified in accordance with the lens degree indication.
  • the device 200 displays the generated output image.
  • the device 200 may generate and display the comparison image in addition to the generated output image.
  • the comparison image comprises the detected face without the virtual eyeglasses.
  • the method 900A may be performed by the device 200.
  • the operations 901A-908A can, for example, be performed by the at least one processor 202 and the memory 204. Further features of the method 900A directly result from the functionalities and parameters of the device 200 and thus are not repeated here.
  • the method 900A can be performed by computer programs.
  • the operations 901A-908A of Fig. 9A may be carried out in any suitable order, or simultaneously where appropriate.
  • operation 903 may be carried out after operation 905A1.
  • operation 905A1 may be carried out before operation 901A.
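Tying the operations of Fig. 9A together, the skeleton below shows one possible ordering of the flow. Every helper is a stand-in for a block sketched earlier in this document; the trivial stub bodies exist only to keep the example executable and are not the disclosure's API.

```python
def detect_face(img):                         # face detection stub
    return (50, 50, 200, 200)

def glasses_detected(img, box):               # eyeglasses detector stub
    return True

def estimate_degree(img, aux_img):            # CNN lens degree estimator stub
    return -2.25

def confirm_or_correct(degree):               # user confirmation/edit stub
    return degree

def remove_glasses(img, box):                 # ERGAN removal stub
    return img

def render_try_on(img, box, degree, frames):  # AR rendering stub
    return img

def try_on_method_900a(input_image, auxiliary_image, frames_model):
    box = detect_face(input_image)
    if glasses_detected(input_image, box):
        degree = confirm_or_correct(
            estimate_degree(input_image, auxiliary_image))
        base = remove_glasses(input_image, box)
    else:
        # No physical glasses found: fall back to the manual path of Fig. 9B.
        degree = float(input("lens degree (D): "))
        base = input_image
    return render_try_on(base, box, degree, frames_model)

output_image = try_on_method_900a("input.png", "aux.png", "frame-01")
```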
  • Fig. 9B is a flow chart illustrating a method 900B for virtual eyeglasses try-on, according to an embodiment of the present disclosure.
  • the device 200 for virtual eyeglasses try-on obtains the input image comprising the face of the user.
  • the user may be without physical eyeglasses in the obtained input image.
  • the device 200 may obtain the at least one visual virtual eyeglasses parameter.
  • the device 200 may obtain the at least one visual face parameter.
  • the device 200 performs the face detection for the obtained input image.
  • the device 200 obtains the input information related to the virtual eyeglasses.
  • the input information comprises a lens degree indication for the virtual eyeglasses. More specifically, at optional operation 905B, the device 200 may obtain the input information comprising the lens degree indication from user input.
  • the device 200 generates the output image comprising the face of the user detected by the performed face detection, the detected face wearing the virtual eyeglasses corresponding with the lens degree indication, such that at least the area including eyes of the detected face is modified in accordance with the lens degree indication.
  • the device 200 displays the generated output image.
  • the device 200 may generate and display the comparison image in addition to the generated output image.
  • the comparison image comprises the detected face without the virtual eyeglasses.
  • the method 900B may be performed by the device 200.
  • the operations 901B-908B can, for example, be performed by the at least one processor 202 and the memory 204. Further features of the method 900B directly result from the functionalities and parameters of the device 200 and thus are not repeated here.
  • the method 900B can be performed by computer programs.
  • the operations 901B-908B of Fig. 9B may be carried out in any suitable order, or simultaneously where appropriate.
  • operation 903 may be carried out after operation 905B.
  • operation 905B may be carried out before operation 901B.
  • the device 200 may comprise means for performing at least one method described herein.
  • the means may comprise the at least one processor 202, and the at least one memory 204 including program code configured to, when executed by the at least one processor, cause the device 200 to perform the method.
  • the functionality described herein can be performed, at least in part, by one or more computer program product components such as software components.
  • the device 200 may comprise a processor or processor circuitry, such as for example a microcontroller, configured by the program code when executed to execute the embodiments of the operations and functionality described.
  • the functionality described herein can be performed, at least in part, by one or more hardware logic components.
  • illustrative types of hardware logic components include Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on-a-Chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), and Graphics Processing Units (GPUs).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Hardware Design (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Devices, methods and computer programs for virtual eyeglasses try-on are disclosed. The invention enables virtual eyeglasses try-on with an adjustable lens degree. A lens degree indication included in input information, on the basis of which the virtual eyeglasses in an output image are generated, can be adjusted by a user. Furthermore, at least an eye area of a detected face can be modified on the basis of the lens degree indication, thereby providing users with intuitive information about how they look when wearing various prescription eyeglasses.
PCT/EP2022/073166 2022-08-19 2022-08-19 Devices, methods and computer programs for virtual eyeglasses try-on WO2024037722A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2022/073166 WO2024037722A1 (fr) 2022-08-19 2022-08-19 Devices, methods and computer programs for virtual eyeglasses try-on

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2022/073166 WO2024037722A1 (fr) 2022-08-19 2022-08-19 Devices, methods and computer programs for virtual eyeglasses try-on

Publications (1)

Publication Number Publication Date
WO2024037722A1 (fr)

Family

ID=83270834

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2022/073166 WO2024037722A1 (fr) 2022-08-19 2022-08-19 Devices, methods and computer programs for virtual eyeglasses try-on

Country Status (1)

Country Link
WO (1) WO2024037722A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018005884A1 (fr) * 2016-06-29 2018-01-04 EyesMatch Ltd. System and method for a digital makeup mirror
US20220230300A1 (en) * 2019-08-02 2022-07-21 Genentech, Inc. Using Deep Learning to Process Images of the Eye to Predict Visual Acuity

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018005884A1 (fr) * 2016-06-29 2018-01-04 EyesMatch Ltd. System and method for a digital makeup mirror
US20220230300A1 (en) * 2019-08-02 2022-07-21 Genentech, Inc. Using Deep Learning to Process Images of the Eye to Predict Visual Acuity

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
JUNFENG LYU ET AL: "Portrait Eyeglasses and Shadow Removal by Leveraging 3D Synthetic Data", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 20 March 2022 (2022-03-20), XP091181419 *
LUTZ SEBASTIAN ET AL: "Deep Convolutional Neural Networks for estimating lens distortion parameters", 12 August 2019 (2019-08-12), pages 1 - 8, XP093045233, Retrieved from the Internet <URL:https://www.researchgate.net/profile/Sebastian-Lutz-3/publication/335126536_Deep_Convolutional_Neural_Networks_for_estimating_lens_distortion_parameters/links/5d518870299bf1995b79c30b/Deep-Convolutional-Neural-Networks-for-estimating-lens-distortion-parameters.pdf> [retrieved on 20230509] *
MARELLI DAVIDE ET AL: "Designing an AI-Based Virtual Try-On Web Application", SENSORS, vol. 22, no. 10, 18 May 2022 (2022-05-18), pages 3832, XP093045239, DOI: 10.3390/s22103832 *
ROY DEBAPRIYA ET AL: "An Unsupervised Approach towards Varying Human Skin Tone Using Generative Adversarial Networks", 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), IEEE, 10 January 2021 (2021-01-10), pages 10681 - 10688, XP033908828, DOI: 10.1109/ICPR48806.2021.9412852 *
SOARES BORGES ALINE DE FATIMA ET AL: "A Virtual Makeup Augmented Reality System", 2019 21ST SYMPOSIUM ON VIRTUAL AND AUGMENTED REALITY (SVR), IEEE, 28 October 2019 (2019-10-28), pages 34 - 42, XP033645016, DOI: 10.1109/SVR.2019.00022 *
ZHANG QIAN ET AL: "A Virtual Try-On System for Prescription Eyeglasses", IEEE COMPUTER GRAPHICS AND APPLICATIONS, vol. 37, no. 4, 31 August 2017 (2017-08-31), pages 84 - 93, XP011659059, ISSN: 0272-1716, [retrieved on 20170818], DOI: 10.1109/MCG.2017.3271458 *

Similar Documents

Publication Publication Date Title
AU2019419376B2 (en) Virtual try-on systems and methods for spectacles
AU2018214005B2 (en) Systems and methods for generating a 3-D model of a virtual try-on product
  • KR102204810B1 (ko) Method, device and computer program for virtually adjusting an eyeglasses frame
  • Plopski et al. Corneal-imaging calibration for optical see-through head-mounted displays
  • CN106682632B (zh) Method and device for processing face images
  • US9342877B2 Scaling a three dimensional model using a reflection of a mobile device
  • EP3101624A1 Image processing method and image processing device
  • EP3243162A1 Gaze detection offset for gaze tracking models
  • JP2017514193A (ja) 3D image analysis device for determining a gaze direction
  • CN113366491B (zh) Eye tracking method, device and storage medium
  • KR20100050052A (ko) Method for virtually wearing eyeglasses
  • US11960146B2 Fitting of glasses frames including live fitting
  • WO2018119938A1 (fr) Image processing method and device
  • KR20170071967A (ko) Eyeglasses recommendation method for an online eyeglasses sales system
  • CN110503068A (zh) Gaze estimation method, terminal, and storage medium
  • CN108573192B (zh) Eyeglasses try-on method and device for matching a human face
  • US20220277512A1 Generation apparatus, generation method, system, and storage medium
  • CN107659772A (zh) 3D image generation method and device, and electronic device
  • Tang et al. Making 3D eyeglasses try-on practical
  • WO2024037722A1 (fr) Devices, methods and computer programs for virtual eyeglasses try-on
  • WO2022272230A1 (fr) Computationally efficient and robust ear saddle point detection
  • CN113744411A (zh) Image processing method and device, apparatus, and storage medium
  • CN104423038B (zh) Electronic device and focus information acquisition method thereof
US11798248B1 (en) Fitting virtual eyewear models on face models
US12008711B2 (en) Determining display gazability and placement of virtual try-on glasses using optometric measurements

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22768350

Country of ref document: EP

Kind code of ref document: A1