WO2023134269A1 - Display device, and virtual fitting system and method - Google Patents

Display device, and virtual fitting system and method

Info

Publication number
WO2023134269A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
image
display device
image data
data associated
Prior art date
Application number
PCT/CN2022/128392
Other languages
English (en)
Chinese (zh)
Inventor
黄玖法
Original Assignee
海信视像科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 海信视像科技股份有限公司
Publication of WO2023134269A1

Classifications

    • G06Q 30/06 Buying, selling or leasing transactions
    • G06Q 30/0601 Electronic shopping [e-shopping]
    • G06Q 30/0641 Shopping interfaces
    • G06Q 30/0643 Graphical representation of items or shoppers
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G06T 7/50 Depth or shape recovery (image analysis)
    • G06V 20/40 Scenes; scene-specific elements in video content
    • G06V 40/20 Recognition of biometric, human-related or animal-related patterns: movements or behaviour, e.g. gesture recognition
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Definitions

  • The present application relates to the technical field of intelligent display devices, and in particular to a display device, a virtual fitting system, and a virtual fitting method.
  • A virtual fitting system is a service platform that integrates hardware and software. It can build a virtual model through technologies such as augmented reality (AR), artificial intelligence (AI), and 3D vision, and generate a display picture based on the virtual model.
  • A virtual fitting system can achieve a natural 360-degree fit of virtual clothes and the effect of clothes moving with the wearer, and can be widely used in life scenarios such as online shopping and daily dressing.
  • A virtual fitting system can be built into smart devices such as smart terminals and display devices.
  • When the virtual fitting system is used, the smart device is controlled to run the virtual fitting application and image data such as photos are input; the system then forms a composite effect picture through image synthesis and displays it on the smart device to achieve the purpose of virtual fitting.
  • However, this virtual fitting method, which forms effect pictures through image synthesis, is only suitable for static image display: it cannot show the fitting effect in real time, nor can it show different wearing postures and viewing angles.
  • As a result, the final effect of this kind of virtual fitting system is poor, and its fidelity to a real try-on is low.
  • The present disclosure provides a display device, including: a display, a camera, a communicator, and a controller.
  • The display is configured to display images and/or user interfaces.
  • The camera is configured to collect image data in real time, and the image data includes images associated with the user.
  • The communicator is configured to establish a communication connection with a server; the server has a built-in model reconstruction application, which is used to generate a human body model according to the image data associated with the user.
  • The controller is connected to the display, the camera, and the communicator, and is configured to: obtain the image data associated with the user; send the image data to the server, so that the server generates a human body model from it and returns the model to the controller; add clothing materials to the human body model to synthesize a rendered model; and extract action parameters from the image data and adjust the pose of the rendered model according to the action parameters to render a fitting picture.
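  • As an illustration of the controller pipeline above (obtain image data → have the server build the body model → add clothing → pose the model), here is a minimal, self-contained Python sketch. All names and data shapes are hypothetical stand-ins; the disclosure does not define a programming interface.

```python
# Minimal sketch of the controller pipeline; every name here is a
# hypothetical stand-in, not an API from the disclosure.
from dataclasses import dataclass, field

@dataclass
class RenderedModel:
    body: dict                                 # body model built by the server
    clothing: str                              # clothing material added on-device
    pose: dict = field(default_factory=dict)   # current action parameters

def fitting_step(frame, body_model, clothing, extract_action):
    """One loop iteration: synthesize the rendered model and pose it."""
    model = RenderedModel(body=body_model, clothing=clothing)
    model.pose = extract_action(frame)         # e.g. skeleton key points
    return model

# Toy usage with a fake frame and a trivial action extractor.
frame = {"skeleton": {"left_wrist": (0.4, 0.7)}}
model = fitting_step(frame,
                     body_model={"height_m": 1.7},  # as if returned by the server
                     clothing="top_A",
                     extract_action=lambda f: f["skeleton"])
print(model.pose)   # {'left_wrist': (0.4, 0.7)}
```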
  • The present disclosure also provides a virtual fitting system, including: a display device, an image acquisition device, and a server, where the image acquisition device is connected to the display device and the display device establishes a communication connection with the server. The image acquisition device is configured to collect image data in real time, where the image data includes images associated with the user, and to perform image signal processing on those images to generate image data associated with the user; it is also configured to send the user-associated image data to the display device. The display device is configured to obtain the user-associated image data and send it to the server. The server has a built-in model reconstruction application and is configured to receive the user-associated image data, run the model reconstruction application, generate a human body model according to the user-associated image data, and send the human body model to the display device. The display device is further configured to add clothing materials to the human body model to synthesize a rendered model, and to extract action parameters from the user-associated image data and adjust the model pose of the rendered model according to the action parameters to render a fitting picture.
  • The present disclosure also provides a virtual fitting method, which is applied to a virtual fitting system.
  • The virtual fitting system includes: a display device, an image acquisition device, and a server, where the image acquisition device is connected to the display device and the display device establishes a communication connection with the server.
  • The virtual fitting method includes: the image acquisition device collects images associated with the user in real time and performs image signal processing on them to generate image data associated with the user; the display device obtains the user-associated image data and sends it to the server; the server receives the user-associated image data, generates a human body model based on it, and sends the human body model to the display device; the display device adds clothing materials to the human body model to synthesize a rendered model; and the display device extracts action parameters from the user-associated image data and adjusts the model pose of the rendered model according to the action parameters to render a fitting picture.
  • An embodiment of the present disclosure also provides a computer-readable non-volatile storage medium, on which computer instructions are stored, and when the computer instructions are executed by a processor, the computer device executes the above method.
  • FIG. 1 is a scene diagram of a virtual fitting system in an embodiment of the present disclosure.
  • FIG. 2 is a hardware configuration diagram of a display device in an embodiment of the present disclosure.
  • FIG. 3 is a schematic structural diagram of a home smart wardrobe in an embodiment of the present disclosure.
  • FIG. 4 is a schematic structural diagram of a display device with a built-in camera in an embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of a display device connected to an external image acquisition device in an embodiment of the present disclosure.
  • FIG. 6a is a schematic diagram of a virtual fitting interface in an embodiment of the present disclosure.
  • FIG. 6b is a display effect diagram of clothing option classification in an embodiment of the present disclosure.
  • FIG. 6c is a schematic diagram of an interface for identifying clothes in an embodiment of the present disclosure.
  • FIG. 6d is a schematic diagram of an interface for selecting a clothing color in an embodiment of the present disclosure.
  • FIG. 6e is a schematic diagram showing the effect of displaying purchase links in an embodiment of the present disclosure.
  • FIG. 6f is a schematic diagram of a displayed purchase interface in an embodiment of the present disclosure.
  • FIG. 6g is a schematic diagram of the effect of displaying product sales positions in a smart fitting mirror in an embodiment of the present disclosure.
  • FIG. 7 is a schematic flow diagram of a virtual fitting method in an embodiment of the present disclosure.
  • FIG. 8 is a software configuration diagram of a server in an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of a fitting application interface in an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of a key-frame-based action driving process in an embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram of a process for matching associated clothes in an embodiment of the present disclosure.
  • FIG. 12 is a schematic diagram of an expression matching process in an embodiment of the present disclosure.
  • FIG. 13 is a sequence diagram of data interaction in the virtual fitting system in an embodiment of the present disclosure.
  • In some embodiments, the virtual fitting system is a service platform integrating hardware and software, which can construct a virtual model through augmented reality (AR), artificial intelligence (AI), and 3D vision technologies, and generate a display picture based on the virtual model.
  • FIG. 1 is an exemplary usage scene diagram of the virtual fitting system in some embodiments of the present disclosure.
  • the virtual fitting system provided by the present disclosure may include a control device 100 , a display device 200 , an intelligent terminal 300 , and a server 400 .
  • the virtual fitting system can realize the virtual fitting function through the collaborative work among multiple devices.
  • the control device 100 and the smart terminal 300 can be used for user interaction, and can input control instructions to the display device 200 based on the virtual fitting user interface provided by the display device 200 .
  • Both the display device 200 and the server 400 have data processing capability.
  • the display device 200 is deployed locally, the server 400 is deployed in the cloud, and the display device 200 and the server 400 can exchange data.
  • The control device 100 can receive the user's input operation instructions and convert them into instructions that the display device 200 can recognize and respond to, playing an intermediary role between the user and the display device 200.
  • The control device 100 may be a remote controller; communication between the remote controller and the display device 200 includes infrared protocol communication, Bluetooth protocol communication, and other short-distance communication methods, and the display device 200 is controlled wirelessly or by wire.
  • the user can control the display device 200 by inputting user commands through buttons on the remote control, voice input, control panel input, and the like.
  • a smart terminal 300 (such as a mobile terminal, a tablet computer, a computer, a notebook computer, etc.) can also be used to control the display device 200 .
  • the display device 200 is controlled using an application program running on a smart terminal.
  • the display device may not use the above-mentioned smart terminal or control device to receive instructions, but may receive user control through touch or gesture.
  • the display device 200 can also be controlled in a manner other than the control device 100 and the smart terminal 300.
  • For example, a voice command acquisition module configured inside the display device 200 can directly receive the user's voice command control, or the user's voice command control can be received through a voice control device installed outside the display device 200.
  • FIG. 2 is an exemplary hardware configuration diagram of a display device in some embodiments of the present disclosure.
  • the display device 200 includes one or more combinations of functional modules such as a power supply 210 , a communicator 220 , a memory 230 , an interface module 240 , a controller 250 , and a display 260 .
  • the power supply 210 can supply power to the display device 200 so that each functional module can be powered on and run.
  • the communicator 220 is used to establish a communication connection relationship between the display device 200 and the server 400, for example, the display device 200 communicates through a local area network (LAN), a wireless local area network (WLAN) and other networks.
  • the server 400 may provide various contents and interactions to the display device 200 .
  • the memory 230 is used to store various information and application data.
  • the interface module 240 is used to connect the display device 200 with peripheral devices to realize the input or output of specific types of signals.
  • the controller 250 controls the operation of the display device 200 and responds to user's operations by running various software control programs stored in the memory 230 .
  • The display 260 is used to present a user interface, giving the display device 200 a screen display function; by running the application programs in the memory 230, the display device 200 can present specific display pictures on the display 260, such as a playback interface, a user interface, or an application program interface.
  • the display 260 may take different forms and have different display ratios.
  • the display 260 may have a shape corresponding to a standard display screen ratio.
  • For example, the display resolution of a smart TV is 3840 × 2160, that of a personal computer is 1920 × 1080, and that of a mobile display terminal is 2400 × 1080.
  • the displays 260 included in them can be designed in a shape and proportion suitable for actual purposes.
  • For example, the display 260 of a virtual reality device includes two square screens on the left and right.
  • FIG. 3 shows a schematic structural diagram of a home smart wardrobe in some embodiments of the present disclosure. As shown in FIG. 3, the display device 200 of the smart wardrobe may be an elongated display matching the width and height of the wardrobe door.
  • To realize virtual fitting, the display device 200 should be able to acquire a portrait picture associated with the user.
  • the display device 200 can acquire portrait images through the built-in image acquisition module 270 , that is, the display device 200 further includes the image acquisition module 270 on the basis of including the above-mentioned various functional modules.
  • FIG. 4 shows a schematic structural diagram of a display device with a built-in camera in some embodiments of the present disclosure.
  • the image acquisition module is a camera disposed on the top or bottom of the display device 200 .
  • the display device 200 can acquire portrait images through an external image acquisition device 500 .
  • FIG. 5 shows a schematic structural diagram of a display device externally connected to an image acquisition device in some embodiments of the present disclosure, that is, as shown in FIG. 5 , the display device 200 can be connected to the image acquisition device 500 through the interface module 240 .
  • the image acquisition device 500 has a built-in camera and a transmission circuit, and can take pictures of the user through the camera, and then send the captured image or video to the display device 200 for display through the transmission circuit and the interface module 240 .
  • the display device 200 acts as a direct interaction device for the user, which can receive the user's control instructions, and perform data processing according to the control instructions to form a user interface containing different contents, which are presented through the display 260 .
  • the display device 200 can be used as a dedicated device of the virtual fitting system, that is, the display device 200 is only used to run the virtual fitting program and present the virtual fitting interface.
  • the display device 200 can be applied to a robot assistant in a shopping mall environment, and the robot assistant can perform voice interaction with the user to realize a virtual fitting function.
  • the display device 200 can also be an implementation device of the virtual fitting system, that is, the display device 200 has many functions, and the virtual fitting function is one of the many functions.
  • the display device 200 may be a personal computer, and a virtual fitting application may be installed in the personal computer so that it can implement a virtual fitting function.
  • various application programs can be installed in the display device 200 for realizing specific functions.
  • the installed applications can be system applications or third-party applications.
  • a "virtual fitting" application program for users to download and install may be displayed in the application store provided by the operator of the display device 200 .
  • Clothing materials can be pre-built into the "virtual fitting" application program; the display device 200 can then run the application program in response to the user's input operation, receive the image associated with the user, synthesize that image with the pre-built clothing materials, and display the synthesis effect to achieve the purpose of fitting.
  • the fitting application program can be an independent application program or a functional module integrated in a specific application.
  • the display device 200 may display a control or icon for activating the "virtual fitting" function in the shopping application.
  • the shopping application can display a prompt interface to the user, and the prompt interface includes controls or icons used to guide the user to input an image associated with the user.
  • The display device 200 then invokes the material model of the clothing being purchased, uses the "virtual fitting" function to synthesize the user-associated image with the material model, and outputs a picture of the fitting effect.
  • the display device 200 may perform image synthesis processing based on the acquired portrait picture associated with the user.
  • In some embodiments, the display device 200 synthesizes the portrait image and the clothing image by adding virtual clothing patterns on top of the portrait image to realize virtual fitting. For example, after acquiring the user-associated portrait picture, the display device 200 can perform feature recognition on it to identify the wearable positions of the portrait, including the upper limbs, lower limbs, hands, feet, neck, and top of the head, and then extract virtual clothing materials from the clothing material library and add the corresponding material at each wearable position to complete the virtual fitting.
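  • A minimal sketch of this wearable-position step, assuming feature recognition yields named body regions (the region and slot names below are illustrative, not taken from the disclosure):

```python
# Map recognized body regions to clothing slots, then pick one
# material per slot; names are illustrative assumptions.
WEARABLE_SLOTS = {
    "upper_limbs": "top",
    "lower_limbs": "bottoms",
    "feet": "shoes",
    "head_top": "hat",
    "neck": "scarf",
}

def attach_materials(detected_regions, material_library):
    """Choose one clothing material for each detected wearable region."""
    outfit = {}
    for region in detected_regions:
        slot = WEARABLE_SLOTS.get(region)
        if slot and slot in material_library:
            outfit[slot] = material_library[slot]
    return outfit

print(attach_materials(["upper_limbs", "feet"],
                       {"top": "shirt_01", "shoes": "sneaker_03"}))
# {'top': 'shirt_01', 'shoes': 'sneaker_03'}
```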
  • the display device 200 may also perform screen synthesis by adding portrait patterns based on virtual clothing patterns.
  • the virtual fitting application can add a portrait display area to the head area corresponding to the virtual clothing pattern, and display the head image in the acquired portrait pattern picture in the display area, thereby synthesizing a virtual fitting picture.
  • the virtual clothing pattern can be obtained through clothing materials stored in the memory 230 or the cloud server 400 .
  • the operator of the virtual fitting system can generate virtual clothing materials by performing multi-angle image shooting, 3D modeling and other processing methods according to the popular clothing currently on sale.
  • the generated virtual clothing material can be stored in the server 400 of the virtual fitting system.
  • After the display device 200 starts the virtual fitting application and the user selects the clothing to be tried on, the display device 200 can request the corresponding virtual clothing material from the server 400 according to the user's selection.
  • In some embodiments, the display device 200 may cache requested virtual clothing materials according to the usage of its own memory 230. When the virtual fitting system is used subsequently, the corresponding virtual clothing material is first matched in the local cache of the memory 230 according to the clothing the user selected; if the local cache contains it, the material is extracted from the cache, and if it does not, the material is requested from the server 400 again.
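  • The lookup just described is a classic cache-aside pattern; a minimal sketch, with `fetch_from_server` as a hypothetical stand-in for the request to the server 400:

```python
# Cache-aside lookup for virtual clothing materials: try the local
# cache first, fall back to the server on a miss.
local_cache = {}

def get_clothing_material(material_id, fetch_from_server):
    material = local_cache.get(material_id)
    if material is None:                      # cache miss
        material = fetch_from_server(material_id)
        local_cache[material_id] = material   # keep it for next time
    return material

# Toy usage with a fake server fetch.
print(get_clothing_material("top_A", lambda mid: f"<model:{mid}>"))
print(local_cache)   # {'top_A': '<model:top_A>'}
```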
  • the virtual clothing material may include parameters with multiple dimensions independent of each other, including: clothing style, color, material, and the like.
  • the display device 200 can present a variety of different clothing styles through the arrangement and combination of parameters in different dimensions.
  • In some embodiments, the same virtual clothing material can be used for the same type of clothing, and different appearances can be obtained by adjusting parameters such as color and material during the virtual try-on. The display device 200 can therefore combine multiple clothing appearances from less model data, reducing both the amount of clothing model construction and the amount of data transmitted when requesting clothing materials.
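  • A small illustrative sketch of such independently parameterized materials (the field names are assumptions):

```python
# One base clothing model reused across appearance variants, so only
# surface parameters differ, not the geometry that must be built and
# transmitted.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ClothingMaterial:
    style: str    # shared 3D model, e.g. "t_shirt"
    color: str
    fabric: str

base = ClothingMaterial(style="t_shirt", color="white", fabric="cotton")
variants = [replace(base, color=c) for c in ("black", "navy", "red")]
print(len(variants), "appearances from one underlying model")  # 3
```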
  • the display device 200 may present a virtual fitting interface for the user after running the virtual fitting application.
  • Fig. 6a shows a schematic diagram of a virtual fitting interface of some embodiments of the present disclosure.
  • the virtual fitting interface may include display windows, clothing options, control options, and outfit recommendation options.
  • the virtual fitting interface can display different options to the user, and the display device 200 responds to the user's selection operation on the virtual fitting interface to control and synthesize a virtual fitting image.
  • For example, the portrait image of the user collected by the image acquisition device 500 can be displayed in the display window in real time. If the selected clothing option is "clothing A", the display device 200 can call the virtual clothing material corresponding to "clothing A" from the local cache or the server 400 and display it through the display window; similarly, if "clothing B" is selected among the multiple clothes, the display device 200 calls and displays the virtual clothing material corresponding to "clothing B".
  • FIG. 6b shows an exemplary clothing option classification display effect diagram of some embodiments of the present disclosure.
  • the display device 200 can simultaneously display images corresponding to the selected top clothing “top A” and bottom clothing “pants B” in the display window.
  • Since "Top A" and "Top B" are both tops, they cannot be selected at the same time; the clothing material selected later replaces the one selected first.
  • virtual clothing materials of the same category can be further divided into more detailed categories.
  • the classification of tops can be further divided into categories such as spring clothes, summer clothes, autumn clothes, and winter clothes according to the wearing season; it can also be further divided into categories such as coats and shirts according to the wearing position.
  • For these different categories, the display device 200 also needs to call different virtual clothing materials.
  • In some embodiments, the display device 200 determines the clothing to be tried on by acquiring information input by the user, such as the product name and item number of the clothing. For this type of interaction, the display device 200 can search the clothing material library according to the input information, so as to call the clothing material related to it.
  • the display device 200 can also identify image information such as display pictures, physical images, and barcodes on hang tags based on image recognition technology to determine the clothing to be tried on.
  • FIG. 6c shows a schematic diagram of an interface for identifying clothing in some embodiments of the present disclosure. The virtual fitting interface may include an "Identify Clothes" option; if the user selects this option, the display device 200 can automatically start the image acquisition device 500 to acquire a clothing image, recognize the clothing image, and call the virtual clothing material according to the image recognition result.
  • The clothing image can be identified by calculating the similarity between the captured clothing image and preset standard images; the standard image with the highest similarity is taken as the recognition result, and the virtual clothing material corresponding to that standard image is called from the database.
  • a clothing option in the virtual fitting interface corresponds to a type of virtual fitting material.
  • the clothing parameters can be adjusted through the control options.
  • FIG. 6d shows a schematic diagram of an interface for selecting the color of clothing in some embodiments of the present disclosure.
  • As shown in FIG. 6d, the display device 200 can display color control options in the virtual fitting interface, offering the user the function of selecting any one of the preset colors and controlling the current color of the virtual clothing displayed in the display window.
  • control options can also be used to control and adjust the display screen of the display window, for example, rotating the display angle, partially zooming in, adjusting brightness, and beautifying functions.
  • Each interactive action is adjusted based on the input parameters matched with the option selected by the user, thereby controlling the presentation effect of the display window on the display device 200.
  • the outfit recommendation option is used to enable or disable the outfit recommendation function of the virtual fitting application.
  • After any clothing option selected by the user is determined, the clothing recommendation function can automatically display a recommended-clothing picture in the display window according to the selected clothing.
  • the clothing recommendation function can match the clothing materials of unselected categories in the virtual clothing material library according to a specific style algorithm and based on the clothing options selected by the user.
  • the outfit recommendation algorithm can perform unified matching operations based on dimensions such as color, type, and style. For example, in response to the user's operation of selecting black and formal tops, the outfit recommendation algorithm can automatically match black and formal bottoms and shoes, and call the corresponding virtual clothing model, which will be displayed together with the selected top virtual clothing model in the display window.
  • the virtual fitting interface presented by the display device 200 may also include prompt content and link options.
  • the virtual fitting interface can also include a "save outfit" option.
  • The user can click the "save outfit" option to save all the clothing patterns shown in the current display window.
  • In response to the user's click on "save outfit", the virtual fitting application can also automatically display the purchase links or shopping guide information of the corresponding clothing, so that the user can purchase the clothing according to the virtual fitting results.
  • FIG. 6e shows a schematic diagram of displaying purchase link effects in some embodiments of the present disclosure
  • FIG. 6f shows a schematic diagram of displaying purchase interface in some embodiments of the present disclosure.
  • the display device 200 may present a product link corresponding to the virtual clothing pattern displayed in the current display window for the user to select. After obtaining any product link selected by the user, the display device 200 can jump from the virtual fitting interface to the product detail interface.
  • For example, the application displays an icon or control prompting the user to input an image associated with the user; in response to the user's operation, it opens the file manager, lets the user select a picture file from its save path, and uses that picture file as the user-associated image input.
  • The built-in camera of the display device 200 or the external image acquisition device can also take a photo of the user and use the photographed image as the user-associated image input into the application.
  • In some embodiments, the virtual fitting application can also provide a dynamic fitting picture, obtaining the user-associated images through a video file uploaded by the user or a video recorded in real time.
  • The input video picture can be displayed in a specific area of the application interface to obtain a better composite effect.
  • In some embodiments, the display device 200 can also use augmented reality technology to add a virtual clothing material model to the user-associated video uploaded by the user and track the person target, so that the clothing material model follows the movement of the person target in the video.
  • However, this dynamic fitting method is constrained by the limitations of AR technology: the clothing material fuses poorly with the person target in the video, and the clothing cannot accurately fit the target.
  • In addition, there is a large delay between the character's movements and the clothes following them, so the interactive experience of the virtual fitting function is poor.
  • the virtual fitting system includes a display device 200 , a server 400 and an image acquisition device 500 .
  • The image acquisition device 500 is used to collect images associated with the user in real time and perform image signal processing to form image data associated with the user.
  • a model reconstruction application can be built in the server 400, and the model reconstruction application can generate a human body model according to the image data associated with the user.
  • the server 400 sends the human body model to the display device 200 .
  • the display device 200 is used to run the virtual fitting application program, and render the human body model to synthesize and display the virtual fitting picture.
  • the virtual fitting method includes the following:
  • the image associated with the user is collected in real time by the image collection device 500 .
  • the image acquisition device 500 may include a camera and a data processing module, wherein the camera may capture images of the user's environment to obtain images associated with the user.
  • the data processing module may perform image signal processing on the image associated with the user to generate image data associated with the user, and input the image data associated with the user into the display device 200 as an image signal.
  • In some embodiments, the image acquisition device 500 may be a functional module built into the display device 200, making the display device 200 an integrated device.
  • the image acquisition device 500 is a camera on the display device 200, which can be under the unified control of the controller 250 in the display device 200, and can directly send the collected images associated with the user to the controller 250.
  • the camera should be set at a specific position on the display device 200 in order to collect images associated with the user.
  • the camera can be set on the top of the smart TV, and the shooting direction of the camera is the same as the light emitting direction of the screen of the smart TV, so that images associated with the user located in front of the screen can be captured.
  • In some embodiments, the image acquisition device 500 can also be an external device connected to the display device 200. As shown in FIG. 5, the interface module 240 may include a High-Definition Multimedia Interface (HDMI), an analog or digital high-definition component input interface, a Composite Video Broadcast Signal (CVBS) interface, a Universal Serial Bus (USB) interface, or an RGB port, each of which supports a specific data transmission method. After being connected to the interface module 240, the image acquisition device 500 can send the collected user-associated image to the display device 200 through the data transmission method supported by that interface. For example, after being connected to a smart TV, the image acquisition device 500 may communicate with the smart TV through Open Natural Interaction (OpenNI), so as to send the collected user-associated image to the smart TV.
  • In some embodiments, the image data associated with the user sent by the image acquisition device 500 to the display device 200 may also include data such as image recognition results, bone parameters, expression parameters, and gesture recognition results. To this end, the data processing module of the image acquisition device 500 can have a built-in image processing application: after the camera captures a user-associated image, the data processing module runs the application to recognize the image.
  • For example, by running image processing applications with different functions, the image acquisition device 500 can identify content such as "image-depth" data, 3D human skeleton key point coordinates, the recognized position of the portrait head, and portrait target tracking points in the user-associated image. When this recognized content is sent to the display device 200, it can provide the display device 200 with functional support such as RGBD image data, limb driving, human body tracking, face reconstruction material, and expression driving material. Accordingly, these data, together with the user-associated images, constitute the user-associated image data.
  • the image acquisition device 500 needs to have specific hardware support.
  • For example, the image acquisition device 500 can be a camera group with multiple lenses; the lenses observe the same target from different positions, so that the depth information of the target can be calculated by comparing the capture results and angle differences between the lenses.
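  • For a calibrated two-lens rig, this comparison reduces to the standard stereo triangulation relation; this is general background, not a formula stated in the disclosure:

```latex
% Depth from disparity for a rectified stereo pair:
%   Z = depth of the target point,
%   f = focal length (in pixels),
%   B = baseline distance between the two lenses,
%   d = disparity (pixel offset of the point between the two views).
Z = \frac{f \cdot B}{d}
```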
  • In some embodiments, the image acquisition device 500 can also integrate sensor elements with specific functions, so that the user-associated images it collects include data usable for three-dimensional modeling.
  • For example, a lidar component can be built in: the lidar emits laser light into the shooting area and detects the reflected laser signal to measure the distance between targets in the shooting area and the image acquisition device 500, and the scanning result is correlated with each frame captured by the camera to generate user-associated image data with image depth.
  • FIG. 7 shows a schematic flowchart of a virtual fitting method in some embodiments of the present disclosure.
  • After the image acquisition device 500 acquires the user-associated image and generates the user-associated image data, it can send the data to the display device 200.
  • the display device 200 acquires the image data associated with the user as data input for the fitting application.
  • After the fitting application is started, the display device 200 may send an activation command to the image acquisition device 500, controlling it to start capturing user-associated images and to process them into user-associated image data.
  • the display device 200 may acquire image data associated with the user including different contents from the image acquisition device 500 .
  • For example, the display device 200 may specify in the start command sent to the image acquisition device 500 that the user-associated image data include only image content.
  • When additional data are needed, they must likewise be specified in the start command, for example "image-depth" data, 3D human skeleton key point coordinates, and other data content.
  • The forms of the images may also differ.
  • For example, user-associated image data can be obtained from the image acquisition device 500 in the form of still pictures, or in the form of a video stream.
  • After the display device 200 acquires the user-associated image data, it can render the fitting result picture according to those data. To present a better clothing fusion effect, the display device 200 may generate the fitting picture based on a human body model; thus, after acquiring the user-associated image data, it could create a human body model from them. However, because creating a human body model involves a huge amount of data processing and the hardware configuration of the display device 200 is limited, a human body model created locally by the display device 200 would be rough and inaccurate. Therefore, in some embodiments, after acquiring the user-associated image data, the display device 200 sends them to the server 400 for modeling processing.
  • a model reconstruction application may be built in the server 400 .
  • a model reconstruction application can generate a mannequin based on image data associated with a user.
  • In the model reconstruction application, multiple initial human body models may be pre-configured, set according to factors such as age and gender.
  • For example, after receiving the user-associated image data, the server 400 can first recognize the image to determine information such as the age and gender of the person in it, and then call the appropriate one of the preset initial human body models based on this information.
  • In some embodiments, the server 400 can also read data such as the "image-depth" data and 3D human skeleton key point coordinates in the user-associated image data, and modify and adjust the initial human body model according to the read data, so that the initial model gradually takes on the portrait features present in the user-associated image data.
  • For example, the server 400 may extract the proportional relationship between the user's head width and shoulder width from the user-associated image data, and adjust the head width and shoulder width of the initial model according to that proportion.
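  • A minimal sketch of this proportion-driven adjustment; the field names and units are illustrative assumptions:

```python
# Scale the initial model's head width so its head/shoulder ratio
# matches the ratio measured from the user-associated image data.
def adjust_head_to_shoulder(model, user_head_w, user_shoulder_w):
    target_ratio = user_head_w / user_shoulder_w
    model["head_width"] = model["shoulder_width"] * target_ratio
    return model

initial = {"head_width": 0.20, "shoulder_width": 0.45}
print(adjust_head_to_shoulder(initial, user_head_w=22, user_shoulder_w=44))
# {'head_width': 0.225, 'shoulder_width': 0.45}
```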
  • FIG. 8 shows a software configuration diagram of an exemplary server in some embodiments of the present disclosure.
  • the human body model created by the server 400 may include multiple parts, each part can be parameterized and adjusted independently, and supports replacement through a preset material library, so as to quickly generate the human body model.
  • a variety of hairstyle materials can be pre-stored in the server 400.
  • After the server 400 receives the user-associated image data, it can match a similar model in the hairstyle material library according to the hairstyle and hair color of the portrait in the data, and add the matched hair model to the mannequin's head as the initial hairstyle.
  • In some embodiments, the fitting application allows the user to freely change the hairstyle of the character in the virtual fitting effect picture.
  • When the user changes the hairstyle, the display device 200 can request the server 400 to rebuild the model; the server 400 then selects a new hairstyle material from the hairstyle material library and adds it to the human body model to reconstruct the human body model.
  • multiple modeling modules can be built in the server 400, including a head reconstruction module, a body reconstruction module, an expression recognition module, a trial hair module, and the like.
  • Each modeling module can construct a corresponding virtual model through a specific model reconstruction method.
  • Head modeling can further include geometric reconstruction and texture reconstruction units. The geometric reconstruction unit can perform point cloud denoising, triangulation, point cloud smoothing, and point cloud fusion according to the user-associated image data, so that the shape of the head model gradually becomes consistent with the person target in those data.
  • The texture reconstruction unit can perform portrait segmentation, skin color migration, and skin color fusion on the head model, so that the appearance of the head model gradually becomes consistent with the person target in the user-associated image data.
  • models of body parts can also be generated based on specific modeling methods.
  • For example, a body model can be created through a Skinned Multi-Person Linear model (SMPL), so the body reconstruction module may include an SMPL model generation unit, a parameterized mapping unit, and a stitching unit.
  • The server 400 can generate an initial model through the SMPL model generation unit, extract human body parameters from the user-associated image data so as to map the SMPL model to a parameterized body model corresponding to those data, and finally splice the body model and the head model through the stitching unit to obtain the human body model.
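  • For reference, SMPL (Loper et al., 2015) deforms a template mesh with shape parameters β and pose parameters θ and then applies linear blend skinning; the parameter mapping step above amounts to fitting β and θ to the measurements extracted from the user-associated image data:

```latex
% SMPL body model: blend-shaped template, then linear blend skinning.
%   \bar{T}: template mesh, B_S: shape blend shapes, B_P: pose blend
%   shapes, J(\beta): joint locations, \mathcal{W}: skinning weights.
M(\beta, \theta) = W\!\big(T_P(\beta, \theta),\; J(\beta),\; \theta,\; \mathcal{W}\big),
\qquad
T_P(\beta, \theta) = \bar{T} + B_S(\beta) + B_P(\theta)
```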
  • In some embodiments, in order to obtain the human body model, when the display device 200 sends the user-associated image data to the server 400, it can identify the portrait target in the data, add skeleton key points to the portrait target according to the recognition result to generate skeleton parameters, and send the skeleton parameters to the server 400, so that the server 400 can set the joint point positions of the human body model according to them.
  • the human body model after adding the bone parameters can change according to the rules corresponding to the bone parameters, so as to simulate a model posture that is more in line with the real state of the character.
  • the built-in processing unit can also be used to establish the expression base model and estimate the expression parameters, so as to set specific expression forms for the human body model according to the expressions of the characters in the image data associated with the user.
  • the preset functional units can be used to set the hair matching function and hair penetration processing function, etc. to create a more realistic hair model.
  • Since the server 400 generates or reconstructs the human body model for the subsequent virtual fitting process, in some embodiments, in addition to the built-in application for generating the human body model, the server 400 may also have a built-in application for generating the clothing model.
  • the server 400 can perform clothing modeling, clothing material simulation, and clothing deformation processing by running the application, so as to create a clothing model based on clothing image data.
  • To this end, the display device 200 can also send clothing image data to the server 400.
  • The clothing image data can be obtained by photographing the clothing from multiple angles, for example by obtaining six views of the clothing from a product interface: the front, back, left, right, bottom, and top views.
  • The server 400 can automatically model and generate the clothing pattern from these images, and then obtain a clothing model that matches the real appearance through rendering processes such as cloth simulation and clothing deformation.
  • the server 400 may send the generated human body model to the display device 200 .
  • After receiving the human body model, the display device 200 renders it to produce a picture with the fitting effect. During the rendering of the human body model, the display device 200 may add clothing materials to it to synthesize a rendered model.
  • the display device 200 may also optimize the human body model sent by the server 400 .
  • For example, the display device 200 can perform head synthesis and restoration processing on the human body model based on the Unity engine, including shaping the head through the MeshFilter and MeshRenderer components, performing texture processing through texture and normal-map tools, and processing head shapes through BlendShape/morph tools.
  • A face parameter model algorithm is used to process the expression parameter information and to adjust the hair to fit the head shape.
  • the display device 200 may also display the image part in the image data associated with the user while sending the image data associated with the user to the server 400 .
  • FIG. 9 shows a schematic diagram of a fitting application interface of some embodiments of the present disclosure.
  • the program interface of a fitting application program can include a fitting window and an original window.
  • the rendering results of the human body model and clothing materials can be displayed in the fitting window, and the image data associated with the user can be displayed in the original window.
  • the image part is the image frame captured by the image acquisition device 500 in real time.
  • After the display device 200 synthesizes the rendered model, it can also extract action parameters from the user-associated image data and adjust the model pose of the rendered model according to the action parameters to render a fitting picture. That is, the display device 200 can determine the user's action from the user-associated image data and control the rendered model to follow that action.
  • the actions that can be presented in the fitting screen include body actions, head actions, facial expressions, gesture actions, and clothing actions.
  • the motion parameters can be calculated and obtained by the image acquisition device 500 through head detection, 3D human skeleton key point detection, facial expression detection, and gesture recognition detection.
  • In some embodiments, the display device 200 can detect the user's action by comparing multiple frames of the user-associated image data. That is, in the step of adjusting the model pose of the rendered model according to the action parameters, the display device 200 can traverse the skeleton key points of each image frame in the user-associated image data, compare the skeleton key point positions of adjacent frames to obtain the movement distance of each key point, and move the joint point positions of the human body model according to those distances. Through the coordinated adjustment of multiple skeleton key points, the display device 200 can make the pose of the rendered model follow the image content of each frame to the same action, realizing the action-following effect.
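  • A minimal sketch of this frame-to-frame following, assuming skeleton key points arrive as named 2D coordinates (the joint name and coordinate format are illustrative):

```python
import numpy as np

def update_joints(model_joints, prev_kps, curr_kps):
    """Move each model joint by the key point's frame-to-frame displacement."""
    for name, curr in curr_kps.items():
        prev = prev_kps.get(name)
        if prev is not None:
            delta = np.asarray(curr) - np.asarray(prev)   # movement distance
            model_joints[name] = model_joints[name] + delta
    return model_joints

joints = {"left_wrist": np.array([0.40, 0.70])}
prev = {"left_wrist": (0.40, 0.70)}
curr = {"left_wrist": (0.45, 0.65)}   # the user moved the wrist
print(update_joints(joints, prev, curr))   # left_wrist -> [0.45, 0.65]
```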
  • In some embodiments, the display device 200 can make the human body model move plausibly through skinning, that is, by adding bones to the model. For example, after obtaining the human body model, the display device 200 can skin the human body model and the clothing model, provide a texture for the cloth simulation algorithm used in skinning, and use BlendShape/morph tools to process the skeletal skinning animation of the model.
  • It can be seen that the virtual fitting method collects user-associated image data in real time through the image acquisition device 500, generates a human body model through the server 400 according to those data, and adds clothing materials to the model on the display device 200 to form a rendered model.
  • the display device 200 can also acquire user actions according to the image data associated with the user, and adjust the model posture of the rendering model in real time according to the user actions, so as to realize avatar driving.
  • the method can realize the effect of 3D, dynamic, real-time and multi-angle display of the fitting screen through the human body model.
  • the server 400 executes model building and reconstruction, which can share the data processing load of the display device 200 and is conducive to generating a more detailed and realistic character model.
  • FIG. 10 shows a schematic diagram of a key frame-based action driving process in some embodiments of the present disclosure.
  • the server 400 sets the human body model parameters according to the image depth parameters, and sends the human body model to the display device 200 .
  • The display device 200 may then execute step S1004 in the subsequent application process.
  • the key frame may be a frame of image corresponding to a specific time point.
  • the display device 200 may extract a frame of image from the image data associated with the user every 1 second as a key frame.
  • The key frame may also be a frame of image obtained at intervals of a certain number of frames.
  • the display device 200 may extract a frame of image from image data associated with the user at an interval of 20 frames as a key frame.
  • The key frame may also be an image frame with a distinct portrait target, obtained by performing image recognition on the frames. For example, the user-associated image data are input into a portrait recognition model frame by frame, and a frame is marked as a key frame when the model determines that it contains a portrait.
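  • The three extraction strategies just listed (fixed time interval, fixed frame interval, recognition-based) can be sketched as one generator; `has_portrait` is a hypothetical stand-in for the portrait recognition model:

```python
def key_frames(frames, fps=20, every_seconds=None, every_n=None,
               has_portrait=None):
    """Yield key frames by time interval, frame interval, or recognition."""
    for i, frame in enumerate(frames):
        if every_seconds is not None and i % int(fps * every_seconds) == 0:
            yield frame                     # e.g. one frame per second
        elif every_n is not None and i % every_n == 0:
            yield frame                     # e.g. every 20th frame
        elif has_portrait is not None and has_portrait(frame):
            yield frame                     # frame contains a portrait

stream = range(100)   # stand-in for decoded video frames
print(list(key_frames(stream, every_n=20)))   # [0, 20, 40, 60, 80]
```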
  • the initial key frame is an image frame used to extract image depth parameters to generate a human body model.
  • the real-time key frame is the key frame extracted by the display device 200 in real time from the image data associated with the user.
  • The image similarity can be determined by computing the histograms of the two images separately and then calculating a normalized correlation measure between the two histograms, such as the Bhattacharyya distance or the histogram intersection distance.
  • When the user is stationary, the contents of two key frames are the same; when the user is moving, their contents differ, and the larger the range of motion, the lower the similarity between the two frames. The user's motion state can therefore be detected through image similarity.
  • the display device 200 may compare the image similarity with a preset similarity threshold.
  • Based on the comparison result, the display device 200 can execute the step of extracting motion parameters from the user-associated image data, so as to drive the human body model to move according to the motion parameters.
  • Alternatively, the display device 200 can extract the image depth parameters from the real-time key frame image and send them to the server 400, so that the server 400 reconstructs the human body model according to the image depth parameters.
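  • One possible realization of this similarity test, using OpenCV histograms and the Bhattacharyya distance named above (0 means identical, larger means more different); the 0.3 threshold is an illustrative assumption, not a value from the disclosure:

```python
import cv2
import numpy as np

def gray_hist(img):
    """64-bin grayscale histogram, normalized for comparison."""
    hist = cv2.calcHist([img], [0], None, [64], [0, 256])
    return cv2.normalize(hist, hist).flatten()

def user_is_moving(frame_a, frame_b, threshold=0.3):
    d = cv2.compareHist(gray_hist(frame_a), gray_hist(frame_b),
                        cv2.HISTCMP_BHATTACHARYYA)
    return d > threshold   # dissimilar key frames imply larger motion

dark = np.zeros((240, 320), dtype=np.uint8)
bright = np.full((240, 320), 255, dtype=np.uint8)
print(user_is_moving(dark, dark))     # False: identical content
print(user_is_moving(dark, bright))   # True: distance is 1.0
```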
  • The picture content in the images may gradually vary with the user's actions; that is, the image similarity between the initial key frame and subsequently extracted key frames gradually decreases. Therefore, to keep the model's driven motion consistent and reduce the number of remodeling operations, the display device 200 may record the initial key frame image after extracting the image depth parameters from it, and, after performing the step of extracting motion parameters from the user-associated image data, or after sending the image depth parameters to the server, replace the recorded initial key frame image with the real-time key frame image.
  • for example, the display device 200 may first extract an initial key frame T0 from the image data associated with the user, and send the initial key frame T0 to the server 400 to generate a human body model. When the display device 200 subsequently renders the fitting picture of the human body model, it can continuously acquire real-time key frames, such as T1. After acquiring the real-time key frame T1, the display device 200 can compare the initial key frame T0 with the real-time key frame T1 and calculate the similarity S01 of the two frames.
  • if the similarity S01 is greater than or equal to the preset similarity threshold, action parameters are extracted from the key frame T1 or from the image data associated with the user, so as to drive the human body model to generate actions that follow the changes in the user's actions.
  • the display device 200 may use the real-time key frame T1 to replace the initial key frame T0 as the initial key frame in the subsequent motion judgment process. That is, when the real-time key frame T2 is obtained, the similarity between the key frame T1 and the key frame T2 can be compared to continue tracking user actions or reconstructing the human body model.
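The rolling replacement of the reference key frame (T0 → T1 → T2 ...) might look like the following sketch, where `key_frame_stream` is an assumed generator of key frames and `handle_key_frame` is the hypothetical branch helper sketched above.

```python
def track_user(key_frame_stream):
    reference = next(key_frame_stream)   # initial key frame T0
    for realtime in key_frame_stream:    # real-time key frames T1, T2, ...
        handle_key_frame(reference, realtime, user_frames=[realtime])
        reference = realtime             # T1 replaces T0 for the next round
```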
  • the above embodiments can continuously analyze the user-associated image frames collected by the image collection device 500 through the initial key frame and the real-time key frames, so as to track the user's actual actions. Moreover, the display device 200 can determine whether the human body model needs to be reconstructed by comparing the image similarity between two adjacent key frames, so that the number of model reconstructions is reduced while the model stays synchronized in time, improving the responsiveness of the fitting application.
  • in addition to the real-time dynamic fitting function, the fitting application can also provide users with outfit recommendations. FIG. 11 shows a schematic diagram of the associated-clothing matching process in some embodiments of the present disclosure.
  • the display device 200 may acquire a selection instruction input by the user for selecting clothing.
  • multiple clothing options can be set in the interface, and the option selected by the user can be determined by capturing user interaction operations, thereby inputting a selection instruction.
  • the user may select multiple target clothes among the clothing options; that is, at least one target clothing item is specified in the selection instruction.
  • for example, the user may simultaneously select a top and a bottom to input the selection instruction.
  • the display device 200 may respond to the selection instruction and extract the target clothing material from the clothing material library. Clothing can be divided into multiple categories, such as tops, bottoms, shoes, hats, and bags, and these categories can be matched with each other to form the final dressing effect. Therefore, when the selection instruction indicates that not all categories are selected, the fitting application can automatically match suitable clothing of the other categories according to the characteristics of the selected clothing, so as to present a better fitting effect.
  • the display device 200 can match associated clothing materials in the clothing material library against the target clothing material according to preset dressing recommendation rules.
  • the preset dressing recommendation rules can be comprehensively set based on categories such as color, purpose, style, and applicable age, as sketched below. For example, when the user chooses a blue-and-white top in a fresh style, related clothing materials such as blue-and-white bottoms and shoes can be recommended to the user according to the preset dressing recommendation rules.
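A rule of this kind could be approximated as below. The material catalogue, its tag fields, and the scoring logic are all illustrative assumptions; the disclosure only states that rules combine color, purpose, style, and applicable age.

```python
# Hypothetical clothing material library with illustrative tags.
CLOTHING_LIBRARY = [
    {"id": "b01", "type": "bottoms", "color": "blue",  "style": "fresh"},
    {"id": "s01", "type": "shoes",   "color": "white", "style": "fresh"},
    {"id": "b02", "type": "bottoms", "color": "black", "style": "formal"},
]

def match_associated_clothing(target, library=CLOTHING_LIBRARY):
    def score(item):
        s = 0
        s += item["color"] == target["color"]  # prefer matching colors
        s += item["style"] == target["style"]  # prefer matching styles
        return s
    # For each missing category, pick the best-scoring material.
    missing = {"bottoms", "shoes"} - {target["type"]}
    return [max((i for i in library if i["type"] == t), key=score)
            for t in missing]

top = {"id": "t01", "type": "tops", "color": "blue", "style": "fresh"}
print(match_associated_clothing(top))  # recommends the fresh-style bottoms and shoes
```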
  • the display device 200 may add the target clothing material and the associated clothing material to the human body model, so as to form a rendering model presenting the dressing effect.
  • the display device 200 can preset multiple dressing recommendation rules, and each rule can match appropriate associated clothing materials according to the target clothing material selected by the user, previewing them through multiple windows for enhanced display.
  • the display device 200 can quickly implement expression switching by means of preset standard expression templates, as shown in FIG. 12, which is a schematic diagram of the expression matching process in some embodiments of the present disclosure.
  • the display device 200 can identify the head area in the image data associated with the user and detect the user's expression in the head area; then, according to the expression type of the detected expression, it matches an expression model of the same type in the preset expression library, thereby replacing the facial area in the rendered model with that expression model.
  • the display device 200 may identify the head area in the image associated with the user by detecting the target shape in the image data and the layout characteristics within that shape, and then use an expression recognition model to recognize the current user's expression in the head area.
  • the facial expression recognition model can be obtained by training an artificial intelligence model with sample data. That is, a large amount of sample data with expression labels is input into an initial model whose output is set to the classification probability that an image belongs to a specific expression category; the error between the classification probability and the expression label is then calculated and backpropagated to adjust the model parameters, so that the model output gradually converges to the label result, yielding the expression recognition model.
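The training procedure just described resembles a standard supervised classification loop. The following PyTorch sketch is one possible realization under assumed input dimensions (64×64 grayscale face crops) and an assumed number of expression classes; none of these values are fixed by the disclosure.

```python
import torch
import torch.nn as nn

NUM_EXPRESSIONS = 7  # assumed number of expression categories

# Minimal placeholder network; the disclosure does not specify an architecture.
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128),
                      nn.ReLU(), nn.Linear(128, NUM_EXPRESSIONS))
criterion = nn.CrossEntropyLoss()  # error between classification output and label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(images, labels):
    optimizer.zero_grad()
    logits = model(images)            # classification scores per expression
    loss = criterion(logits, labels)  # compare against the expression labels
    loss.backward()                   # backpropagate the error
    optimizer.step()                  # adjust the model parameters
    return loss.item()
```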
  • after the expression recognition model outputs the classification probabilities of the current image over the expression categories, the expression with the highest classification probability can be taken as the user's expression, such as a smile. An expression model of the same type, that is, the standard smile model, is then matched in the preset library according to the recognized expression, and the standard smile model is used to replace the facial area in the rendering model, so that the rendering model displays a smiling expression, as sketched below.
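At inference time, matching the recognized expression to a preset template could be sketched as follows. The label set, the template file names, and `replace_face_region` are hypothetical placeholders for the preset expression library and the model's facial-area replacement step; the label list must match the trained model's output classes.

```python
import torch

# Hypothetical label set and preset expression library.
EXPRESSION_LABELS = ["neutral", "smile", "surprise"]
EXPRESSION_LIBRARY = {name: f"{name}_template.obj" for name in EXPRESSION_LABELS}

def apply_expression(model, face_image, render_model):
    with torch.no_grad():
        probs = torch.softmax(model(face_image), dim=1)
    label = EXPRESSION_LABELS[int(probs.argmax())]  # highest classification probability
    template = EXPRESSION_LIBRARY[label]            # e.g. the standard smile model
    replace_face_region(render_model, template)     # hypothetical facial-area swap
```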
  • the above embodiment can quickly replace the facial area in the rendering model by matching a standard expression model, so that the display device 200 does not need to modify the facial parameters of the model, reducing the amount of data processing.
  • better facial expression tracking timeliness can be obtained, and the display effect can be improved.
  • some embodiments of the present disclosure further provide a display device 200 including: a display 260 , a camera, a communicator 220 and a controller 250 .
  • the display 260 is configured to display images and/or user interfaces;
  • the camera is configured to collect image data in real time, and the image data includes images associated with the user;
  • the communicator 220 is configured to establish a communication connection with the server 400, and the server 400 has a built-in model reconstruction application for generating a human body model from image data associated with a user;
  • the controller 250 is configured to perform the following program steps: acquire the image data associated with the user collected by the camera; send the image data associated with the user to the server 400 to generate a human body model; add clothing materials to the human body model to synthesize a rendered model; and extract action parameters from the image data associated with the user, adjusting the model pose of the rendered model according to the action parameters to render a fitting picture.
  • FIG. 13 shows an exemplary virtual fitting system data interaction sequence diagram of some embodiments of the present disclosure.
  • the present disclosure also provides a virtual fitting system, including: a display device 200, an image acquisition device 500, and a server 400, wherein the image acquisition device 500 is connected to the display device 200, and the display device 200 establishes a communication connection with the server 400.
  • the image acquisition device 500 collects an image associated with the user.
  • the image acquisition device 500 performs image signal processing on the image associated with the user, so as to generate image data associated with the user.
  • the image acquisition device 500 sends the image data associated with the user to the display device 200.
  • the display device 200 sends the image data associated with the user to the server 400.
  • the server 400 runs a model reconstruction application and generates a human body model according to the image data associated with the user.
  • the server 400 sends the human body model to the display device 200 .
  • the display device 200 adds clothing materials to the human body model to synthesize a rendered model; and extracts action parameters from image data associated with the user, and adjusts the model pose of the rendered model according to the action parameters to render a fitting picture.
  • the display device 200 sends the fitting picture to the device associated with the user.
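The sequence of steps above can be condensed into the following high-level sketch. Every function name is a hypothetical stand-in for a system component, not an API defined by the disclosure.

```python
def virtual_fitting_loop():
    frames = image_acquisition_device_stream()          # capture + ISP on device 500
    initial_key_frame = next(frames)
    body_model = server_reconstruct(initial_key_frame)  # server 400 builds the model
    render_model = add_clothing(body_model, selected_materials())
    for frame in frames:
        params = extract_action_parameters(frame)       # track the user's motion
        render_fitting_picture(render_model, params)    # adjust pose and render
```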
  • the entire virtual fitting system may include: a cloud server 400 , a local display device 200 and an image acquisition device 500 .
  • the cloud server 400 can be responsible for human body reconstruction, including basic algorithm modules such as head reconstruction, body reconstruction, expression recognition, and hair try-on, and provides the display device 200 with functional support such as the human body model, expression driving parameters, and hair try-on.
  • the server 400 may take the data collected by the image collection device 500 as input, and output the modeling results to the display device 200 to support the application.
  • the image acquisition device 500 is responsible for providing acquisition data, including image signal processing (Image Signal Processing, ISP) debugging, an RGBD stream, key points of the 3D human skeleton, human head detection, multi-target tracking data, and the like, which are transmitted through OpenNI to provide the display device 200 with functional support for RGBD image data, limb driving, human body tracking, face reconstruction material, and expression driving material.
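If OpenNI2 is used as the transport, reading the RGBD depth stream might look like the sketch below, based on the primesense Python bindings (an assumption; the disclosure does not name a specific binding). Device setup details and ISP debugging are omitted.

```python
from primesense import openni2  # PrimeSense OpenNI2 bindings (assumed installed)

openni2.initialize()                       # load the OpenNI2 runtime
device = openni2.Device.open_any()         # open the first available RGBD device
depth_stream = device.create_depth_stream()
depth_stream.start()

frame = depth_stream.read_frame()          # one depth frame from the stream
depth = frame.get_buffer_as_uint16()       # raw 16-bit depth values

depth_stream.stop()
openni2.unload()
```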
  • the display device 200 can be responsible for functions such as rendering of the human body model, display and rendering of clothing materials, motion driving, and local parameter adjustment.
  • the virtual fitting system can collect image data associated with the user in real time during use, and send the image data associated with the user to the server 400 to generate a human body model. Clothing materials are then added to the human body model to synthesize the rendering model, and the character's actions are extracted in real time, so that the model pose of the rendering model is adjusted according to the action parameters to form a fitting picture.
  • the virtual fitting system can realize a dynamic 3D virtual fitting function and display the user's actions in real time through the rendering model, so as to achieve the effect of clothes moving with the person together with outfit recommendations, solving the problem that traditional virtual fitting methods cannot display the fitting effect in real time.

Abstract

Disclosed are a display device, and a virtual fitting system and method. According to the method, image data associated with a user can be collected in real time during use, and the image data associated with the user can be sent to a server to generate a human body model. Clothing materials are added to the human body model to synthesize a rendering model, and character movements are extracted in real time, so that a model pose of the rendering model is adjusted according to motion parameters to form a fitting picture. The method can achieve a dynamic 3D virtual fitting function, and the user's movements are displayed in real time by means of the rendering model, thereby achieving the effects of clothes moving with the person and outfit recommendation, and solving the problem in conventional virtual fitting methods that the fitting effect cannot be displayed in real time.
PCT/CN2022/128392 2022-01-17 2022-10-28 Display device, and virtual fitting system and method WO2023134269A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210051018.9 2022-01-17
CN202210051018.9A CN116523579A (zh) 2022-01-17 2022-01-17 Display device, and virtual fitting system and method

Publications (1)

Publication Number Publication Date
WO2023134269A1 true WO2023134269A1 (fr) 2023-07-20

Family

ID=87280075

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/128392 WO2023134269A1 (fr) 2022-01-17 2022-10-28 Display device, and virtual fitting system and method

Country Status (2)

Country Link
CN (1) CN116523579A (fr)
WO (1) WO2023134269A1 (fr)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116880948A (zh) * 2023-09-07 2023-10-13 深圳星坊科技有限公司 Jewelry virtual try-on display method and apparatus, computer device, and storage medium


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1074944A2 * 1999-07-12 2001-02-07 Styleclick.Com Inc. Method and apparatus for combining and viewing clothing items
CN102156810A * 2011-03-30 2011-08-17 北京触角科技有限公司 Augmented reality real-time virtual fitting system and method
CN102298797A * 2011-08-31 2011-12-28 深圳市美丽同盟科技有限公司 Three-dimensional virtual fitting method, apparatus and system
CN105825407A * 2016-03-31 2016-08-03 上海晋荣智能科技有限公司 Virtual fitting mirror system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117635883A (zh) * 2023-11-28 2024-03-01 广州恒沙数字科技有限公司 Virtual fitting generation method and system based on human skeleton posture
CN117635883B (zh) * 2023-11-28 2024-05-24 广州恒沙数字科技有限公司 Virtual fitting generation method and system based on human skeleton posture
CN117649283A (zh) * 2023-12-14 2024-03-05 杭州抽象派数字科技有限公司 Virtual fitting system and virtual fitting method
CN117649283B (zh) * 2023-12-14 2024-05-14 杭州抽象派数字科技有限公司 Virtual fitting system and virtual fitting method

Also Published As

Publication number Publication date
CN116523579A (zh) 2023-08-01

Similar Documents

Publication Publication Date Title
WO2023134269A1 (fr) Display device, and virtual fitting system and method
US20210177124A1 (en) Information processing apparatus, information processing method, and computer-readable storage medium
US20240193833A1 (en) System and method for digital makeup mirror
US11798201B2 (en) Mirroring device with whole-body outfits
US10109315B2 (en) Devices, systems and methods for auto-delay video presentation
RU2668408C2 (ru) Устройства, системы и способы виртуализации зеркала
US9098873B2 (en) Motion-based interactive shopping environment
JP3984191B2 (ja) Virtual makeup device and method therefor
EP3243331B1 (fr) Dispositifs, systèmes et procédés de présentation de vidéo à retard automatique
JP4435809B2 (ja) Virtual makeup device and method therefor
JP2019510297A (ja) Virtual try-on onto a user's authentic human body model
US11900506B2 (en) Controlling interactive fashion based on facial expressions
CN111199583B (zh) 一种虚拟内容显示方法、装置、终端设备及存储介质
US20240013463A1 (en) Applying animated 3d avatar in ar experiences
Chen et al. 3D face reconstruction and gaze tracking in the HMD for virtual interaction
CN117292097B (zh) Ar试穿互动体验方法及系统
Sénécal et al. Modelling life through time: cultural heritage case studies
Woodward et al. An interactive 3D video system for human facial reconstruction and expression modeling
US20240020901A1 (en) Method and application for animating computer generated images
US20240290043A1 (en) Real-time fashion item transfer system
Dhage et al. 3D Virtual Dressing Room Application
Shinkar et al. A Real Time Virtual Dressing Room Application using Opencv
Tharaka Real time virtual fitting room with fast rendering
Shi et al. CG Benefited Driver Facial Landmark Localization Across Large Rotation
CN113781614A (zh) AR Hanfu outfit changing method

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22919903

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE