WO2024018431A1 - A smart system for producing realistic visual motion based apparel experience - Google Patents

A smart system for producing realistic visual motion based apparel experience

Info

Publication number
WO2024018431A1
Authority
WO
WIPO (PCT)
Prior art keywords
experience
apparel
motion based
visual motion
realistic visual
Prior art date
Application number
PCT/IB2023/057456
Other languages
French (fr)
Inventor
Devansh SHARMA
Original Assignee
Sharma Devansh
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sharma Devansh filed Critical Sharma Devansh
Publication of WO2024018431A1 publication Critical patent/WO2024018431A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0641Shopping interfaces
    • G06Q30/0643Graphical representation of items or shoppers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0631Item recommendations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion

Abstract

The present invention provides a smart system (100) for producing realistic visual motion based apparel experience that comprises of a search engine module (1), a processor module (2), a plurality of cameras (3), an electronic display (4), wherein said search engine module (1) allows a user to search and choose one or more apparel from a plurality of apparels available in a store, said cameras (3) are for capturing real time image/video of a user and said captured image/video are transferred to the processor module (2), said processor module (2) includes an image processing module that analyses said real time image/video for extracting a set of digital body data and simulating fitting of said chosen apparel on said digital body data, said processor module (2) further includes a video processor that produces a realistic visual motion based video.

Description

"A SMART SYSTEM FOR PRODUCING REALISTIC VISUAL MOTION BASED APPAREL EXPERIENCE"
FIELD OF THE INVENTION
The field of the invention relates to a smart system for producing realistic visual motion based apparel experience. More particularly, the present invention is directed towards a smart system for providing a realistic visual apparel (such as top, bottom, accessories) experience to users with the help of artificial intelligence technology.
BACKGROUND OF THE INVENTION
Indeed, the advent of e-commerce websites revolutionized the way people shop for clothing and accessories. While online shopping offers convenience and flexibility, the inability to try on clothes before purchasing has been a significant drawback for many consumers. This limitation led to a high number of returns, causing losses for suppliers.
To address this issue, e-commerce websites introduced various schemes to enable virtual try-ons. However, these attempts were not entirely successful. One solution that emerged was the development of smart fitting mirrors, which provided 3D images and a 360-degree rotation of the person trying on the clothes. While these mirrors improved the visualization aspect, they did not address the physical fit and feel of the garments.
Moreover, the COVID-19 pandemic and previous outbreaks like Ebola raised additional concerns about using shared trial rooms in stores. The risk of transmission in such environments made it advisable to avoid their use. Additionally, the shortage of in-store trial rooms often resulted in long lines and wasted time for shoppers.
Given these challenges, the industry continues to explore innovative solutions to enhance the online shopping experience for clothing and accessories. Technologies like augmented reality (AR) and virtual reality (VR) are being leveraged to create immersive try-on experiences. These technologies allow customers to virtually see how garments look on their bodies and even simulate the sensation of wearing them. AI-powered systems and methods have been introduced that analyze body measurements and fabric properties to provide accurate recommendations for sizing and fitting. These solutions aim to reduce the need for physical trials by enabling customers to make more informed purchasing decisions.
However, despite these advancements, there will always be some limitations to online clothing shopping compared to in-store experiences. Factors such as personal preferences, individual body shapes, and varying fabric qualities still pose challenges in achieving a perfect fit and feel without physical trials. Nevertheless, ongoing technological advancements aim to bridge this gap and provide a more satisfactory online shopping experience for customers.
Consequently, the aforesaid problems were partly addressed by introducing smart mirror systems connected with the mirrors installed in the trial rooms of stores. Many smart mirrors have been introduced, but their installation requires a lot of floor area and smart mirrors are somewhat expensive; therefore, they are not installed by small in-store sellers.
Moreover, the current smart mirrors are not able to produce a completely realistic representation of how clothing looks on an individual. While they provide 3D images and a 360-degree view, there are still limitations in accurately capturing the fit, texture, and overall appearance of the garments.
US6546309B1 discloses a method of manual fashion shopping and a method for electronic fashion shopping by a customer using a programmed computer, CD-ROM, television, Internet or other electronic medium such as video. The method comprises receiving personal information from the customer, selecting a body type and fashion category based on the personal information, selecting fashions from a plurality of clothes items based on the body type and fashion category, outputting a plurality of fashion data based on the selected fashions, and receiving selection information from the customer. The main drawback of this invention is the use of a digitized image of the customer's face, which does not give customers clarity about the products, and the invention does not provide an immediate trial facility to the customers to get instant results, thereby resulting in a time-consuming method.
CN103324855A discloses a projection intelligent clothes fitting cabinet which comprises a cabinet, image acquisition equipment, a data processor, a data memory, a projector and a touch display screen. The projection intelligent clothes fitting cabinet has the advantage that a user selects preferred clothes via the touch display screen, the clothes are effectively and accurately matched with the figure of the user, and a clothes fitting effect is displayed on the touch display screen in real time, so that the trouble of undressing and dressing when the user buys clothes in the traditional manner is avoided, and a large amount of time and energy is saved for the user. However, this invention fails to provide a portable system that provides a realistic visual motion based experience to the user.
Therefore, there exists a need for a smart system for producing realistic visual motion based apparel experience using artificial intelligence technology that overcomes the aforesaid disadvantages by providing a convenient, fast, accurate, cost-friendly and user-friendly system with a video-generation feature to provide a more realistic experience to the customers.
OBJECT OF THE INVENTION
The main object of the present invention is to provide a smart system for producing realistic visual motion based apparel experience using artificial intelligence technology.
Another object of the present invention is to provide a smart system for producing realistic visual motion based apparel experience that is convenient, fast, accurate and economic.
Yet another object of the present invention is to provide a smart system for producing realistic visual motion based apparel (such as top, bottom, accessories) experience that reduces the time and effort consumed in trial rooms.
Yet another object of the present invention is to provide a smart system for producing realistic visual motion based apparel experience that is contactless and ensures health safety.
Yet another object of the present invention is to provide a smart system for producing realistic visual motion based apparel experience by utilizing parameters like body shape type, height, width, colour, skin texture, skin patterns, hair patterns and colour of nerves.
Still another object of the present invention is to provide a smart system for producing realistic visual motion based apparel (such as top, bottom, accessories) experience which helps to maintain the authenticity of the products.
SUMMARY OF THE INVENTION
The present invention relates to a smart system for producing realistic visual motion based apparel (such as top, bottom, accessories) experience that reduces the time and effort consumed in trial rooms and helps to maintain the authenticity of the products.
In an embodiment, the present invention provides a smart system for producing realistic visual motion based apparel (such as top, bottom, accessories) experience that comprises of a search engine module, a processor module, a plurality of cameras, and an electronic display, wherein said search engine module allows a user to search and choose one or more apparel (such as top, bottom, accessories) from a plurality of apparels available in a store by providing a set of input instructions via an input module that is connected with said search engine module, said plurality of cameras are for capturing real time image/video of a user and said captured image/video are transferred to the processor module, said processor module includes an image processor module that analyzes said real time image/video for extracting a set of digital body data and simulating fitting of said chosen apparel on said digital body data, said processor module further includes a video processor that produces a realistic visual motion based video from said simulated fitting and digital body data, thereby providing a realistic visual motion based apparel experience to said user on said electronic display.
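To make the arrangement of modules easier to follow, a minimal software sketch of the data flow described in this embodiment is given below. All class, field and method names (SearchEngineModule, ImageProcessorModule, VideoProcessorModule, DigitalBodyData and so on) are illustrative assumptions; the invention does not prescribe any particular programming interface, and the placeholder bodies merely indicate where real computer-vision and rendering logic would sit.

```python
# Illustrative wiring of the modules of the smart system (100).
# All names below are hypothetical; the disclosure does not prescribe
# a concrete software interface.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Apparel:
    name: str
    size: str
    color: str
    category: str        # e.g. "top", "bottom", "accessory"


@dataclass
class DigitalBodyData:
    height_cm: float
    width_cm: float
    skin_tone: str


class SearchEngineModule:
    """Search engine module (1): filters the store catalogue by input instructions."""

    def __init__(self, catalogue: List[Apparel]):
        self.catalogue = catalogue

    def search(self, instructions: Dict[str, str]) -> List[Apparel]:
        # Keep only items whose attributes match every provided instruction.
        return [item for item in self.catalogue
                if all(getattr(item, key, None) == value
                       for key, value in instructions.items())]


class ImageProcessorModule:
    """Image processor: extracts digital body data from a captured frame."""

    def extract_body_data(self, frame) -> DigitalBodyData:
        # Placeholder values; a real system would run computer-vision models here.
        return DigitalBodyData(height_cm=170.0, width_cm=45.0, skin_tone="medium")


class VideoProcessorModule:
    """Video processor: renders the short motion-based preview clip."""

    def render_motion_video(self, body: DigitalBodyData, apparel: Apparel) -> str:
        # Placeholder; returns the path of the rendered 3-5 second clip.
        return "fitting_preview.mp4"
```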
In another embodiment, the present invention provides a smart system for producing realistic visual motion based apparel experience that comprises of a set of sensors for measuring said digital body data accurately.
The above objects and advantages of the present invention will become apparent from the hereinafter set forth brief description of the drawings, detailed description of the invention, and claims appended herewith.
BRIEF DESCRIPTION OF THE DRAWINGS
An understanding of the smart system for producing realistic visual motion based apparel experience of the present invention may be obtained by reference to the following drawings:
Figure 1 is a block diagram of a smart system for producing realistic visual motion based apparel experience according to an embodiment of the present invention.
Figure 2 is a pictorial diagram of a smart system for producing realistic visual motion based apparel experience according to an embodiment of the present invention.
DESCRIPTION OF THE INVENTION
The present invention will now be described hereinafter with reference to the accompanying drawings in which a preferred embodiment of the invention is shown. This invention may, however, be embodied in many different forms and should not be construed as being limited to the embodiment set forth herein. Rather, the embodiment is provided so that this disclosure will be thorough, and will fully convey the scope of the invention to those skilled in the art.
Many aspects of the invention can be better understood with reference to the drawings below. The components in the drawings are not necessarily drawn to scale. Instead, emphasis is placed upon clearly illustrating the components of the present invention. Moreover, like reference numerals designate corresponding parts throughout the several views in the drawings. Before explaining at least one embodiment of the invention, it is to be understood that the embodiments of the invention are not limited in their application to the details of construction and to the arrangement of the components set forth in the following description or illustrated in the drawings. The embodiments of the invention are capable of being practiced and carried out in various ways. In addition, the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting.
In an embodiment, the present invention provides a smart system for producing realistic visual motion based apparel experience that comprises of a search engine module, a processor module, a plurality of cameras, and an electronic display, wherein said search engine module allows a user to search and choose one or more apparel from a plurality of apparels available in a store by providing a set of input instructions via an input module that is connected with said search engine module, said plurality of cameras are for capturing real time image/video of a user and said captured image/video are transferred to the processor module, said processor module includes an image processor module that analyzes said real time image/video for extracting a set of digital body data and simulating fitting of said chosen apparel on said digital body data, said processor module further includes a video processor that produces a realistic visual motion based video from said simulated fitting and digital body data, thereby providing a realistic visual motion based apparel experience to said user.
In another embodiment, the present invention provides a smart system for producing realistic visual motion based apparel (such as top, bottom, accessories) experience that comprises of a set of sensors for measuring said digital body data accurately.
Referring to Figure 1, there is shown a block diagram of the smart system (100) for producing realistic visual motion based apparel experience. The system (100) comprises of a search engine module (1), a processor module (2), a plurality of cameras (3) and an electronic display (4). The search engine module (1) allows a user to search and choose one or more apparel from a plurality of apparels available in a store by providing a set of input instructions that include, but are not limited to, cloth size, color, type, style and brand via an input module that is connected with said search engine module (1). The input module used herein includes, but is not limited to, a mobile phone, keyboard, and mouse.
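Purely for illustration, the set of input instructions described above (cloth size, color, type, style and brand) could be carried as a simple key-value structure such as the hypothetical sketch below; the field names and the validation step are assumptions made for the example, not part of the disclosure.

```python
# Hypothetical representation of the set of input instructions; the field
# names below are assumptions chosen to mirror the examples in the text.
input_instructions = {
    "size": "M",
    "color": "navy blue",
    "type": "top",
    "style": "casual",
    "brand": "ExampleBrand",   # placeholder brand name
}

# Simple guard that only supported fields are forwarded to the search engine.
SUPPORTED_FIELDS = {"size", "color", "type", "style", "brand"}
unknown = set(input_instructions) - SUPPORTED_FIELDS
if unknown:
    raise ValueError(f"Unsupported search fields: {sorted(unknown)}")
```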
The cameras are for capturing real time image/video of a user, and the captured image/video are transferred to the processor module. The processor module (2) includes an image processor module that analyzes said real time image/video for extracting a set of digital body data and simulating fitting of said chosen apparel on said digital body data; said processor module further includes a video processor that produces a realistic visual motion based video from said simulated fitting and digital body data, thereby providing a realistic visual motion based apparel experience to said user. The processor module (2) works on the basis of a set of instructions that are pre-stored in the memory unit of the processor module. The processor module (2) used herein refers to, for example, an Arduino Uno or a Raspberry Pi. The search engine module (1) analyses said input instructions through machine learning techniques. The search engine module (1) utilizes one or more advanced machine learning techniques to thoroughly analyze the input instructions provided by the user. By leveraging the capabilities of artificial intelligence, the search engine module (1) is capable of comprehensively understanding the context, intent, and nuances embedded within the instructions. Through the application of machine learning techniques, the search engine module (1) continually learns and improves its ability to interpret and extract meaningful information from the input instructions. The machine learning techniques employ various natural language processing (NLP) algorithms to identify key words, phrases, and patterns, allowing for a more accurate and efficient retrieval of relevant results.
The machine learning techniques employed by the search engine module (1) enable it to adapt to different types of instructions, even those with varying structures or languages. It handles complex queries, understands synonyms, and recognizes intent even when the instructions are ambiguous or incomplete. By harnessing the power of machine learning, the search engine module (1) delivers highly precise and relevant search results. It continuously refines its techniques based on user feedback and data analysis, ensuring an enhanced user experience and improved search performance. The search engine module's (1) utilization of machine learning techniques empowers it to effectively analyze input instructions, enabling accurate understanding and retrieval of the most pertinent information.
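The following sketch illustrates, under simplifying assumptions, how free-text instructions could be reduced to search filters. The hand-written synonym and vocabulary tables and the parse_query helper are invented for the example; an actual implementation of the search engine module (1) would rely on trained NLP models rather than hard-coded lists.

```python
# A minimal sketch of keyword/synonym matching for free-text instructions,
# assuming a tiny hand-written vocabulary; a production system would use
# trained NLP models instead of these hard-coded tables.
import re

SYNONYMS = {
    "tee": "t-shirt", "tshirt": "t-shirt",
    "crimson": "red", "scarlet": "red",
    "trousers": "pants",
}
KNOWN_COLORS = {"red", "blue", "black", "white"}
KNOWN_TYPES = {"t-shirt", "pants", "jacket", "dress"}

def parse_query(text: str) -> dict:
    """Turn e.g. 'show me a crimson tee in medium' into search filters."""
    tokens = [SYNONYMS.get(t, t) for t in re.findall(r"[a-z\-]+", text.lower())]
    filters = {}
    for tok in tokens:
        if tok in KNOWN_COLORS:
            filters["color"] = tok
        elif tok in KNOWN_TYPES:
            filters["type"] = tok
        elif tok in {"small", "medium", "large"}:
            filters["size"] = tok[0].upper()
    return filters

print(parse_query("Show me a crimson tee in medium"))
# -> {'color': 'red', 'type': 't-shirt', 'size': 'M'}
```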
The input module includes, but is not limited to, a keyboard, speaker, and mobile phone, which serve as essential means for users to interact with the system and provide input in various forms.
The keyboard, as a primary input device, enables users to input textual information and commands. It allows for the direct entry of characters and facilitates efficient communication between the user and the system.
In addition to the keyboard, the input module incorporates speakers, which play a crucial role in audio-based input. The input module enables the system to receive spoken instructions or interact with the user through auditory feedback. This capability is particularly useful in scenarios where hands-free operation or voice recognition is desired. Furthermore, mobile phones serve as versatile input devices within the input module. With their touchscreens, cameras, and built-in sensors, mobile phones offer a wide array of input options. The user interacts with the system by tapping, swiping, or using gesture-based commands on the phone's display. Additionally, the phone's camera captures visual input, while sensors like accelerometers and GPS allow for context-aware input.
The input module is not limited to the aforementioned devices but encompasses other input devices as well. For instance, the input module further includes mice, trackpads, stylus pens, microphones, and more, depending on the specific system requirements and user preferences.
By incorporating various input devices within the input module, users have the flexibility to choose the most suitable means of interaction based on their preferences and the nature of the task at hand. The input module ensures compatibility with different input devices, enhancing usability and accommodating diverse user needs.
The cameras include, but are not limited to, RGB cameras, depth cameras, and 3D scanning cameras. The RGB cameras, or color cameras, are used for capturing images and videos. They detect and record the visible spectrum of light, allowing for the reproduction of scenes in full color. The RGB cameras are essential for capturing high-resolution visuals and are widely employed in applications such as image recognition, video conferencing, and augmented reality.
Depth cameras, on the other hand, provide an additional dimension of information by capturing the depth or distance of objects in the scene. This is achieved through various depth-sensing technologies such as time-of-flight (ToF) or structured light. The depth cameras enable the system to perceive the three-dimensional structure of the environment and are particularly useful in applications such as gesture recognition, 3D mapping, and object tracking.
In addition to RGB and depth cameras, the system incorporates 3D scanning cameras. These specialized cameras are designed to capture highly detailed three-dimensional representations of objects or scenes.
The set of digital body data includes, but is not limited to, body shape type, height, width, colour, skin texture, skin patterns, hair patterns and colour of nerves. The set of digital body data is utilized for fitting simulation through a computer vision technique. Further, the utilization of digital body data brings several advantages to fitting simulations through computer vision techniques; these include the ability to offer accurate virtual try-on experiences, where customers can visualize how clothing fits and looks on their body types. The digital body data also enables personalized recommendations, tailoring clothing suggestions based on individual body characteristics to enhance the shopping experience. Moreover, the availability of this data facilitates improved customization options, allowing customers to personalize clothing items to their preferences and measurements. Overall, leveraging digital body data in fitting simulations enhances accuracy, personalization, and accessibility, contributing to a more satisfying and informed online shopping experience.
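As a rough, hedged illustration of how a small part of the digital body data might be derived from a single camera frame, the sketch below estimates the user's overall height and width from a segmented silhouette. It assumes a plain light background and a known pixel-to-centimetre scale; a deployed system would instead use pose estimation, depth data or 3D scanning as described above.

```python
# A rough sketch of deriving coarse body measurements from a single
# camera frame, assuming a plain light background and a known camera
# scale (pixels per centimetre). This only illustrates the idea.
import cv2
import numpy as np

def estimate_body_extent(frame_bgr: np.ndarray, px_per_cm: float) -> dict:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Segment the person from a light background (assumption).
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return {}
    body = max(contours, key=cv2.contourArea)      # largest blob = person
    x, y, w, h = cv2.boundingRect(body)
    return {
        "height_cm": h / px_per_cm,
        "width_cm": w / px_per_cm,
    }
```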
The smart system (100) comprises of a recommendation module for suggesting one or more apparels to the user based on said digital body data and fitting simulation. The smart system (100) incorporates a recommendation module that utilizes the digital body data to provide personalized suggestions of one or more apparel options to the user. The recommendation module leverages the analysis of the user's body shape type, height, width, color, skin texture, skin patterns, hair patterns, and color of nerves to generate tailored recommendations.
The recommendation module takes into account the user's body characteristics and preferences to suggest clothing items that are likely to fit well and suit their style. By analysing the digital body data, the module matches the user's measurements and attributes with the available apparel options within its database.
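A minimal sketch of such matching is shown below, assuming an invented size chart and a simple distance-based score; the disclosure does not specify the actual recommendation algorithm.

```python
# Illustrative recommendation scoring: rank catalogue items by how closely
# their nominal size matches the measured body data. The size chart and
# thresholds are invented for this example.
SIZE_CHART_CM = {"S": 88, "M": 96, "L": 104, "XL": 112}   # chest circumference

def recommend(catalogue, chest_cm, top_n=3):
    def score(item):
        nominal = SIZE_CHART_CM.get(item["size"], 100)
        return abs(nominal - chest_cm)          # smaller gap = better fit
    return sorted(catalogue, key=score)[:top_n]

catalogue = [
    {"name": "Linen shirt", "size": "M"},
    {"name": "Denim jacket", "size": "L"},
    {"name": "Cotton tee", "size": "S"},
]
print(recommend(catalogue, chest_cm=95))
# Items closest to a 95 cm chest come first (the size-M shirt here).
```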
Additionally, the smart system (100) incorporates a fitting simulation feature that allows the user to visualize how the recommended apparel looks on their digital representation. Through computer vision techniques, the system simulates the fitting of the recommended clothing items on a virtual avatar or the user's personalized digital representation.
The fitting simulation takes into consideration the user's body shape, measurements, and other digital body data to ensure a realistic representation of how the apparel fits and looks on the user. This simulation provides the user with a visual understanding of how the recommended clothing items appear on their body, helping them make informed decisions before making a purchase. By combining the recommendation module with fitting simulation capabilities, the smart system (100) enhances the user's shopping experience by providing personalized suggestions and a virtual fitting experience based on their unique digital body data.
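As a simplified illustration of the fitting visualization step, the sketch below composites a garment image with an alpha channel over an assumed torso region of the captured frame. The torso coordinates are presumed to come from the body-data stage, and a real fitting simulation would additionally model cloth deformation, which is not shown here.

```python
# A very simplified fitting visualisation: resize a garment image with an
# alpha channel and composite it over the detected torso region of the
# user's frame. The torso box is assumed to come from the body-data step.
import cv2
import numpy as np

def overlay_garment(frame_bgr, garment_bgra, torso_box):
    """torso_box = (x, y, w, h) of the region the garment should cover."""
    x, y, w, h = torso_box
    garment = cv2.resize(garment_bgra, (w, h))
    alpha = garment[:, :, 3:4].astype(np.float32) / 255.0   # per-pixel opacity
    roi = frame_bgr[y:y + h, x:x + w].astype(np.float32)
    blended = alpha * garment[:, :, :3].astype(np.float32) + (1.0 - alpha) * roi
    frame_bgr[y:y + h, x:x + w] = blended.astype(np.uint8)
    return frame_bgr
```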
The electronic display (4) utilized in the smart system (100) includes devices, such as smart mirrors and mobile phones, which serve as platforms for presenting information and visual content to the user.
The smart mirrors, with the integrated electronic display (4), provide an interactive interface for users to access and interact with the smart system's (100) features. The smart mirrors offer a reflective surface that transforms into a digital display, providing a seamless blend of traditional mirror functionality and modern technology.
The mobile phones, on the other hand, serve as portable electronic devices with vibrant displays that allow users to access the smart system's (100) functionalities on the go. These devices provide a compact and convenient platform for users to engage with the system's features and receive real-time information.
The search engine module (1) within the smart system (100) operates using natural language processing (NLP) techniques. NLP enables the system to understand and interpret user queries and instructions expressed in natural language, making the smart system (100) easy to use. By analyzing the structure, context, and semantics of user input, the search engine module (1) retrieves relevant information and generates appropriate responses or actions.
Furthermore, the smart system (100) is capable of producing realistic visual motion-based videos with durations ranging from 3 to 5 seconds. This feature allows for the creation of short, impactful video content that is used for various purposes, such as advertisements, presentations, or user interface animations. The realistic visual motion in these videos adds a dynamic and engaging element to the smart system's (100) user experience.
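A hedged sketch of assembling rendered frames into such a short clip is shown below, using OpenCV's video writer; the codec, resolution, frame rate and file name are arbitrary example values rather than values taken from the disclosure.

```python
# Sketch of assembling rendered frames into the short (3-5 s) preview clip
# described above, using OpenCV's video writer.
import cv2
import numpy as np

FPS, DURATION_S, SIZE = 25, 4, (720, 1280)           # 4-second portrait clip
writer = cv2.VideoWriter("fitting_preview.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), FPS, SIZE)
for i in range(FPS * DURATION_S):
    # Placeholder frame; a real system would write the simulated fitting
    # frames produced by the video processor.
    frame = np.zeros((SIZE[1], SIZE[0], 3), dtype=np.uint8)
    cv2.putText(frame, f"frame {i}", (40, 80),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
    writer.write(frame)
writer.release()
```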
EXAMPLE 1
Working of the invention
The smart system (100) for producing a realistic visual motion-based apparel experience operates through a series of interconnected processes to provide users with an immersive virtual fitting experience. The user interacts with the system via an input module, conveying their preferences and instructions in natural language. The search engine module (1) employs machine learning techniques to analyze the input and search for suitable apparel options from a virtual store as depicted in Figure 2.
Real-time images or videos of the user are captured by a set of cameras and processed by the image processor module within the processor module (2). Using computer vision algorithms, the module extracts a comprehensive set of digital body data, including body shape type, height, width, color, skin texture, skin patterns, hair patterns, and color of nerves.
Based on this digital body data, the system simulates the fitting of the chosen apparel on the user's virtual representation. The computer vision techniques render a realistic visualization of how the apparel looks and fits on the user, considering their body characteristics and measurements.
The processor module (2) includes a video processor that generates visual motion-based videos, integrating the simulated fitting and digital body data. These videos, typically lasting 3 to 5 seconds, provide a dynamic and immersive apparel experience.
The generated visual motion-based videos are presented on an electronic display (4), such as a smart mirror or a mobile phone. Users can view and evaluate how the recommended apparel looks on them, enhancing their shopping experience. The system's integration of machine learning, computer vision, and natural language processing techniques delivers an interactive and visually compelling virtual fitting experience, enabling users to explore and visualize apparel options with accuracy and realism.
EXAMPLE 2
Experiment Details
The present invention was tested in 5 test cases and the realistic visual motion based video was produced within 5 seconds, as depicted in Table 1 below.
Table 1: Experimental results
[Table 1 (provided as an image in the original filing) lists the measured duration of the realistic visual motion based video for each of the five test cases.]
These data points represent the duration of individual videos, ranging from 3 to 5 seconds, showcasing realistic visual motion. Furthermore, the production of the realistic visual motion based video within 5 seconds starts with the step of capturing the essential images and videos, which are then processed using an advanced stitching protocol. The advanced stitching protocol seamlessly combines the different camera perspectives, correcting any distortions and ensuring a cohesive and continuous viewing experience. Additionally, to handle the demanding computational requirements, high-performance CPUs or GPUs are utilized, as their high processing power allows swift stitching and rendering of the video. Further, a data storage unit is configured within the system to accommodate the substantial amount of data generated throughout the production process.
Once the stitching process is complete, further post-processing steps are undertaken, which involves employing software tools for applying post-effects, performing color correction, adding audio, and encoding the video into a suitable format for various platforms. The hardware setup varies based on factors such as the desired resolution, frame rate, and complexity of the video. Higher resolutions and frame rates demand more robust hardware configurations to achieve real-time processing and optimal results.
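The disclosure refers to an advanced stitching protocol without naming a specific algorithm; as a stand-in for illustration, the sketch below uses OpenCV's general-purpose Stitcher to merge overlapping camera views into a single frame. The image file names are hypothetical.

```python
# Stand-in for the stitching step: OpenCV's high-level Stitcher merges
# overlapping views from several cameras into one composite frame.
import cv2

views = [cv2.imread(path) for path in ("cam_front.jpg", "cam_left.jpg", "cam_right.jpg")]
if any(v is None for v in views):
    raise FileNotFoundError("One or more camera images could not be read")

stitcher = cv2.Stitcher_create()          # panorama mode by default
status, stitched = stitcher.stitch(views)
if status == 0:                           # 0 == cv2.Stitcher_OK (success)
    cv2.imwrite("stitched_view.jpg", stitched)
else:
    print(f"Stitching failed with status code {status}")
```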
Therefore, the present invention provides a smart system for producing realistic visual motion based apparel experience using artificial intelligence technology that is convenient, fast, accurate and economic, reduces the time and effort consumed in trial rooms, and helps to maintain the authenticity of the products.
Many modifications and other embodiments of the invention set forth herein will readily occur to one skilled in the art to which the invention pertains, having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the invention is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.

Claims

CLAIMS:
1. A smart system (100) for producing realistic visual motion based apparel experience, comprising: a search engine module (1); a processor module (2); a plurality of cameras (3); an electronic display (4); wherein said search engine module (1) allows a user to search and choose one or more apparel from a plurality of apparels available in a store by providing a set of input instructions via an input module that is connected with said search engine module (1); said plurality of cameras (3) are for capturing real time image/video of a user and said captured image/video are transferred to the processor module (2); said processor module (2) includes an image processing module that analyses said real time image/video for extracting a set of digital body data and simulating fitting of said chosen apparel on said digital body data; said processor module (2) includes a video processor that produces a realistic visual motion based video from said simulated fitting and the digital body data via a protocol, thereby providing realistic visual motion based apparel experience to said user on said electronic display (4).
2. The smart system (100) for producing realistic visual motion based apparel experience as claimed in claim 1, wherein said search engine module (1) analyses said input instructions through one or more machine learning techniques.
3. The smart system (100) for producing realistic visual motion based apparel experience as claimed in claim 1, wherein said input module includes but not limited to keyboard, speaker, mobile phone.
4. The smart system (100) for producing realistic visual motion based apparel experience as claimed in claim 1, wherein said plurality of cameras (3) include but not limited to RGB cameras, depth cameras, 3D scanning cameras.
5. The smart system (100) for producing realistic visual motion based apparel experience as claimed in claim 1, wherein said set of digital body data include but not limited to body shape type, height, width, colour, skin texture, skin patterns, hair patterns, colour of nerves.
6. The smart system (100) for producing realistic visual motion based apparel experience as claimed in claim 1, wherein said set of digital body data is utilized for fitting simulation through a computer vision technique.
7. The smart system (100) for producing realistic visual motion based apparel experience as claimed in claim 1, wherein said smart system (100) comprise of a recommendation module for suggesting one or more apparels to the user based on said digital body data and simulating fitting.
8. The smart system (100) for producing realistic visual motion based apparel experience as claimed in claim 1, wherein said electronic display (4) includes but not limited to smart mirror, mobile phone.
9. The smart system (100) for producing realistic visual motion based apparel experience as claimed in claim 1, wherein said search engine module (1) operates through a natural language processing technique, thereby making the smart system (100) easy to use.
10. The smart system (100) for producing realistic visual motion based apparel experience as claimed in claim 1, wherein said realistic visual motion based video is produced in range from 3 seconds to 5 seconds.
PCT/IB2023/057456 2022-07-21 2023-07-21 A smart system for producing realistic visual motion based apparel experience WO2024018431A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202211041926 2022-07-21
IN202211041926 2022-07-21

Publications (1)

Publication Number Publication Date
WO2024018431A1 (en) 2024-01-25

Family

ID=89617326

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2023/057456 WO2024018431A1 (en) 2022-07-21 2023-07-21 A smart system for producing realistic visual motion based apparel experience

Country Status (1)

Country Link
WO (1) WO2024018431A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2734143C (en) * 2008-08-15 2021-08-31 Brown University Method and apparatus for estimating body shape
KR20100079486A (en) * 2008-12-31 2010-07-08 동국대학교 산학협력단 Virtual fitting system using 3-dimentions avatar in web
EP2686834A2 (en) * 2011-03-14 2014-01-22 Belcurves S.a.r.l. Improved virtual try on simulation service
CN109285057A (en) * 2018-09-29 2019-01-29 深圳市丰巢科技有限公司 Applied to the dress ornament acquisition methods of express delivery cabinet, device, express delivery cabinet and storage medium

Similar Documents

Publication Publication Date Title
US11164381B2 (en) Clothing model generation and display system
AU2017240823B2 (en) Virtual reality platform for retail environment simulation
Pachoulakis et al. Augmented reality platforms for virtual fitting rooms
KR100722229B1 (en) Apparatus and method for immediately creating and controlling virtual reality interaction human model for user centric interface
WO2016097732A1 (en) Methods for generating a 3d virtual body model of a person combined with a 3d garment image, and related devices, systems and computer program products
CN106127552B (en) Virtual scene display method, device and system
CN111724231A (en) Commodity information display method and device
Masri et al. Virtual dressing room application
Jin et al. Application of VR technology in jewelry display
CN111598996A (en) Article 3D model display method and system based on AR technology
Wen et al. A survey of facial capture for virtual reality
CN108629824B (en) Image generation method and device, electronic equipment and computer readable medium
Tanmay et al. Augmented reality based recommendations based on perceptual shape style compatibility with objects in the viewpoint and color compatibility with the background
CN114779948B (en) Method, device and equipment for controlling instant interaction of animation characters based on facial recognition
WO2024018431A1 (en) A smart system for producing realistic visual motion based apparel experience
Manfredi et al. TryItOn: a virtual dressing room with motion tracking and physically based garment simulation
Narducci et al. Enabling consistent hand-based interaction in mixed reality by occlusions handling
Chu et al. A cloud service framework for virtual try-on of footwear in augmented reality
Hsu et al. HoloTabletop: an anamorphic illusion interactive holographic-like tabletop system
Jácome et al. Parallax Engine: Head Controlled Motion Parallax Using Notebooks’ RGB Camera
Clement et al. GENERATING DYNAMIC EMOTIVE ANIMATIONS FOR AUGMENTED REALITY
CN117292097B (en) AR try-on interactive experience method and system
Araki et al. Follow-the-trial-fitter: real-time dressing without undressing
Patel et al. Immersive Interior Design: Exploring Enhanced Visualization Through Augmented Reality Technologies
Venkata et al. AI-Enhanced Digital Mirrors: Empowering Women's Safety and Shopping Experiences

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23842553

Country of ref document: EP

Kind code of ref document: A1