WO2020060012A1 - A computer implemented platform for providing contents to an augmented reality device and method thereof - Google Patents
- Publication number
- WO2020060012A1 (PCT application No. PCT/KR2019/008222)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- module
- users
- product
- interest information
- user
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/258—Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
- H04N21/25808—Management of client data
- H04N21/25841—Management of client data involving the geographical location of the client
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/768—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using context analysis, e.g. recognition aided by known co-occurring patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/251—Learning process for intelligent management, e.g. learning user preferences for recommending movies
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/258—Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
- H04N21/25866—Management of end-user data
- H04N21/25891—Management of end-user data being end-user preferences
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/414—Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
- H04N21/41407—Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/4316—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/47815—Electronic shopping
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/812—Monomedia components thereof involving advertisement data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
Definitions
- the present invention relates generally to image processing, and, particularly but not exclusively, to a platform and method for providing contents to an augmented reality device.
- Augmented reality is the integration of digital information with the user's environment, which uses capabilities of a computer-generated display, audio, and texts to enhance a user's real-world experience.
- the augmented reality (AR) adds spatially aligned virtual objects, such as three-dimensional models, two-dimensional textures, and the like, to the user's environment.
- the retail sector has constantly been evolving to adapt to changing buying patterns, moving from the age of tele-shopping commercials to the rise of online shopping portals.
- the retail sector has adopted the augmented reality (AR) shopping experience trend, which allows customers to view the colors, sizes, and dimensions of various personal and lifestyle products, such as home furnishings, clothes, etc.
- US20140225924 discloses a system for determining trigger items in augmented reality environments.
- the system generates an augmented reality scenario associated with a trigger item, which is detected from one or more frames.
- an augmented reality content item is generated by another user using another computing device.
- US20130050258 discloses a see-through head-mounted display (HMD) device that provides an augmented reality image which is associated with a real-world object, such as a picture frame, wall or billboard.
- the location and visual characteristics of the objects are determined using the front-facing camera of the HMD device.
- the user selects from among candidate data streams, such as a web page, game feed, video, or stock ticker. Subsequently, when the user is in the location of the object and looks at the object, the HMD device matches the visual characteristics against the record to identify the data stream, and then displays corresponding augmented reality images registered to the object.
- US20160253844 discloses a system that provides augmented and virtual reality experiences using topologies connecting disparate device types, shared environments, messaging systems, virtual object placements, etc. Some embodiments employ pose-search systems and methods that provide more granular pose determination than was previously possible.
- however, none of the prior art documents enables a user to view the preferred products of other users based on those users' preferences and physical attributes.
- providing an easy user interface for selecting the preferred products of the recommended users is therefore necessary.
- selection of a product category is also required, so that the preferred products of the user in the selected category can be highlighted.
- This summary is provided to introduce concepts related to providing a plurality of contents to an augmented reality device. This summary is neither intended to identify essential features of the present invention nor is it intended for use in determining or limiting the scope of the present invention.
- various embodiments herein provide one or more platforms and methods for providing a plurality of contents to an augmented reality device in a client-server arrangement.
- the method includes identifying location of a user associated with the augmented reality device.
- the method includes identifying at least one product from the identified location.
- the method includes determining users of interest information associated with the user, and at least one interest information of the users of interest information.
- the method includes storing details related to a plurality of products, pre-determined users of interest information, the determined users of interest information, and the determined interest information of the users of interest information in a database.
- the method includes mapping the at least one identified product with at least one stored information and generating preference information based on the mapped data.
- the method further includes transmitting the preference information based on the mapped data, and then highlighting the product from the preference information on a display area coupled with the augmented reality device.
- a computer implemented platform is configured to provide a plurality of content to an augmented reality device in a client-server arrangement.
- the platform includes a memory which is configured to store a set of pre-determined rules, and a processor which is configured to generate system processing commands.
- the platform includes a client module and a server.
- the client module includes a plurality of modules and a location identifier.
- the location identifier is configured to identify location of a user associated with the augmented reality device.
- the server includes a product identifier, a determination module, a database, a mapping module, a communication module, a tagging module, and a plurality of other modules.
- the product identifier is configured to identify at least one product from the identified location.
- the determination module is configured to determine users of interest information associated with the user, and at least one interest information of the users of interest information.
- the database is configured to store details related to a plurality of products, pre-determined users of interest information, the determined users of interest information, and the determined interest information of the users of interest information.
- the mapping module is configured to map the at least one identified product with at least one stored information and generate preference information based on the mapped data.
- the communication module is configured to transmit the generated preference information to the augmented reality device.
- the tagging module is configured to highlight the product from the preference information on a display area coupled with the augmented reality device.
- an augmented reality device may thus enable a user to view the preferred products of other users based on the preferences and the physical attributes of those users.
- Figure 1 illustrates a schematic diagram depicting a computer implemented platform for providing a plurality of contents to an augmented reality device in a client-server arrangement, according to an exemplary implementation of the present invention.
- Figure 2 illustrates a schematic diagram depicting a client module, according to an exemplary implementation of the present invention.
- Figure 3 illustrates a schematic diagram depicting a server, according to an exemplary implementation of the present invention.
- Figure 4 illustrates a schematic diagram depicting identifying shopping location of a user, according to an exemplary implementation of the present invention.
- Figure 5 illustrates a schematic diagram depicting determining users of interest information, according to an exemplary implementation of the present invention.
- Figure 6 illustrates a graphical view depicting a call pattern and frequency over a time period, according to an exemplary implementation of the present invention.
- Figure 7 illustrates a schematic diagram depicting degree of association of a user based on a pattern, according to an exemplary implementation of the present invention.
- Figure 8 illustrates a schematic diagram depicting overall ranking of associated users for identifying closeness, according to an exemplary implementation of the present invention.
- Figure 9 illustrates a schematic diagram depicting identification of preference information, according to an exemplary implementation of the present invention.
- Figure 10 illustrates a block diagram depicting a learning model, according to an exemplary implementation of the present invention.
- Figure 11 illustrates a schematic diagram depicting identification of user's contextual preferences, according to an exemplary implementation of the present invention.
- Figure 12a illustrates a schematic diagram depicting contextual preferences ontology, according to an exemplary implementation of the present invention.
- Figure 12b illustrates a schematic diagram depicting product ontology, according to an exemplary implementation of the present invention.
- Figure 12c illustrates a schematic diagram depicting user's contextual product preferences ontology, according to an exemplary implementation of the present invention.
- Figure 12d illustrates a schematic diagram depicting generation of user's contextual product preferences ontology, according to an exemplary implementation of the present invention.
- Figures 13a and 13b illustrate a sequence diagram depicting determining users of interest information, according to an exemplary implementation of the present invention.
- Figure 14 illustrates a sequence diagram depicting identifying preference information of a product, according to an exemplary implementation of the present invention.
- Figures 15a and 15b illustrate a sequence diagram depicting highlighting a product from the preference information, according to an exemplary implementation of the present invention.
- Figure 16 illustrates a flowchart depicting a method for providing a plurality of contents to an augmented reality device in a client-server arrangement, according to an exemplary implementation of the present invention.
- Figure 17 illustrates a flowchart depicting identifying preference information of at least one product, according to an exemplary implementation of the present invention.
- Figure 18 illustrates a flowchart depicting learning a style of an associated user from at least one crawled image, according to an exemplary implementation of the present invention.
- Figures 19a-19d illustrate use-case scenarios depicting augmented reality shopping assistance modes, according to an exemplary implementation of the present invention.
- the various embodiments of the present invention provide a computer implemented platform for providing a plurality of contents to an augmented reality device and method thereof.
- connections between components and/or modules within the figures are not intended to be limited to direct connections. Rather, these components and modules may be modified, re-formatted or otherwise changed by intermediary components and modules.
- references in the present invention to “one embodiment” or “an embodiment” mean that a particular feature, structure, characteristic, or function described in connection with the embodiment is included in at least one embodiment of the invention.
- the appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
- the present invention discloses a computer implemented method for providing a plurality of contents to an augmented reality device.
- the plurality of contents are augmented reality contents, i.e. users of interest information and their preferences for a product, shown within the field of view of a user while the user is at a pre-defined location and looking at products.
- the method includes identifying location of a user associated with the augmented reality device.
- the method includes identifying at least one product from the identified location.
- the method includes determining users of interest information associated with the user, and at least one interest information of the users of interest information.
- the method includes storing, in a database, details related to a plurality of products, pre-determined users of interest information, the determined users of interest information, and the determined interest information of the users of interest information. Furthermore, the method includes mapping the at least one identified product with at least one stored information and generating preference information based on the mapped data. Upon generation of the preference information, the method further includes transmitting the preference information based on the mapped data, and then highlighting the product from the preference information on a display area coupled with the augmented reality device.
- the step of identifying the location further includes capturing a real-time view of the user associated with the augmented reality device. Subsequently, the step of identifying the location includes recognizing text and image data from the captured view and then identifying the location of the user. Further, the step of identifying the location includes detecting the presence of the user in a pre-defined location, and wherein the pre-defined location is a geo-tagged location.
- the step of determining the users of interest information further includes a step of crawling, through the web, at least one of: data related to associated users' activities, images, social media feeds, call log details, SMS details, product browsing history, e-commerce purchase history, and product reviews, likes, or dislikes of one or more products.
- the step of determining the users of interest information includes analyzing the crawled data and identifying texts, the presence of other users in at least one image, and their associated activities.
- the users of interest information includes a plurality of information about the user and associated one or more users.
- the step of determining the users of interest information further includes analyzing the call log details, and/or the SMS details, and then determining the frequency of calls and/or SMS of the user with the associated users. In one embodiment, the step of determining the users of interest information further includes providing one or more determined users of interest information in the form of a timeline.
- the method includes a step of ranking users of interest information of associated users based on the crawled data.
- the method includes a step of identifying preference information of at least one product.
- the step of identifying preference information further includes extracting the users of interest information from the database and the crawled data related to the product.
- the step of identifying preference information includes categorizing a plurality of products based on the extracted users of interest information and the crawled data related to the product and selecting a preferred category of each of the products based on the extracted users of interest information and the crawled data related to the product.
- the step of identifying preference information further includes analyzing a context of the products in the form of time of the year, weather, special occasion, and other occasions, and then generating contextual category wise product preferences for the user.
- the contextual category wise product preferences are generated based on the category of the identified product.
- the step of crawling through web further includes extracting attributes of the identified product and comparing the extracted attributes with the crawled data.
- the method includes a step of identifying at least one object from the identified location.
- the object includes a plurality of products, a collection of products, and a similar or dissimilar set of products.
- the method includes a step of learning a style of the associated user from at least one crawled image.
- the step of learning a style further includes estimating an articulated pose by spatially quantizing body part regions of the associated users.
- the step of learning a style includes isolating at least one portion of the image as a binary mask by using a global clothing probability map.
- the step of learning a style includes segmenting probable body part regions into visually coherent regions and clustering the segmented regions into a single non-connected region. Further, the step of learning a style includes classifying clothing type for the clustered region.
- the method includes a step of receiving an input from the user associated with the augmented reality device.
- the step of receiving the input further includes manipulating the users of interest information and the at least one interest information of the users of interest information.
- the step of receiving the input further includes removing the users of interest information and the at least one interest information of the users of interest information.
- the step of receiving the input includes prioritizing the users of interest information and the at least one interest information of the users of interest information.
- the step of receiving the input includes reordering the users of interest information and the at least one interest information of the users of interest information.
- the step of receiving the input is in the form of touch, eye gaze, head movement, voice command, and/or gesture.
- the present invention discloses a computer implemented platform for providing a plurality of contents to an augmented reality device.
- the platform includes a memory, a processor, a client module, and a server module.
- the memory is configured to store a set of pre-determined rules.
- the processor is configured to generate system processing commands.
- the client module includes a location identifier.
- the location identifier is configured to identify location of a user associated with the augmented reality device.
- the server includes a product identifier, a determination module, a database, a mapping module, a communication module, and a tagging module.
- the product identifier is configured to identify at least one product from the identified location.
- the determination module is configured to determine users of interest information associated with the user, and at least one interest information of the users of interest information.
- the database is configured to store details related to a plurality of products, pre-determined users of interest information, the determined users of interest information, and the determined interest information of the users of interest information.
- the mapping module is configured to map the at least one identified product with at least one stored information and generate preference information based on the mapped data.
- the communication module is configured to transmit the generated preference information to the augmented reality device.
- the tagging module is configured to highlight the product from the preference information on a display area coupled with the augmented reality device.
- the location identifier further includes a capturing module and a recognition module.
- the capturing module is configured to capture a real-time view of the user associated with the augmented reality device.
- the recognition module is configured to recognize text and image data from the captured view and identify the location of the user.
- the location identifier further includes a proximity module configured to detect the presence of the user in a pre-defined location.
- the pre-defined location is a geo-tagged location.
- the determination module includes a web crawler.
- the web crawler is configured to crawl, through the web, at least one of: data related to other users' activities, images, postings on social media feeds, call log details, SMS details, product browsing history, e-commerce purchase history, and product reviews, likes, or dislikes of one or more products.
- the determination module further includes an image analyzer configured to analyze the crawled data and identify texts, and the presence of other users in at least one image, and their associated activities.
- the users of interest information include a plurality of information about the user and associated one or more users.
- the determination module further includes a data analysis module configured to analyze the call log details, and/or the SMS details, and further configured to determine frequency of calls and/or SMS of the user with the associated users.
- one or more determined users of interest information is provided in the form of a timeline.
- the server includes a ranking module configured to rank users of interest information based on the crawled data.
- the server includes a product preference generation module configured to identify preference information of at least one product.
- the product preference generation module further includes an extraction module, a categorization module, a selection module, a context extraction module, and a preference generation module.
- the extraction module is configured to extract the users of interest information from the database and the crawled data related to the product.
- the categorization module is configured to categorize a plurality of products based on the extracted users of interest information and the crawled data related to the product.
- the selection module is configured to select a preferred category of each of the products based on the extracted users of interest information and the crawled data related to the product.
- the context extraction module is configured to analyze a context in the form of time of the year, weather, special occasion, and other occasions.
- the preference generation module is configured to generate contextual category wise product preferences for the user. In an embodiment, the contextual category wise product preferences are generated based on the category of the identified product.
- the server includes a visual extraction module configured to extract attributes of the identified product and compare the extracted attributes with the crawled data.
- the attributes are visual attributes or physical attributes.
- the server includes an object identifier.
- the object identifier is configured to identify at least one object from the identified location received from the location identifier of the client module.
- the object includes a plurality of products, a collection of products, and a similar or dissimilar set of products.
- the server includes a learning module configured to learn a style of the associated user from at least one crawled image.
- the learning module includes an estimation module, an isolation module, a segmentation module, a clustering module, and a classifier.
- the estimation module is configured to estimate an articulated pose by spatially quantizing body part regions of the associated user.
- the isolation module is configured to isolate at least one portion of the image as a binary mask by using a global clothing probability map.
- the segmentation module is configured to segment probable body part regions into visually coherent regions.
- the clustering module is configured to cluster the segmented regions into a single non-connected region.
- the classifier is configured to classify clothing type for the clustered region.
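- a minimal sketch of this learning pipeline is given below, in Python. The helper functions (estimate_pose, clothing_probability_map, classify_clothing) are hypothetical stand-ins for the pose estimator, the global clothing probability map, and the clothing-type classifier, and k-means on pixel colors stands in for the segmentation and clustering stages; none of these names come from the patent.

```python
# A sketch of the style-learning pipeline under the assumptions above.
import numpy as np
from sklearn.cluster import KMeans

def estimate_pose(image):
    """Hypothetical: return quantized body-part regions as (x, y, w, h) boxes."""
    h, w = image.shape[:2]
    return [(0, 0, w, h // 2), (0, h // 2, w, h - h // 2)]  # upper/lower body

def clothing_probability_map(image):
    """Hypothetical: per-pixel probability that a pixel shows clothing."""
    return np.full(image.shape[:2], 0.6)

def classify_clothing(region_pixels):
    """Hypothetical clothing-type classifier stub."""
    return "t-shirt" if region_pixels.mean() > 127 else "jacket"

def learn_style(image, n_regions=4):
    # 1. Estimate an articulated pose (spatially quantized body-part regions).
    parts = estimate_pose(image)
    # 2. Isolate clothing pixels as a binary mask via the probability map,
    #    restricted to the estimated body-part boxes.
    mask = clothing_probability_map(image) > 0.5
    part_mask = np.zeros_like(mask)
    for (x, y, w, h) in parts:
        part_mask[y:y + h, x:x + w] = True
    mask &= part_mask
    # 3. Segment probable body-part pixels into visually coherent regions
    #    (k-means on RGB values as a stand-in).
    pixels = image[mask]
    labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(pixels)
    # 4. Cluster the segments into a single, possibly non-connected, region
    #    (here: the dominant color cluster).
    dominant = np.bincount(labels).argmax()
    # 5. Classify the clothing type for the clustered region.
    return classify_clothing(pixels[labels == dominant])

rng = np.random.default_rng(0)
photo = rng.integers(0, 256, size=(64, 48, 3)).astype(np.uint8)
print(learn_style(photo))
```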
- the client module includes a user input module configured to receive an input from the user associated with the augmented reality device.
- the user input module is further configured to manipulate the users of interest information and the at least one interest information of the users of interest information.
- the user input module is further configured to remove the users of interest information and the at least one interest information of the users of interest information.
- the user input module is configured to prioritize the users of interest information and the at least one interest information of said users of interest information.
- the user input module is configured to reorder the users of interest information and the at least one interest information of the users of interest information.
- the received input is in the form of touch, eye gaze, head movement, voice command, and gesture.
- FIG. 1 illustrates a schematic diagram depicting a computer implemented platform for providing a plurality of contents to an augmented reality device in a client-server arrangement (108) (hereinafter referred to as the "platform"), according to an exemplary implementation of the present invention.
- the platform (108) is communicatively coupled with an augmented reality device (102) via a network (106).
- the augmented reality device (102) includes a display area (104a), a positioning module (104b), a control unit (104c), an output unit (104d), and sensors (104e).
- the augmented reality device (102) can be a wearable glass.
- the display area (104a) is a display screen.
- the positioning module (104b) may include Global Positioning System (GPS).
- the control unit (104c) is configured to control the augmented reality device (102).
- the control unit (104c) is configured to cooperate with the display area (104a), the output unit (104d), and the sensors (104e), and further configured to control the display area (104a), the output unit (104d), and the sensors (104e).
- the control unit (104c) may receive data from the sensors (104e) or the positioning module (104b), analyze the received data, and output the contents through at least one of the display area (104a) or the output unit (104d).
- the output unit (104d) may be a speaker, a headphone or an earphone that can be worn on the ears of the user.
- the sensors (104e) are configured to sense a motion and actions of the user.
- the sensors (104e) include an acceleration sensor, a tilt sensor, a gyro sensor, a three-axis magnetic sensor, and a proximity sensor.
- the network (106) includes wired and wireless networks. Examples of the wired networks include a Wide Area Network (WAN), a Local Area Network (LAN), a client-server network, a peer-to-peer network, and so forth. Examples of the wireless networks include Wi-Fi, Global System for Mobile communications (GSM) networks, General Packet Radio Service (GPRS) networks, enhanced data GSM environment (EDGE) networks, 802.5 communication networks, Code Division Multiple Access (CDMA) networks, and Bluetooth networks.
- the platform (108) includes a memory (110), a processor (112), a client module (114), and a server (116).
- the memory (110) is configured to store pre-determined rules related to identification of location, data extraction, determination of information, mapping, recognition of texts and images, and ranking information.
- the memory (110) is also configured to store pre-defined locations.
- the memory (110) can include any computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.
- the memory (110) also includes a cache memory to work with the platform (108) more effectively.
- the processor (112) is configured to cooperate with the memory (110) to receive the pre-determined rules.
- the processor (112) is further configured to generate platform processing commands.
- the processor (112) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions.
- the at least one processor (112) is configured to fetch the pre-determined rules from the memory (110) and execute different modules of the platform (108).
- the client module (114) is configured to cooperate with the processor (112).
- Figure 2 illustrates a schematic diagram depicting a client module (114), according to an exemplary implementation of the present invention.
- the client module (114) includes a location identifier (202).
- the location identifier (202) is configured to identify location of a user associated with the augmented reality device (102).
- the location identifier (202) includes a capturing module (204), a recognition module (206), and a proximity module (208).
- the capturing module (204) is configured to capture a real-time view of the user associated with the augmented reality device (102).
- the capturing module (204) can be a camera or a scanner.
- the recognition module (206) is configured to cooperate with the capturing module (204) to receive the captured view.
- the recognition module (206) is further configured to recognize text and image data from the captured view and identify the location of the user.
- the recognition module (206) is configured to recognize scene from the captured view to identify object arrangements.
- the scene recognition includes an activity/ event recognition and a scene text recognition.
- the recognition module (206) is configured to extract shopping center names from the captured view using a scene text recognition (STR) technique.
- the recognition module (206) is configured to recognize the activity/ event by using deep learning based three-dimensional convolutional neural networks (3D-CNN) and Recurrent Neural Networks (RNN) models.
- the recognition module (206) is configured to recognize scene texts by using sequence modelling techniques, such as Bidirectional Long-Short Term Memory (LSTM).
- the recognition module (206) is configured to receive the captured view from the capturing module (204) and apply deep learning models on the captured view.
- proposal and classification sub-networks share three-dimensional feature maps.
- a proposal subnet predicts variable-length temporal segments that potentially contain activities, while a classification subnet classifies these proposals into specific activity categories or background, and further refines the proposal segment boundaries. It further extends the two-dimensional RoI (Region of Interest) pooling in Faster R-CNN to 3D RoI pooling, for extracting features at various resolutions for variable-length proposals.
- the recognition module (206) is configured to receive the captured view from the capturing module (204) and apply the sequence modeling technique.
- the recognition module (206) recognizes text data from the captured view and maps convolutional features in a convolutional layer.
- the convolutional layer extracts a feature sequence from each input image.
- the recognition module (206) generates feature sequence data based on the mapped convolutional features and analyzes the sequence data using the deep bidirectional LSTM technique. More particularly, the recognition module (206) is configured to make a prediction for each frame of the feature sequence data.
- a per-frame prediction technique is applied on the analyzed data to predict the sequence.
- the transcription layer translates the per-frame predictions by the recurrent layer into a label sequence.
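- a minimal sketch of such a recognizer (convolutional feature sequence, deep bidirectional LSTM, per-frame predictions) is shown below, assuming TensorFlow/Keras; the layer sizes and character-set size are illustrative, and the CTC-based transcription layer that would translate the per-frame predictions into a label sequence is noted but not implemented here.

```python
# A sketch of a CRNN-style scene-text recognizer under the assumptions above.
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 37  # assumed charset: 26 letters + 10 digits + CTC blank

def build_crnn(height=32, width=128):
    inputs = layers.Input(shape=(height, width, 1))
    # Convolutional layers extract a feature sequence from the input image.
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    # Treat each image column as one timestep of the feature sequence.
    x = layers.Permute((2, 1, 3))(x)
    x = layers.Reshape((width // 4, (height // 4) * 128))(x)
    # Deep bidirectional LSTM analyzes the sequence data.
    x = layers.Bidirectional(layers.LSTM(256, return_sequences=True))(x)
    x = layers.Bidirectional(layers.LSTM(256, return_sequences=True))(x)
    # Per-frame prediction over the character classes.
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_crnn()
model.summary()
```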
- the location identifier (202) includes a proximity module (208) configured to detect the presence of the user in a pre-defined location.
- the pre-defined location is a geo-tagged location.
- the pre-defined location is a shopping location.
- the proximity module (208) is configured to determine if the user is near to any shopping location tagged in a pre-defined map or a navigator.
- the pre-defined map can be a Google Map.
- Figure 4 illustrates a schematic diagram depicting identifying shopping location of a user (400), according to an exemplary implementation of the present invention.
- the location identifier (202) is configured to identify that the user is at a shopping location (402) by using identification techniques, i.e. user proximity with a geo-tagged shopping location (404), scene understanding of product arrangements (406), and scene text recognition for identifying shopping center names (408).
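- a minimal sketch of the geo-tag proximity check is shown below, assuming the pre-defined shopping locations are stored as (latitude, longitude) pairs; the location names, coordinates, and radius are illustrative.

```python
# A sketch of the geo-tag proximity check under the assumptions above.
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

GEO_TAGGED_SHOPS = {            # hypothetical geo-tagged shopping locations
    "Mall A": (37.4979, 127.0276),
    "Market B": (37.5665, 126.9780),
}

def near_shopping_location(user_lat, user_lon, radius_m=100):
    """Return the first tagged shop within radius_m of the user, if any."""
    for name, (lat, lon) in GEO_TAGGED_SHOPS.items():
        if haversine_m(user_lat, user_lon, lat, lon) <= radius_m:
            return name
    return None

print(near_shopping_location(37.4980, 127.0277))  # -> Mall A
```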
- the client module (114) further includes a user input module (210).
- the user input module (210) is configured to receive an input from the user associated with the augmented reality device (102).
- the user input module (210) is configured to receive a user input and perform corresponding actions associated with the user input.
- the received input is in the form of touch, eye gaze, head movement, voice command, and gesture.
- the client module (114) includes an assistance module (not shown in figure).
- the assistance module is configured to enable and disable a shopping assistance mode.
- when the location identifier (202) identifies that the user is in the shopping location, the assistance module enables the shopping assistance mode.
- the capturing module (204) captures the real-time view of the user.
- the assistance mode facilitates visualization of the augmented reality shopping assistance mode to the user in the augmented reality device (102).
- the capturing module (204) captures images in a regular interval.
- the user can provide touch input on the augmented reality device (102) to invoke the assistance mode.
- the user can provide gesture inputs which can be detected by the augmented reality device (102), to invoke the assistance mode.
- the client module (114) includes an alert generation module (not shown in figure).
- the alert generation module is configured to generate an alert signal, if the modules are not working correctly, and transmits the alert signal to a user device associated with the user.
- the server (116) is configured to cooperate with the processor (112) and the client module (114).
- Figure 3 illustrates a schematic diagram depicting a server, according to an exemplary implementation of the present invention.
- the server (116) includes a product identifier (302), a determination module (304), a database (312), a mapping module (314), a communication module (316), and a tagging module (318).
- the product identifier (302) is configured to identify at least one product from the identified location received from the location identifier (202) of the client module (114).
- the server (116) includes an object identifier (324).
- the object identifier (324) is configured to identify at least one object from the identified location received from the location identifier (202) of the client module (114).
- the object includes a plurality of products, a collection of products, and a similar or dissimilar set of products.
- the product identifier (302) is configured to cooperate with the object identifier (324) to receive the identified object.
- the product identifier (302) is further configured to identify at least one product from the identified object.
- the object can be background images, signs, texts, and the like.
- the determination module (304) is configured to determine users of interest information associated with the user, and at least one interest information of the users of interest information.
- the determination module (304) includes a web crawler (306), an image analyzer (308), and a data analysis module (310).
- the web crawler (306) is configured to crawl, through the web, at least one of: data related to other users' activities, images, postings on social media feeds, call log details, SMS details, product browsing history, e-commerce purchase history, and product reviews, likes, or dislikes of one or more products.
- FIG. 5 illustrates a schematic diagram depicting determining users of interest information (500), according to an exemplary implementation of the present invention.
- the users of interest information is determined by crawling the data related to socially connected associated users (502), close members based on closeness factor and a relation type (510), members having special occasion (520), and previous history (530).
- the web crawler (306) is configured to crawl the data related to socially connected users from social sites (504) such as Facebook, Twitter, Instagram, Pinterest, and the like, a contact list (506) of the user stored in a mobile phone, and event images (508) such as birthday events, travel, festive events, and the like.
- the web crawler (306) is configured to crawl the data related to close members from analyzing the frequency of social interaction (512), frequency of calls/ SMS interactions (514), types of interactions from SMS content (516), and images (518).
- the types of interactions (516) are determined based on whether the tone is formal or business-like, or informal.
- the images (518) include profile pictures of the associated users.
- the web crawler (306) is configured to crawl the data related to members having special occasions (520) from wishes (522), posts (524), and user calendar events (528).
- the wishes (522) can include birthday wishes, engagement wishes, anniversaries wishes, achievements wishes, and the like.
- the posts (524) can include travel plans, life goals, and the like.
- the web crawler (306) is configured to crawl the data related to the previous history (530) from previous occasions of gift purchase (532), and no gift given to family members for a long time (534). Based on the crawled data, a target user for a gift (536) is determined.
- the image analyzer (308) is configured to cooperate with the web crawler (306) to receive the crawled data.
- the image analyzer (308) is further configured to analyze the crawled data and identify texts, and the presence of other users in at least one image and their associated activities.
- the users of interest information includes a plurality of information about the user and associated one or more users.
- the determination module (304) further includes a data analysis module (310) configured to cooperate with the web crawler (306) to receive the crawled data.
- the data analysis module (310) is further configured to analyze the call log details and/or the SMS details, and determine frequency of calls and/or SMS of the user with the associated users.
- the data analysis module (310) is configured to analyze mobile data including call log data and user messages data. To differentiate the closeness of the user with the associated users, three factors are considered, i.e. frequency of call logs and messages, temporal patterns of calls and messages, and aspect of content of messages (business aspect, friendly aspect, and family aspect).
- Figure 6 illustrates a graphical view depicting a call pattern and frequency over a time period, according to an exemplary implementation of the present invention.
- the call pattern identifies the continuity of the user being in touch with the associated users over a time period, and the frequency shows how often the user calls or messages on average.
- the call pattern and frequency are analyzed based on determining regular pattern with high average frequency, regular pattern with low average frequency, and irregular pattern with the pre-defined time period (for example, Month 1, Month 2, Month 3, and Month 4).
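- a sketch of computing the average daily, weekly, and monthly call frequency per associated user from a call log is shown below; the DataFrame layout and sample data are illustrative assumptions, not taken from the patent.

```python
# A sketch of per-contact call frequency analysis under the assumptions above.
import pandas as pd

calls = pd.DataFrame({
    "contact": ["alice", "alice", "bob", "alice", "bob"],
    "timestamp": pd.to_datetime([
        "2019-01-01 09:00", "2019-01-02 18:30", "2019-01-05 11:00",
        "2019-01-08 20:15", "2019-02-10 10:00",
    ]),
})

def avg_frequency(df, window):
    """Average number of calls per window ('D', '7D', '30D') per contact."""
    counts = (df.set_index("timestamp")
                .groupby("contact")
                .resample(window)
                .size())                     # calls per window, gaps count as 0
    return counts.groupby("contact").mean()  # average over the windows

summary = pd.DataFrame({
    "daily": avg_frequency(calls, "D"),
    "weekly": avg_frequency(calls, "7D"),
    "monthly": avg_frequency(calls, "30D"),
})
print(summary)  # regular vs. irregular patterns show up as high/low averages
```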
- the data analysis module (310) is configured to determine a degree of association based on the determined frequency of calls and/or SMS of the user with the associated users. The degree of association is determined by the following formula:
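- a plausible form of this degree of association, assuming a simple weighted combination of the three frequencies consistent with the weightages described next, is:

$$\mathrm{DoA}_{freq} = w_{d} \cdot f_{d} + w_{w} \cdot f_{w} + w_{m} \cdot f_{m}$$

where f_d, f_w, and f_m are the average daily, weekly, and monthly call/SMS frequencies, and w_d, w_w, and w_m are the corresponding weightages.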
- weightages for the daily frequency, the weekly frequency, and the monthly frequency are pre-set by an expert, or are calculated using machine learning techniques.
- the machine learning techniques include linear regression, logistic regression, principal component analysis (PCA), and neural network techniques.
- the weightages are calculated using the linear regression technique by using the following formula:
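- a plausible reconstruction of the hypothesis, assuming the standard linear form over the three frequency features, is:

$$h_{\theta}(f_{d}, f_{w}, f_{m}) = \theta_{0} + \theta_{1} f_{d} + \theta_{2} f_{w} + \theta_{3} f_{m}$$

with the usual squared-error cost minimized during training:

$$J(\theta) = \frac{1}{2n} \sum_{i=1}^{n} \left( h_{\theta}(x^{(i)}) - y^{(i)} \right)^{2}$$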
- this hypothesis function is learned through the linear regression technique by minimizing the cost function (the difference between the actual, i.e. observed, value and the predicted value).
- training data is required containing daily, weekly, and monthly frequencies along with the observed value of the degree of association. This training data can be obtained from the past history of call log data.
- the pattern finds the occurrence of events periodically.
- the degree of periodicity is categorized into three categories, i.e. high regularity, medium regularity, and low regularity.
- the degree of association of the user based on the pattern of calls and/or SMS can be defined in terms of these regularity categories.
- the aspect of the message provides the emotional associations between the users.
- the aspect of messages is determined using text classification techniques.
- the text classification techniques include Convolutional Neural Networks (CNN) based binary text classification, bag of words, support vector machine (SVM), shallow neural networks, and the like. These techniques are also used to extract aspect of each of the messages.
- Figure 7 illustrates a schematic diagram depicting degree of association of a user based on a pattern, according to an exemplary implementation of the present invention.
- the degree of association of a user based on messages pattern is determined by using Convolutional Neural Networks (CNN) for natural language processing.
- Each sentence of the messages (702) is represented as an N x k matrix (based on a Word2Vec model), where each word is mapped to an embedding vector, and the matrix is taken as input. Convolutions are performed across the input word-wise using differently sized kernels, such as 2 or 3 words at a time, on a convolution layer (704). The resulting feature maps are then processed using a max pooling layer (706) and a fully connected layer with an output layer (Sigmoid) (708) to condense or summarize the extracted features.
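- a minimal sketch of this sentence-level CNN is shown below, assuming TensorFlow/Keras; N, k, and the vocabulary size are illustrative.

```python
# A sketch of the sentence CNN of Figure 7: an N x k embedding matrix
# (702), word-wise convolutions with kernel widths 2 and 3 (704), max
# pooling (706), and a fully connected layer with a sigmoid output (708).
import tensorflow as tf
from tensorflow.keras import layers

N, K, VOCAB = 50, 128, 20_000  # sentence length, embedding dim, vocab size

inputs = layers.Input(shape=(N,), dtype="int32")
x = layers.Embedding(VOCAB, K)(inputs)      # N x k input matrix (702)
branches = []
for width in (2, 3):                        # differently sized kernels
    c = layers.Conv1D(100, width, activation="relu")(x)  # convolution (704)
    branches.append(layers.GlobalMaxPooling1D()(c))      # max pooling (706)
x = layers.Concatenate()(branches)
x = layers.Dense(64, activation="relu")(x)  # fully connected layer
outputs = layers.Dense(1, activation="sigmoid")(x)       # output layer (708)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```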
- the database (312) is configured to store details related to a plurality of products, pre-determined users of interest information, the determined users of interest information, and the determined interest information of the users of interest information.
- the database (312) includes a look up table configured to store details related to each of the products, pre-determined users of interest information, the determined users of interest information, and the determined interest information of the users of interest information.
- the database (312) can be implemented as enterprise database, remote database, local database, and the like.
- the database (312) can be located within the vicinity of the server (116) or can be located at different geographic locations as compared to that of the server (116). Further, the database (312) may themselves be located either within the vicinity of each other or may be located at different geographic locations.
- the database (312) may be implemented inside the server (116) and the database (312) may be implemented as a single database.
- the mapping module (314) is configured to cooperate with the product identifier (302) and the database (312) to receive the at least one identified product and the stored details.
- the mapping module (314) is further configured to map the at least one identified product with at least one stored information and generate preference information based on the mapped data.
- the server (116) generates a notification based on the generated preference information for recommending the products/ preferred products.
- the notification is in the form of, but is not limited to, text-based notification, icon-based notification, pop-up based notification, notification on a secondary device, and notification as SMS.
- the communication module (316) is configured to cooperate with the mapping module to receive the generated preference information.
- the communication module (316) is further configured to transmit the generated preference information to the augmented reality device (102).
- the tagging module (318) is configured to highlight the product from the preference information on the display area (104a) coupled with the augmented reality device (102).
- the server (116) includes a ranking module (320).
- the ranking module (320) is configured to cooperate with the determination module (304) to receive the users of interest information and the crawled data, and is further configured to rank the users of interest information based on the crawled data.
- Figure 8 illustrates a schematic diagram depicting overall ranking of associated users for identifying closeness (802), according to an exemplary implementation of the present invention.
- the ranking module (320) is configured to rank the users of interest information by analyzing the crawled image data, mobile data (call log details and SMS details), and social media data.
- the users of interest information is ranked by determining degree of association based on the image data analysis (808) using relative participation frequency.
- the degree of association (808) is defined as:
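- a plausible form of this degree of association, assuming the relative participation frequency named above, is:

$$\mathrm{DoA}_{image}(a) = \frac{\text{number of event photos containing associated user } a}{\text{total number of event photos}}$$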
- the ranking module (320) determines which associated users are more frequently present in personal photos saved in a user device or social media photos.
- the associated users present in photos corresponding to a specific event or an activity are considered for estimation of the degree of association.
- the specific events include, but are not limited to, birthday celebration, anniversary celebration, dining out, vacation trips, and festivals.
- frequency is estimated based on the presence of the associated user(s) in different events.
- the image data includes face recognition of the associated users, identification of each of the associated users, and scene understanding to identify events.
- each of the associated users are identified using a deep learning-based framework.
- the deep learning-based framework includes a single shot detection (SSD) technique.
- face recognition is performed using a deep convolutional network technique, i.e. VGG-16.
- the users of interest information is ranked by determining the degree of association based on mobile data analysis (804).
- the degree of association (804) is defined as:
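- a plausible form, assuming a weighted combination of the three mobile-data factors described earlier (frequency, temporal pattern, and message aspect), is:

$$\mathrm{DoA}_{mobile}(a) = \alpha \cdot \mathrm{DoA}_{freq}(a) + \beta \cdot \mathrm{DoA}_{pattern}(a) + \gamma \cdot \mathrm{DoA}_{aspect}(a)$$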
- the users of interest information is ranked by determining degree of association based on the social data analysis (806).
- the degree of association is determined by analyzing user's interactions which are directed to each of the associated users.
- the interactions are analyzed by identifying actions such as liking content posted by an associated user, tagging the associated user in a photo, and posting comments on a social wall or commenting on a post of that user.
- the aggregate frequency of interactions is proportionate to social closeness with each of the associated users.
- the users of interest information is ranked by determining relative interaction frequency using the following formula:
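- a plausible form of the relative interaction frequency is:

$$\mathrm{RIF}(a) = \frac{\text{number of interactions directed to associated user } a}{\text{total number of the user's interactions}}$$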
- the overall ranking of the associated users for identifying closeness (802) is determined by using the following formula:
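- a plausible form of the overall ranking, assuming a weighted aggregation of the three per-channel degrees of association, is:

$$\mathrm{Rank}(a) \propto w_{1} \, \mathrm{DoA}_{mobile}(a) + w_{2} \, \mathrm{DoA}_{social}(a) + w_{3} \, \mathrm{DoA}_{image}(a)$$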
- the server (116) further includes a product preference identification module (326) configured to cooperate with the determination module (304) and the database (312).
- the product preference identification module (326) is configured to identify preference information of at least one product.
- the product preference identification module (326) includes an extraction module (328), a categorization module (330), a selection module (332), a context extraction module (334), and a preference generation module (336).
- the extraction module (328) is configured to extract the users of interest information from the database (312) and the crawled data related to the product.
- the categorization module (330) is configured to cooperate with the extraction module (328).
- the categorization module (330) is further configured to categorize a plurality of products based on the extracted users of interest information and the crawled data related to the product.
- the selection module (332) is configured to cooperate with the categorization module (330) and the extraction module (328).
- the selection module (332) is configured to select a preferred category of each of the products based on the extracted users of interest information and the crawled data related to the product.
- the context extraction module (334) is configured to cooperate with the extraction module (328).
- the context extraction module (334) is further configured to analyze a context in the form of time of the year, weather, special occasion, and other occasions.
- the preference generation module (336) is configured to cooperate with the categorization module (330), the context extraction module (334), and the selection module (332).
- the preference generation module (336) is configured to generate contextual category wise product preferences for the user.
- Figure 9 illustrates a schematic diagram depicting identification of preference information (900), according to an exemplary implementation of the present invention.
- the product preference generation module (326) is configured to identify product preferences by extracting the user's universal preferences (902) based on the crawled data, including analysis of the user's publicly available images (904), purchase and browsing history (906) on e-commerce sites, and product reviews (908) given by the user, for identifying the preference information.
- the product preference generation module (326) is further configured to extract the user's preferences based on the current context (910).
- the context includes two contextual features, i.e. time of the year (912), and festive occasion (914).
- time of the year (912) is extracted from the time stamps of the purchase history of the product.
- the festive occasion (914) is extracted by using third party services, which provide nearby festive seasons, or by using a festive calendar.
- the product preference identification module (326) is configured to extract product availability (916), including product types available in the shop (918) and previously purchased items in a similar context (920). Based on the user's preferences, the context, and the product availability, a target product or gift (922) is identified.
- Figure 10 illustrates a block diagram depicting a learning model, according to an exemplary implementation of the present invention.
- the learning model (1000) is configured to extract and select a preferred category of each of the products.
- the learning model (1000) includes training data (1002), a text pre-processing module (1004), a feature extraction module (1012), an SVM classifier (1018), and a trained model (1020).
- the training data (1002) is open training text data, which is crawled from the web.
- the text pre-processing module (1004) is configured to receive the training data, apply pre-processing techniques to it, and generate a feature vector that separates the text into individual words.
- the techniques include stop word removal (1006), stemming (1008), and lemmatization (1010) techniques, as sketched below.
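As a minimal illustration of these pre-processing steps, the sketch below uses NLTK as an assumed library choice (the patent names none); applying stemming and lemmatization in sequence simply mirrors the figure, whereas in practice one of the two is usually chosen.

```python
# A minimal pre-processing sketch, assuming NLTK and its stopwords/WordNet data.
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download("stopwords", quiet=True)
nltk.download("wordnet", quiet=True)

def preprocess(text: str) -> list[str]:
    stop = set(stopwords.words("english"))
    stemmer, lemmatizer = PorterStemmer(), WordNetLemmatizer()
    tokens = [t for t in text.lower().split() if t.isalpha()]   # separate into words
    tokens = [t for t in tokens if t not in stop]               # stop word removal (1006)
    tokens = [stemmer.stem(t) for t in tokens]                  # stemming (1008)
    return [lemmatizer.lemmatize(t) for t in tokens]            # lemmatization (1010)

print(preprocess("He was buying leather wallets and denim jackets"))
```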
- the feature extraction module (1012) is configured to cooperate with the text pre-processing module (1004) to receive the feature vector, and is further configured to extract features by using term frequency and inverse document frequency (TF X IDF) (1014), and calculate a TF X IDF score.
- TF X IDF: term frequency multiplied by inverse document frequency.
- the term frequency (TF) and the inverse document frequency (IDF) are, in their standard form, defined as TF(t, d) = (number of occurrences of term t in document d) / (total number of terms in d), and IDF(t) = log(N / n_t), where N is the total number of documents and n_t is the number of documents containing term t.
- the feature extraction module (1012) is configured to select words above a threshold based on the TF X IDF score (1016).
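A brief sketch of the TF X IDF scoring and threshold selection (1014, 1016), using scikit-learn as an assumed implementation; the corpus and the threshold value are invented for illustration.

```python
# TF x IDF feature extraction with an assumed threshold; toy corpus only.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["leather wallet gift", "denim trousers casual", "leather belt formal"]
vectorizer = TfidfVectorizer()
scores = vectorizer.fit_transform(docs)          # TF x IDF matrix (1014)

threshold = 0.4                                  # assumed cut-off (1016)
terms = vectorizer.get_feature_names_out()
selected = {terms[col]
            for row, col in zip(*scores.nonzero())
            if scores[row, col] > threshold}     # words kept as features
```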
- the SVM classifier (1018) is configured to determine an offline preference category. Further, the SVM classifier (1018) is configured to predict classes for a dataset consisting of a feature set and a label set.
- the learning model (1000) is not limited to the SVM classifier (1018), but also includes Convolutional Neural Network (CNN) based classifier, Bagging models, Naive Bayes classifier, etc.
- determining the offline preference category is a one-time process. After determining the preference category, the trained model (1020) is generated, which is used to extract a user's preference category from the user's purchase history, browsing query history, and product reviews given by the user on review sites; a hedged training sketch follows.
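A hedged sketch of this one-time training step and a later prediction, again with scikit-learn standing in for an implementation; the texts and category labels below are toy stand-ins, not the patent's training data (1002).

```python
# Train the SVM classifier (1018) on TF x IDF features, then reuse the
# trained model (1020) to extract a preference category from new text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["bought denim jeans", "new smartwatch review", "cotton shirt purchase"]
labels = ["apparel", "gadgets", "apparel"]       # offline preference categories

trained_model = make_pipeline(TfidfVectorizer(), LinearSVC())
trained_model.fit(texts, labels)                 # one-time offline process

print(trained_model.predict(["ordered a denim jacket"]))   # -> ['apparel']
```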
- Figure 11 illustrates a schematic diagram depicting identification of user's contextual preferences (1100), according to an exemplary implementation of the present invention.
- the user's contextual preferences are identified by using the crawled data including, but not limited to, purchase history text (1102), product reviews (1104), and browsing history or queries (1106).
- the text pre-processing module (1108) is configured to receive the crawled data, apply pre-processing techniques to it, and generate a feature vector that separates the text into individual words.
- the techniques include stop word removal (1110), stemming (1112), and lemmatization techniques (1114).
- the feature extraction module (1116) is configured to cooperate with the text pre-processing module (1108) to receive the feature vector, and is further configured to extract features by using term frequency and inverse document frequency (TF X IDF) (1118) and calculate a TF X IDF score. Subsequently, the feature extraction module (1116) is configured to select words above a threshold based on the TF X IDF score (1120).
- the classification model (1122) is configured to receive the extracted features from the feature extraction module (1116) and user's contextual preference (1124), and is further configured to classify the user's preferences and create a user profile, and then correlate the user profile with the extracted context received from the context extraction module (334) to provide enhanced user's contextual preference (1126).
- the user's preferences are classified based on the categories which a preference belongs to, for example clothes, watches, accessories, bags, shoes, and the like.
- the user's contextual preferences are stored in the form of user preference ontology.
- Figures 12a, 12b, and 12c illustrate schematic diagrams depicting a contextual preferences ontology, a product ontology, and a user's contextual product preferences ontology, respectively, according to an exemplary implementation of the present invention.
- in the contextual preferences ontology (1200a) as illustrated in Figure 12a, a context in the form of time of the year and festive occasion is stored.
- time of the year includes summer season, rainy season, or winter season.
- the festive occasion includes New Year, Christmas, Valentine Day, Halloween, and the like.
- in the product ontology (1200b) as illustrated in Figure 12b, a category wise gift product is stored.
- the gift product can be apparel, educational, apparel accessories, and gadgets.
- the apparel can be trousers or shirts.
- the trousers are further classified in terms of an attribute and a brand.
- the attribute can be texture such as denim, cotton, and the like, and the brand can be Arrow, Levis, and the like.
- the apparel accessory is a scarf or a hat.
- the scarf has attributes such as color, and texture.
- the user's contextual product preferences ontology (1200c) stores details of a user including name, preferences, time of the year, probability score, product attributes, festive occasion, and the like.
- the user's contextual product preferences ontology (1200c) is generated by combining the contextual preference ontology (1200a), and the product ontology (1200b).
- the probability score is calculated by using Bayesian inference, as described below.
- the mapping of the user's contextual product preferences (1200c) with the context stored in the contextual preferences ontology (1200a) and the product ontology (1200b) is based on probabilistic reasoning over the user's product preference ontology. Further, an inference about the user's liking of at least one product in a given context is estimated by using a Bayesian inferencing technique. If the probability of the user liking a specific product is estimated to be above a pre-configured threshold, the product can be suggested as a gift item for the target user. In one embodiment, the posterior probability of liking the product is conditioned on the context, following the standard Bayes' probability approach to estimate the probability.
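Since the probability-score formula itself is rendered as an image in the source and is not reproduced here, the following sketches only the standard Bayes' rule computation the paragraph describes; all numbers and the threshold are invented for illustration.

```python
# P(like | context) = P(context | like) * P(like) / P(context) -- standard
# Bayes' rule; the inputs below are illustrative, not values from the patent.

def posterior_liking(p_context_given_like: float,
                     p_like: float,
                     p_context: float) -> float:
    return p_context_given_like * p_like / p_context

THRESHOLD = 0.6  # assumed pre-configured threshold

# e.g. P(winter | likes scarves) = 0.7, P(likes scarves) = 0.4, P(winter) = 0.35
score = posterior_liking(0.7, 0.4, 0.35)         # = 0.8
if score > THRESHOLD:
    print("suggest the scarf as a gift item for the target user")
```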
- the server (116) includes a visual extraction module (322).
- the visual extraction module (322) is configured to extract attributes of the identified product and compare the extracted attributes with the crawled data.
- the attributes are visual attributes or physical attributes.
- the visual attributes of the product are extracted by capturing color and texture characteristics for multiple segments of the plurality of products.
- RGB (red, green, and blue) values of each pixel can be quantized to form a uniform color dictionary, for determining color characteristics.
- texture characteristics can be captured by using local binary patterns (LBPs).
- the visual extraction module (322) is configured to normalize the color and texture characteristics independently and then concatenate them to provide a final descriptor, as sketched below.
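A rough sketch of such a descriptor, assuming scikit-image for the LBP computation; the quantization levels, bin counts, and LBP parameters are illustrative choices, not values from the patent.

```python
# Quantized RGB colour histogram + LBP texture histogram, each normalized
# independently and then concatenated into the final descriptor.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern

def visual_descriptor(segment: np.ndarray) -> np.ndarray:
    """segment: H x W x 3 uint8 RGB array for one product segment."""
    # Colour: quantize each channel to 4 levels -> 64-bin joint histogram.
    q = (segment // 64).reshape(-1, 3).astype(int)
    color_hist = np.bincount(q[:, 0] * 16 + q[:, 1] * 4 + q[:, 2],
                             minlength=64).astype(float)
    color_hist /= max(color_hist.sum(), 1.0)

    # Texture: uniform LBP histogram over the grayscale segment.
    lbp = local_binary_pattern(rgb2gray(segment), P=8, R=1.0, method="uniform")
    tex_hist, _ = np.histogram(lbp, bins=10, range=(0, 10))
    tex_hist = tex_hist.astype(float)
    tex_hist /= max(tex_hist.sum(), 1.0)

    return np.concatenate([color_hist, tex_hist])
```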
- a similar feature vector can be prepared by identifying apparel and apparel accessories from the user's styles extracted from publicly available photos.
- a similarity measure such as cosine similarity or Jaccard similarity can be used to match the product with the user's preferred styles.
- in its standard form, cosine similarity is defined as similarity(A, B) = Σ_i (A_i × B_i) / (√(Σ_i A_i²) × √(Σ_i B_i²)), where A_i and B_i are components of vectors A and B respectively.
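A corresponding matching sketch using NumPy only; the descriptors and the match threshold are invented for illustration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# product_desc / style_desc would come from visual_descriptor() above.
product_desc = np.array([0.2, 0.5, 0.3])
style_desc = np.array([0.1, 0.6, 0.3])
matches_preference = cosine_similarity(product_desc, style_desc) > 0.9  # assumed threshold
```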
- the server (116) includes a learning module (338).
- the learning module (338) is configured to learn a style of the associated user from the at least one crawled image.
- the learning module (338) includes an estimation module (340), an isolation unit (342), a segmentation module (344), a clustering module (346), and a classifier (348).
- the estimation module (340) is configured to estimate an articulated pose by spatially quantizing body part regions of the associated user.
- the isolation unit (342) is configured to isolate at least one portion of the image as a binary mask by using a global clothing probability map.
- the segmentation module (344) is configured to segment probable body part regions into visually coherent regions.
- the clustering module (346) is configured to cluster the segmented regions into a single non-connected region by using an Approximate Gaussian Mixture (AGM) clustering technique.
- the classifier (348) is configured to classify the clothing type for the clustered region by using a Nearest Neighbor or support vector machine (SVM) technique; a hedged sketch of the clustering and classification steps follows.
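A hedged sketch of the clustering and classification steps: scikit-learn's GaussianMixture is used here as a stand-in for the Approximate Gaussian Mixture (AGM) technique, which the patent does not further specify, and all feature data are random placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KNeighborsClassifier

region_features = np.random.rand(200, 16)        # descriptors of segmented regions (344)
gmm = GaussianMixture(n_components=5, random_state=0).fit(region_features)
cluster_ids = gmm.predict(region_features)       # group regions into clusters (346)

# Classify each cluster's mean descriptor into a clothing type (348).
train_X = np.random.rand(50, 16)                 # labelled style descriptors
train_y = np.random.choice(["shirt", "trousers", "scarf"], size=50)
clf = KNeighborsClassifier(n_neighbors=3).fit(train_X, train_y)

present = np.unique(cluster_ids)
cluster_means = np.array([region_features[cluster_ids == k].mean(axis=0)
                          for k in present])
clothing_types = clf.predict(cluster_means)
```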
- the user input module (210) is configured to manipulate the users of interest information and the at least one interest information of the users of interest information. In one embodiment, the user input module (210) is configured to remove the users of interest information and the at least one interest information of the users of interest information. In another embodiment, the user input module (210) is configured to prioritize the users of interest information and the at least one interest information of the users of interest information. In one embodiment, the user input module (210) is configured to reorder the users of interest information and the at least one interest information of the users of interest information.
- the user(s) can manually select the users of interest information for the product preferences in their vision. In one embodiment, the user can manually select the categories for the product preferences with the users of interest information.
- the display area (104) of the augmented reality device (102) displays the users of interest information at a specific position, i.e. the upper/bottom side or the left/right side; the users of interest information along with the special occasion; the preference information along with the image of the associated user; and product categories for filtering of the products.
- Figure 19a illustrates a use-case scenario depicting an augmented reality shopping assistance mode, according to an exemplary implementation of the present invention.
- the platform (108) provides augmented reality content on the display area (104) of the augmented reality device (102), i.e. the users of interest information along with special occasions, and highlights the product preferences information along with the user's information.
- Figure 19b illustrates a use-case scenario depicting an augmented reality shopping assistance mode, according to an exemplary implementation of the present invention.
- the user is provided with a timeline on which the special occasions of the users of interest information are mentioned. The user can easily identify which occasion is near, and the corresponding preferences, for easy selection of gifts.
- Figure 19c illustrates a use-case scenario depicting an augmented reality shopping assistance mode, according to an exemplary implementation of the present invention.
- the preferred product along with the physical attributes of the associated user is provided.
- Figure 19d illustrates a use-case scenario depicting an augmented reality shopping assistance mode, according to an exemplary implementation of the present invention.
- the product preferences are highlighted along with the weightage (priority, etc.) for easy selection of a gift among the highlighted products.
- Figures 13a and 13b illustrate a sequence diagram depicting determination of users of interest information, according to an exemplary implementation of the present invention.
- the sequence diagram of Figures 13a and 13b is described in conjunction with Figure 16 in the paragraphs below.
- Figure 14 illustrates a sequence diagram depicting identifying preference information of a product, according to an exemplary implementation of the present invention. The sequence diagram of Figure 14 is described in conjunction with Figure 16 in the paragraphs below.
- Figures 15a and 15b illustrate a sequence diagram depicting highlighting a product from the preference information, according to an exemplary implementation of the present invention.
- the sequence diagram of Figures 15a and 15b is described in conjunction with Figure 16 in the paragraphs below.
- Figure 16 illustrates a flowchart (1600) depicting a method for providing a plurality of contents to an augmented reality device (102) in a client-server arrangement, according to an exemplary implementation of the present invention.
- the sequence diagrams of Figures 13, 14, and 15 are explained with the help of the flowchart (1600) of Figure 16.
- the flowchart (1600) of Figure 16 is explained below with reference to Figures 1, 2, and 3 as described above.
- the flowchart (1600) starts at a step (1602), identifying location of a user associated with an augmented reality device (102).
- a location identifier (202) identifies a location of a user associated with the augmented reality device (102).
- the location is identified by capturing a real-time view of the user associated with the augmented reality device (102), by recognizing text and image data from the captured view, and by detecting the presence of the user in a pre-defined location.
- the pre-defined location is a geo-tagged location.
- at a step (1604), identifying at least one product from the identified location. In another embodiment, a product identifier (302) identifies at least one product from the identified location.
- at a step (1606), determining users of interest information associated with the user, and at least one interest information of the users of interest information. In one embodiment, a determination module (304) determines the users of interest information associated with the user, and the at least one interest information of the users of interest information.
- at a step (1608), storing details related to a plurality of products, pre-determined users of interest information, the determined users of interest information, and the determined interest information of the users of interest information.
- a database (312) stores details related to a plurality of products, pre-determined users of interest information, the determined users of interest information, and the determined interest information of the users of interest information.
- at a step (1610), mapping the at least one identified product with at least one stored information, and generating preference information based on the mapped data.
- a mapping module (314) maps the at least one identified product with at least one stored information, and generates preference information based on the mapped data.
- at a step (1612), transmitting the preference information to the augmented reality device (102). In one embodiment, a communication module (316) transmits the preference information to the augmented reality device (102).
- at a step (1614), highlighting the product from the preference information on a display area (104) coupled with the augmented reality device (102). In one embodiment, a tagging module (318) highlights the product from the preference information on the display area (104).
- Figure 17 illustrates a flowchart depicting identifying preference information of at least one product (1700), according to an exemplary implementation of the present invention.
- the flowchart (1700) of Figure 17 is explained below with reference to Figure 3 as described above.
- the flowchart (1700) starts at a step (1702): crawling, through the web, data related to the associated user's activities, images, social media feeds, call log details, SMS details, product browsing history, e-commerce purchase history, product reviews, and likes and dislikes of one or more products.
- a web crawler (306) crawls, through the web, the data related to the associated user's activities, images, social media feeds, call log details, SMS details, product browsing history, e-commerce purchase history, product reviews, and likes and dislikes of one or more products.
- an extraction module (328) extracts the users of interest information from the database (312) and the crawled data related to the product.
- a categorization module (330) categorizes a plurality of products based on the extracted users of interest information and the crawled data related to the product.
- a selection module (332) selects a preferred category of the products based on the extracted users of interest information and the crawled data related to the product.
- a context extraction module (334) analyzes a context of the products in the form of time of the year, weather, special occasion, and other occasions.
- a preference generation module (336) generates contextual category wise product preferences for the user.
- Figure 18 illustrates a flowchart depicting learning a style of an associated user from at least one crawled image (1800), according to an exemplary implementation of the present invention.
- the flowchart (1800) of Figure 18 is explained below with reference to Figure 3 as described above.
- the flowchart (1800) starts at a step (1802), estimating an articulated pose by spatially quantizing body part regions of the associated user.
- an estimation module (340) of a learning module (338) estimates an articulated pose by spatially quantizing body part regions of the associated user.
- at a step (1804), an isolation unit (342) isolates at least one portion of the image as a binary mask by using a global clothing probability map.
- at a step (1806), segmenting probable body part regions into visually coherent regions.
- a segmentation module (344) segments probable body part regions into visually coherent regions.
- at a step (1808), a clustering module (346) clusters the segmented regions into a single non-connected region.
- at a step (1810), a classifier (348) classifies the clothing type for the clustered region.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Databases & Information Systems (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Business, Economics & Management (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computing Systems (AREA)
- Computer Graphics (AREA)
- Marketing (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- Software Systems (AREA)
- Medical Informatics (AREA)
- General Health & Medical Sciences (AREA)
- Finance (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Accounting & Taxation (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- General Business, Economics & Management (AREA)
- Development Economics (AREA)
- Economics (AREA)
- Strategic Management (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present invention discloses a platform (108) and method for providing contents to an augmented reality device (102). The present invention enables a user to view preferred products based on the preferences and physical attributes of associated users. The platform (108) identifies the location of a user associated with the augmented reality device (102). At least one product from the identified location is identified. The users of interest information associated with the user, and at least one interest information of the users of interest information, are determined. Further, the platform (108) generates preference information of the associated users. The identified product from the preference information is highlighted on a display area (104) coupled with the augmented reality device (102).
Description
The present invention relates generally to image processing, and, particularly but not exclusively, to a platform and method for providing contents to an augmented reality device.
Augmented reality (AR) is the integration of digital information with the user's environment, which uses the capabilities of computer-generated display, audio, and text to enhance a user's real-world experience. Particularly, augmented reality (AR) adds spatially aligned virtual objects, such as three-dimensional models, two-dimensional textures, and the like, to the user's environment. The retail sector has constantly been evolving to adapt to changing buying patterns, i.e. moving from the age of tele-shopping commercials to the rise of online shopping portals. The retail sector has adopted the augmented reality (AR) shopping experience trend, which allows customers to view colors, sizes, and dimensions of various personal and lifestyle products, such as home furnishings, clothes, etc.
In today's shopping experience, one of the major problems is that if a user wants to buy a product for another user, it becomes very difficult to choose the product based on the preferences and physical attributes of the other user. For example, if a user wants to buy a gift for his friend, he goes to a store and finds it difficult to choose the gift, as he does not know the preferences and the physical attributes of the friend. Conventionally, augmented reality (AR) mirrors are used to virtually try on wearable clothes. However, users can only see themselves wearing virtual products. The mirrors cannot identify the preferences and the physical attributes of the other user.
US20140225924 discloses a system for determining trigger items in augmented reality environments. The system generates an augmented reality scenario associated with a trigger item, which is detected from one or more frames. In this system, an augmented reality content item is generated by another user using another computing device.
US20130050258 discloses a see-through head-mounted display (HMD) device that provides an augmented reality image which is associated with a real-world object, such as a picture frame, wall, or billboard. The location and visual characteristics of the objects are determined by the front-facing camera of the HMD device. The user selects from among candidate data streams, such as a web page, game feed, video, or stock ticker. Subsequently, when the user is in the location of the object and looks at the object, the HMD device matches the visual characteristics to the record for identifying the data stream, and then displays corresponding augmented reality images registered to the object.
US20160253844 discloses a system that provides augmented and virtual reality experience using topologies connecting disparate device types, shared-environments, messaging systems, virtual object placements, etc. Some embodiments employ pose-search systems and methods that provide more granular pose determination than were previously possible.
However, none of the prior art documents enables a user to view preferred products of another user based on the preferences and the physical attributes of that user. In addition, an easy user interface for the selection of the preferred product among the recommended users is necessary. The selection of a product category is also required for highlighting the preferred products of the user in the selected category.
Therefore, there is a need for a platform and method that overcomes the aforementioned drawbacks and provides augmented reality contents, i.e. users of interest and their preferences.
This summary is provided to introduce concepts related to providing a plurality of contents to an augmented reality device. This summary is neither intended to identify essential features of the present invention nor is it intended for use in determining or limiting the scope of the present invention.
For example, various embodiments herein may include one or more platforms and methods for providing a plurality of contents to an augmented reality device in a client-server arrangement. In one embodiment, the method includes identifying the location of a user associated with the augmented reality device. The method includes identifying at least one product from the identified location. Further, the method includes determining users of interest information associated with the user, and at least one interest information of the users of interest information. Subsequently, the method includes storing details related to a plurality of products, pre-determined users of interest information, the determined users of interest information, and the determined interest information of the users of interest information in a database. Furthermore, the method includes mapping the at least one identified product with at least one stored information and generating preference information based on the mapped data. Upon generation of the preference information, the method further includes transmitting the preference information based on the mapped data, and then highlighting the product from the preference information on a display area coupled with the augmented reality device.
In another embodiment, a computer implemented platform is configured to provide a plurality of content to an augmented reality device in a client-server arrangement. The platform includes a memory which is configured to store a set of pre-determined rules, and a processor which is configured to generate system processing commands. Further, the platform includes a client module and a server. The client module includes a plurality of modules and a location identifier. The location identifier is configured to identify location of a user associated with the augmented reality device. The server includes a product identifier, a determination module, a database, a mapping module, a communication module, a tagging module, and a plurality of other modules. The product identifier is configured to identify at least one product from the identified location. The determination module is configured to determine users of interest information associated with the user, and at least one interest information of the users of interest information. The database is configured to store details related to a plurality of products, pre-determined users of interest information, the determined users of interest information, and the determined interest information of the users of interest information. The mapping module is configured to map the at least one identified product with at least one stored information and generate preference information based on the mapped data. The communication module is configured to transmit the generated preference information to the augmented reality device. The tagging module is configured to highlight the product from the preference information on a display area coupled with the augmented reality device.
According to various embodiments disclosed herein, an augmented reality device may enable a user to view preferred products of another user based on the preferences and the physical attributes of that user.
The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to reference like features and modules.
Figure 1 illustrates a schematic diagram depicting a computer implemented platform for providing a plurality of contents to an augmented reality device in a client-server arrangement, according to an exemplary implementation of the present invention.
Figure 2 illustrates a schematic diagram depicting a client module, according to an exemplary implementation of the present invention.
Figure 3 illustrates a schematic diagram depicting a server, according to an exemplary implementation of the present invention.
Figure 4 illustrates a schematic diagram depicting identifying shopping location of a user, according to an exemplary implementation of the present invention.
Figure 5 illustrates a schematic diagram depicting determining users of interest information, according to an exemplary implementation of the present invention.
Figure 6 illustrates a graphical view depicting a call pattern and frequency over a time period, according to an exemplary implementation of the present invention.
Figure 7 illustrates a schematic diagram depicting degree of association of a user based on a pattern, according to an exemplary implementation of the present invention.
Figure 8 illustrates a schematic diagram depicting overall ranking of associated users for identifying closeness, according to an exemplary implementation of the present invention.
Figure 9 illustrates a schematic diagram depicting identification of preference information, according to an exemplary implementation of the present invention.
Figure 10 illustrates a block diagram depicting a learning model, according to an exemplary implementation of the present invention.
Figure 11 illustrates a schematic diagram depicting identification of user's contextual preferences, according to an exemplary implementation of the present invention.
Figure 12a illustrates a schematic diagram depicting contextual preferences ontology, according to an exemplary implementation of the present invention.
Figure 12b illustrates a schematic diagram depicting product ontology, according to an exemplary implementation of the present invention.
Figure 12c illustrates a schematic diagram depicting user's contextual product preferences ontology, according to an exemplary implementation of the present invention.
Figure 12d illustrates a schematic diagram depicting generation of user's contextual product preferences ontology, according to an exemplary implementation of the present invention.
Figures 13a and 13b illustrate a sequence diagram depicting determination of users of interest information, according to an exemplary implementation of the present invention.
Figure 14 illustrates a sequence diagram depicting identifying preference information of a product, according to an exemplary implementation of the present invention.
Figures 15a and 15b illustrate a sequence diagram depicting highlighting a product from the preference information, according to an exemplary implementation of the present invention.
Figure 16 illustrates a flowchart depicting a method for providing a plurality of contents to an augmented reality device in a client-server arrangement, according to an exemplary implementation of the present invention.
Figure 17 illustrates a flowchart depicting identifying preference information of at least one product, according to an exemplary implementation of the present invention.
Figure 18 illustrates a flowchart depicting learning a style of an associated user from at least one crawled image, according to an exemplary implementation of the present invention.
Figures 19a-19d illustrate use-case scenarios depicting augmented reality shopping assistance modes, according to an exemplary implementation of the present invention.
It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the present invention. Similarly, it will be appreciated that any flowcharts, flow diagrams, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
In the following description, for the purpose of explanation, specific details are set forth in order to provide an understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these details. One skilled in the art will recognize that embodiments of the present invention, some of which are described below, may be incorporated into a number of systems.
The various embodiments of the present invention provide a computer implemented platform for providing a plurality of contents to an augmented reality device and method thereof.
Furthermore, connections between components and/or modules within the figures are not intended to be limited to direct connections. Rather, these components and modules may be modified, re-formatted or otherwise changed by intermediary components and modules.
References in the present invention to "one embodiment" or "an embodiment" mean that a particular feature, structure, characteristic, or function described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.
In one of the implementations, the present invention discloses a computer implemented method for providing a plurality of contents to an augmented reality device. The plurality of contents are augmented reality contents, i.e. users of interest information and their preferences for a product in the vision of a user, while the user is at a pre-defined location and looking at products. The method includes identifying location of a user associated with the augmented reality device. The method includes identifying at least one product from the identified location. Further, the method includes determining users of interest information associated with the user, and at least one interest information of the users of interest information. Subsequently, the method includes storing, in a database, details related to a plurality of products, pre-determined users of interest information, the determined users of interest information, and the determined interest information of the users of interest information. Furthermore, the method includes mapping the at least one identified product with at least one stored information and generating preference information based on the mapped data. Upon generation of the preference information, the method further includes transmitting the preference information based on the mapped data, and then highlighting the product from the preference information on a display area coupled with the augmented reality device.
In another implementation, the step of identifying the location further includes capturing a real-time view of the user associated with the augmented reality device. Subsequently, the step of identifying the location includes recognizing text and image data from the captured view and then identifying the location of the user. Further, the step of identifying the location includes detecting the presence of the user in a pre-defined location, and wherein the pre-defined location is a geo-tagged location.
In another implementation, the step of determining the users of interest information further includes a step of crawling, through the web, at least one of data related to the associated user's activities, images, social media feeds, call log details, SMS details, product browsing history, e-commerce purchase history, product reviews, and likes or dislikes of one or more products. Subsequently, the step of determining the users of interest information includes analyzing the crawled data and identifying texts, the presence of other users in at least one image, and their associated activities. In an embodiment, the users of interest information includes a plurality of information about the user and associated one or more users. The step of determining the users of interest information further includes analyzing the call log details and/or the SMS details, and then determining the frequency of calls and/or SMS of the user with the associated users. In one embodiment, the step of determining the users of interest information further includes providing one or more determined users of interest information in the form of a timeline.
In another implementation, the method includes a step of ranking users of interest information of associated users based on the crawled data.
In another implementation, the method includes a step of identifying preference information of at least one product. The step of identifying preference information further includes extracting the users of interest information from the database and the crawled data related to the product. Subsequently, the step of identifying preference information includes categorizing a plurality of products based on the extracted users of interest information and the crawled data related to the product and selecting a preferred category of each of the products based on the extracted users of interest information and the crawled data related to the product. The step of identifying preference information further includes analyzing a context of the products in the form of time of the year, weather, special occasion, and other occasions, and then generating contextual category wise product preferences for the user. In an embodiment, the contextual category wise product preferences are generated based on the category of the identified product.
In another implementation, the step of crawling through web further includes extracting attributes of the identified product and comparing the extracted attributes with the crawled data.
In another implementation, the method includes a step of identifying at least one object from the identified location. The object includes a plurality of products, a collection of products, and a similar or dissimilar set of products.
In another implementation, the method includes a step of learning a style of the associated user from at least one crawled image. The step of learning a style further includes estimating an articulated pose by spatially quantizing body part regions of the associated users. The step of learning a style includes isolating at least one portion of the image as a binary mask by using a global clothing probability map. Subsequently, the step of learning a style includes segmenting probable body part regions into visually coherent regions and clustering the segmented regions into a single non-connected region. Further, the step of learning a style includes classifying the clothing type for the clustered region.
In another implementation, the method includes a step of receiving an input from the user associated with the augmented reality device. The step of receiving the input further includes manipulating the users of interest information and the at least one interest information of the users of interest information. The step of receiving the input further includes removing the users of interest information and the at least one interest information of the users of interest information. The step of receiving the input includes prioritizing the users of interest information and the at least one interest information of the users of interest information. The step of receiving the input includes reordering the users of interest information and the at least one interest information of the users of interest information. The input is received in the form of touch, eye gaze, head movement, voice command, and/or gesture.
In another implementation, the present invention discloses a computer implemented platform for providing a plurality of contents to an augmented reality device. The platform includes a memory, a processor, a client module, and a server module. The memory is configured to store a set of pre-determined rules. The processor is configured to generate system processing commands. The client module includes a location identifier. The location identifier is configured to identify location of a user associated with the augmented reality device. The server includes a product identifier, a determination module, a database, a mapping module, a communication module, and a tagging module. The product identifier is configured to identify at least one product from the identified location. The determination module is configured to determine users of interest information associated with the user, and at least one interest information of the users of interest information. The database is configured to store details related to a plurality of products, pre-determined users of interest information, the determined users of interest information, and the determined interest information of the users of interest information. The mapping module is configured to map the at least one identified product with at least one stored information and generate preference information based on the mapped data. The communication module is configured to transmit the generated preference information to the augmented reality device. The tagging module is configured to highlight the product from the preference information on a display area coupled with the augmented reality device.
In an implementation, the location identifier further includes a capturing module and a recognition module. The capturing module is configured to capture a real-time view of the user associated with the augmented reality device. The recognition module is configured to recognize text and image data from the captured view and identify the location of the user. The location identifier further includes a proximity module configured to detect the presence of the user in a pre-defined location. In an embodiment, the pre-defined location is a geo-tagged location.
In another implementation, the determination module includes a web crawler. The web crawler is configured to crawl, through the web, at least one of data related to other users' activities, images, postings on social media feeds, call log details, SMS details, product browsing history, e-commerce purchase history, product reviews, and likes or dislikes of one or more products. The determination module further includes an image analyzer configured to analyze the crawled data and identify texts, the presence of other users in at least one image, and their associated activities. In an embodiment, the users of interest information include a plurality of information about the user and associated one or more users. Additionally, the determination module further includes a data analysis module configured to analyze the call log details and/or the SMS details, and further configured to determine the frequency of calls and/or SMS of the user with the associated users. In one embodiment, one or more determined users of interest information is provided in the form of a timeline.
In another implementation, the server includes a ranking module configured to rank users of interest information based on the crawled data.
In another implementation, the server includes a product preference generation module configured to identify preference information of at least one product. The product preference generation module further includes an extraction module, a categorization module, a selection module, a context extraction module, and a preference generation module. The extraction module is configured to extract the users of interest information from the database and the crawled data related to the product. The categorization module is configured to categorize a plurality of products based on the extracted users of interest information and the crawled data related to the product. The selection module is configured to select a preferred category of each of the products based on the extracted users of interest information and the crawled data related to the product. The context extraction module is configured to analyze a context in the form of time of the year, weather, special occasion, and other occasions. The preference generation module is configured to generate contextual category wise product preferences for the user. In an embodiment, the contextual category wise product preferences are generated based on the category of the identified product.
In another implementation, the server includes a visual extraction module configured to extract attributes of the identified product and compare the extracted attributes with the crawled data. In an embodiment, the attributes are visual attributes or physical attributes.
In another implementation, the server includes an object identifier. The object identifier is configured to identify at least one object from the identified location received from the location identifier of the client module. In an embodiment, the object includes a plurality of products, a collection of products, and a similar or dissimilar set of products.
In another implementation, the server includes a learning module configured to learn a style of the associated user from at least one crawled image. The learning module includes an estimation module, an isolation unit, a segmentation module, a clustering module, and a classifier. The estimation module is configured to estimate an articulated pose by spatially quantizing body part regions of the associated user. The isolation unit is configured to isolate at least one portion of the image as a binary mask by using a global clothing probability map. The segmentation module is configured to segment probable body part regions into visually coherent regions. The clustering module is configured to cluster the segmented regions into a single non-connected region. The classifier is configured to classify the clothing type for the clustered region.
In another implementation, the client module includes a user input module configured to receive an input from the user associated with the augmented reality device. The user input module is further configured to manipulate the users of interest information and the at least one interest information of the users of interest information. The user input module is further configured to remove the users of interest information and the at least one interest information of the users of interest information. Further, the user input module is configured to prioritize the users of interest information and the at least one interest information of said users of interest information. The user input module is configured to reorder the users of interest information and the at least one interest information of the users of interest information. In an embodiment, the received input is in the form of touch, eye gaze, head movement, voice command, and gesture.
Figure 1 illustrates a schematic diagram depicting a computer implemented platform for providing a plurality of contents to an augmented reality device in a client-server arrangement (108) (hereinafter referred to as the "platform"), according to an exemplary implementation of the present invention. The platform (108) is communicatively coupled with an augmented reality device (102) via a network (106). In an embodiment, the augmented reality device (102) includes a display area (104a), a positioning module (104b), a control unit (104c), an output unit (104d), and sensors (104e). In an embodiment, the augmented reality device (102) can be a wearable glass. In one embodiment, the display area (104a) is a display screen. In another embodiment, the positioning module (104b) may include a Global Positioning System (GPS). The control unit (104c) is configured to control the augmented reality device (102). In an embodiment, the control unit (104c) is configured to cooperate with the display area (104a), the output unit (104d), and the sensors (104e), and further configured to control the display area (104a), the output unit (104d), and the sensors (104e). For example, the control unit (104c) may receive data from the sensors (104e) or the positioning module (104b), analyze the received data, and output the contents through at least one of the display area (104a) or the output unit (104d). In an embodiment, the output unit (104d) may be a speaker, a headphone, or an earphone that can be worn on the ears of the user. The sensors (104e) are configured to sense motions and actions of the user. In an embodiment, the sensors (104e) include an acceleration sensor, a tilt sensor, a gyro sensor, a three-axis magnetic sensor, and a proximity sensor. In one embodiment, the network (106) includes wired and wireless networks. Examples of the wired networks include a Wide Area Network (WAN), a Local Area Network (LAN), a client-server network, a peer-to-peer network, and so forth. Examples of the wireless networks include Wi-Fi, a Global System for Mobile communications (GSM) network, a General Packet Radio Service (GPRS) network, an enhanced data GSM environment (EDGE) network, 802.5 communication networks, Code Division Multiple Access (CDMA) networks, or Bluetooth networks.
The platform (108) includes a memory (110), a processor (112), a client module (114), and a server (116).
The memory (110) is configured to store pre-determined rules related to identification of location, data extraction, determination of information, mapping, recognition of texts and images, and ranking information. The memory (110) is also configured to store pre-defined locations. In an embodiment, the memory (110) can include any computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The memory (110) also includes a cache memory to work with the platform (108) more effectively.
The processor (112) is configured to cooperate with the memory (110) to receive the pre-determined rules. The processor (112) is further configured to generate platform processing commands. In an embodiment, the processor (112) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the at least one processor (112) is configured to fetch the pre-determined rules from the memory (110) and execute different modules of the platform (108).
The client module (114) is configured to cooperate with the processor (112). Figure 2 illustrates a schematic diagram depicting a client module (114), according to an exemplary implementation of the present invention.
The client module (114) includes a location identifier (202). The location identifier (202) is configured to identify the location of a user associated with the augmented reality device (102). The location identifier (202) includes a capturing module (204), a recognition module (206), and a proximity module (208). The capturing module (204) is configured to capture a real-time view of the user associated with the augmented reality device (102). In an embodiment, the capturing module (204) can be a camera or a scanner. The recognition module (206) is configured to cooperate with the capturing module (204) to receive the captured view. The recognition module (206) is further configured to recognize text and image data from the captured view and identify the location of the user. In one embodiment, the recognition module (206) is configured to recognize a scene from the captured view to identify object arrangements. The scene recognition includes activity/event recognition and scene text recognition. In another embodiment, the recognition module (206) is configured to extract shopping center names from the captured view using a scene text recognition (STR) technique. The recognition module (206) is configured to recognize the activity/event by using deep-learning-based three-dimensional convolutional neural network (3D-CNN) and Recurrent Neural Network (RNN) models. The recognition module (206) is configured to recognize scene texts by using sequence modelling techniques, such as a Bidirectional Long Short-Term Memory (LSTM).
In the present implementation, for recognizing the activity/event, the recognition module (206) is configured to receive the captured view from the capturing module (204) and apply deep learning models to the captured view. To enable computation and end-to-end training, the proposal and classification sub-networks share three-dimensional feature maps. A proposal subnet predicts variable-length temporal segments that potentially contain activities, while a classification subnet classifies these proposals into specific activity categories or background, and further refines the proposal segment boundaries. The model further extends the two-dimensional RoI (Region of Interest) pooling in Faster R-CNN to 3D RoI pooling, for extracting features at various resolutions for variable-length proposals.
In another implementation, for recognizing scene texts, the recognition module (206) is configured to receive the captured view from the capturing module (204) and apply the sequence modeling technique. In the sequence modeling technique, the recognition module (206) recognizes text data from the captured view and maps convolutional features in a convolutional layer. The convolutional layer extracts a feature sequence from each input image. In a recurrent layer, the recognition module (206) generates feature sequence data based on the mapped convolutional features and analyzes the sequence data using the deep bidirectional LSTM technique. More particularly, the recognition module (206) is configured to make a prediction for each frame of the feature sequence data. Subsequently, in a transcription layer, a per-frame prediction technique is applied to the analyzed data to predict the sequence. The transcription layer translates the per-frame predictions made by the recurrent layer into a label sequence; a hedged sketch of this design follows.
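As a rough illustration of this convolutional-recurrent design, the PyTorch sketch below mirrors the three layers named above: convolutional feature extraction, a bidirectional LSTM over the frame sequence, and a per-frame prediction head whose outputs a transcription layer (e.g. CTC decoding, omitted here) would translate into a label sequence. The layer sizes are assumptions, not the patent's model.

```python
import torch
import torch.nn as nn

class SceneTextRecognizer(nn.Module):
    """Minimal CRNN-style sketch: conv features -> BiLSTM -> per-frame scores."""
    def __init__(self, num_classes: int, channels: int = 256, hidden: int = 128):
        super().__init__()
        self.conv = nn.Sequential(                      # convolutional layer
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, channels, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),            # collapse height, keep width
        )
        self.rnn = nn.LSTM(channels, hidden, bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, num_classes)  # per-frame predictions

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.conv(x)                        # (B, C, 1, W') feature maps
        seq = f.squeeze(2).permute(0, 2, 1)     # feature sequence (B, W', C)
        out, _ = self.rnn(seq)                  # deep bidirectional LSTM
        return self.head(out)                   # scores for the transcription layer

# e.g. a 32 x 128 grayscale text-line crop; 37 classes (a-z, 0-9, CTC blank)
scores = SceneTextRecognizer(num_classes=37)(torch.randn(1, 1, 32, 128))
```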
In the present implementation, the location identifier (202) includes a proximity module (208) configured to detect the presence of the user in a pre-defined location. The pre-defined location is a geo-tagged location. In an embodiment, the pre-defined location is a shopping location. The proximity module (208) is configured to determine if the user is near to any shopping location tagged in a pre-defined map or a navigator. The pre-defined map can be a Google Map.
Figure 4 illustrates a schematic diagram depicting identifying shopping location of a user (400), according to an exemplary implementation of the present invention. The location identifier (202) is configured to identify that the user is at a shopping location (402) by using identification techniques, i.e. user proximity with a geo-tagged shopping location (404), scene understanding of product arrangements (406), and scene text recognition for identifying shopping center names (408).
In an embodiment, the client module (114) further includes a user input module (210). Further, the user input module (210) is configured to receive an input from the user associated with the augmented reality device (102). The user input module (210) is configured to receive a user input and perform corresponding actions associated with the user input. The received input is in the form of touch, eye gaze, head movement, voice command, and gesture.
In another implementation, the client module (114) includes an assistance module (not shown in the figure). The assistance module is configured to enable and disable a shopping assistance mode. Once the location identifier (202) identifies that the user is in the shopping location, the assistance module enables the shopping assistance mode. If the shopping assistance mode is enabled, the capturing module (204) captures the real-time view of the user. The assistance mode facilitates visualization of the augmented reality shopping assistance mode to the user in the augmented reality device (102). The capturing module (204) captures images at a regular interval. In an embodiment, the user can provide a touch input on the augmented reality device (102) to invoke the assistance mode. In one embodiment, the user can provide gesture inputs, which can be detected by the augmented reality device (102), to invoke the assistance mode.
In another implementation, the client module (114) includes an alert generation module (not shown in figure). The alert generation module is configured to generate an alert signal, if the modules are not working correctly, and transmits the alert signal to a user device associated with the user.
In another implementation, the server (116) is configured to cooperate with the processor (112) and the client module (114). Figure 3 illustrates a schematic diagram depicting a server, according to an exemplary implementation of the present invention.
The server (116) includes a product identifier (302), a determination module (304), a database (312), a mapping module (314), a communication module (316), and a tagging module (318). The product identifier (302) is configured to identify at least one product from the identified location received from the location identifier (202) of the client module (114).
In another implementation, the server (116) includes an object identifier (324). The object identifier (324) is configured to identify at least one object from the identified location received from the location identifier (202) of the client module (114). The object includes a plurality of products, a collection of products, and a similar or dissimilar set of products. In an embodiment, the product identifier (302) is configured to cooperate with the object identifier (324) to receive the identified object. The product identifier (302) is further configured to identify at least one product from the identified object. In one embodiment, the object can be background images, signs, texts, and the like.
The determination module (304) is configured to determine users of interest information associated with the user, and at least one interest information of the users of interest information. In an embodiment, the determination module (304) includes a web crawler (306), an image analyzer (308), and a data analysis module (310). The web crawler (306) is configured to crawl through the web at least one of data related to other users' activities, images, postings on social media feeds, call log details, SMS details, product browsing history, e-commerce purchase history, product reviews, and likes or dislikes of one or more products.
Figure 5 illustrates a schematic diagram depicting determining users of interest information (500), according to an exemplary implementation of the present invention. The users of interest information is determined by crawling the data related to socially connected associated users (502), close members based on a closeness factor and a relation type (510), members having a special occasion (520), and previous history (530). The web crawler (306) is configured to crawl the data related to socially connected users from social sites (504) such as Facebook, Twitter, Instagram, Pinterest, and the like, a contact list (506) of the user stored in a mobile phone, and event images (508) such as birthday events, travel, festive events, and the like. Further, the web crawler (306) is configured to crawl the data related to close members by analyzing the frequency of social interaction (512), the frequency of calls/SMS interactions (514), the types of interactions from SMS content (516), and images (518). The types of interactions (516) are determined by a formal or business tone or an informal tone. The images (518) include profile pictures of the associated users. Furthermore, the web crawler (306) is configured to crawl the data related to members having special occasions from wishes (522), posts (524), and user calendar events (528). The wishes (522) can include birthday wishes, engagement wishes, anniversary wishes, achievement wishes, and the like. The posts (524) can include travel plans, life goals, and the like. Additionally, the web crawler (306) is configured to crawl the data related to the previous history (530) from previous occasions of gift purchase (532), and no gift for a long time to family members (534). Based on the crawled data, a target user for a gift (536) is determined.
In another implementation, the image analyzer (308) is configured to cooperate with the web crawler (306) to receive the crawled data. The image analyzer (308) is further configured to analyze the crawled data and identify texts, the presence of other users in at least one image, and their associated activities. In an embodiment, the users of interest information includes a plurality of information about the user and the associated one or more users.
In another implementation, the determination module (304) further includes a data analysis module (310) configured to cooperate with the web crawler (306) to receive the crawled data. The data analysis module (310) is further configured to analyze the call log details and/or the SMS details, and determine the frequency of calls and/or SMS of the user with the associated users. In an embodiment, the data analysis module (310) is configured to analyze mobile data including call log data and user messages data. To differentiate the closeness of the user with its associated users, three factors are considered, i.e. the frequency of call logs and messages, the temporal patterns of calls and messages, and the aspect of the content of messages (business aspect, friendly aspect, and family aspect).
Figure 6 illustrates a graphical view depicting a call pattern and frequency over a time period, according to an exemplary implementation of the present invention. The call pattern identifies the continuity of the user being in touch with the associated users over a time period, and the frequency shows how often the user calls or messages on average. In an exemplary embodiment, the call pattern and frequency are analyzed based on determining a regular pattern with high average frequency, a regular pattern with low average frequency, and an irregular pattern within the pre-defined time period (for example, Month 1, Month 2, Month 3, and Month 4). In an embodiment, the data analysis module (310) is configured to determine the degree of association based on the determined frequency of calls and/or SMS of the user with the associated users. The degree of association is determined by the following formula:

Degree of Association (D) = w_d × F_d + w_w × F_w + w_m × F_m

where F_d, F_w, and F_m are the daily, weekly, and monthly frequencies of calls and/or SMS with an associated user, and w_d, w_w, and w_m are the corresponding weightages.
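A small illustrative sketch of the weighted degree-of-association computation reconstructed above is given below; the weightage and frequency values are assumptions for explanation only.

```python
# Illustrative sketch of the weighted degree-of-association computation;
# the weightages and frequencies are assumed values, not disclosed ones.
def degree_of_association(f_daily, f_weekly, f_monthly,
                          w_daily=0.5, w_weekly=0.3, w_monthly=0.2):
    """Weighted combination of call/SMS frequencies with an associated user."""
    return w_daily * f_daily + w_weekly * f_weekly + w_monthly * f_monthly

# Example: 4 calls/day, 20 calls/week, 70 calls/month on average.
print(degree_of_association(4, 20, 70))  # 22.0
```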
In one embodiment, weightages for the daily frequency, the weekly frequency, and the monthly frequency are pre-set by an expert, or are calculated using machine learning techniques. The machine learning techniques include linear regression, logistic regression, principal component analysis (PCA), and neural network techniques. In an exemplary embodiment, the weightages are calculated using the linear regression technique by using the following hypothesis function:

h(F_d, F_w, F_m) = w_d × F_d + w_w × F_w + w_m × F_m + b

where w_d, w_w, and w_m are the weightages to be learned, F_d, F_w, and F_m are the daily, weekly, and monthly frequencies, and b is a constant (bias) term. This hypothesis function is learned through the linear regression technique by minimizing the cost function (the difference between the actual, i.e. observed, value and the predicted value). For learning purposes, training data is required containing the daily, weekly, and monthly frequencies along with the observed value of the degree of association (D). This training data can be obtained from the past history of call log data.
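The following sketch illustrates, under assumed training data, how the weightages and the constant b could be learned by ordinary least-squares linear regression; the frequencies and observed values are hypothetical.

```python
# A minimal sketch of learning the weightages by linear regression, assuming
# hypothetical training data of (daily, weekly, monthly) frequencies and
# observed degree-of-association values derived from past call logs.
import numpy as np

# Each row: [daily_freq, weekly_freq, monthly_freq]; y: observed association.
X = np.array([[4, 20, 70], [1, 5, 18], [7, 40, 150], [0, 2, 6]], dtype=float)
y = np.array([22.0, 5.5, 45.0, 1.5])

# Solve the least-squares problem with an added bias column (the constant b).
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
weights, *_ = np.linalg.lstsq(Xb, y, rcond=None)
w_d, w_w, w_m, b = weights
print(f"learned weightages: w_d={w_d:.3f}, w_w={w_w:.3f}, w_m={w_m:.3f}, b={b:.3f}")
```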
In another embodiment, the pattern finds the occurrence of events periodically. The degree of periodicity is categorized into three categories, i.e. high regularity, medium regularity, and low regularity, and the degree of association of the user based on the pattern of calls and/or SMS is defined according to these regularity categories.
In yet another embodiment, the aspect of the message provides the emotional associations between the users. The aspect of messages is determined using text classification techniques. The text classification techniques include Convolutional Neural Network (CNN) based binary text classification, bag of words, support vector machine (SVM), shallow neural networks, and the like. These techniques are also used to extract the aspect of each of the messages. Figure 7 illustrates a schematic diagram depicting degree of association of a user based on a pattern, according to an exemplary implementation of the present invention. In Figure 7, the degree of association of a user based on messages pattern is determined by using a Convolutional Neural Network (CNN) for natural language processing. Each sentence of the messages (702) is represented in the form of an N × k matrix (based on a Word2Vec model), i.e. the words are mapped to embedding vectors and the sentence is considered as a matrix input. Convolutions are performed across the input word-wise using differently sized kernels, such as 2 or 3 words at a time, on a convolution layer (704). The resulting feature maps are then processed using a max pooling layer (706) and a fully connected layer with an output layer (sigmoid) (708) to condense or summarize the extracted features.
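By way of illustration, a minimal Keras sketch of such a CNN-based binary text classifier is shown below; the vocabulary size, sequence length, embedding width, and filter counts are illustrative assumptions, not values from the disclosure.

```python
# A minimal sketch of a CNN-based binary text classifier: embedded sentence
# matrix -> word-wise convolutions (kernel sizes 2 and 3) -> max pooling ->
# fully connected layer -> sigmoid output. Hyper-parameters are assumed.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, SEQ_LEN, EMB = 10000, 50, 128  # assumed hyper-parameters

inputs = layers.Input(shape=(SEQ_LEN,), dtype="int32")
x = layers.Embedding(VOCAB, EMB)(inputs)          # N x k sentence matrix
# Word-wise convolutions with differently sized kernels (2 and 3 words).
convs = [layers.GlobalMaxPooling1D()(layers.Conv1D(64, k, activation="relu")(x))
         for k in (2, 3)]
x = layers.Concatenate()(convs)
x = layers.Dense(32, activation="relu")(x)        # fully connected layer
outputs = layers.Dense(1, activation="sigmoid")(x)  # sigmoid output layer

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```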
The database (312) is configured to store details related to a plurality of products, pre-determined users of interest information, the determined users of interest information, and the determined interest information of the users of interest information. In an embodiment, the database (312) includes a look-up table configured to store details related to each of the products, the pre-determined users of interest information, the determined users of interest information, and the determined interest information of the users of interest information. In an embodiment, the database (312) can be implemented as an enterprise database, a remote database, a local database, and the like. The database (312) can be located within the vicinity of the server (116) or can be located at a different geographic location from that of the server (116). Further, portions of the database (312) may be located either within the vicinity of each other or at different geographic locations. Furthermore, the database (312) may be implemented inside the server (116), and the database (312) may be implemented as a single database.
The mapping module (314) is configured to cooperate with the product identifier (302) and the database (312) to receive the at least one identified product and the stored details. The mapping module (314) is further configured to map the at least one identified product with at least one stored information and generate preference information based on the mapped data. In an embodiment, the server (116) generates a notification based on the generated preference information for recommending the preferred products. In another embodiment, the notification is in the form of, but is not limited to, a text-based notification, an icon-based notification, a pop-up based notification, a notification on a secondary device, and a notification as SMS.
The communication module (316) is configured to cooperate with the mapping module to receive the generated preference information. The communication module (316) is further configured to transmit the generated preference information to the augmented reality device (102).
The tagging module (318) is configured to highlight the product from the preference information on the display area (104) coupled with the augmented reality device (102).
In an embodiment, the server (116) includes a ranking module (320). The ranking module (320) is configured to cooperate with the determination module (304) to receive the users of interest information and the crawled data, and is further configured to rank the users of interest information based on the crawled data.
Figure 8 illustrates a schematic diagram depicting overall ranking of associated users for identifying closeness (802), according to an exemplary implementation of the present invention. In an embodiment, the ranking module (320) is configured to rank the users of interest information by analyzing the crawled image data, mobile data (call log details and SMS details), and social media data.
Ranking based on image data analysis: The users of interest information is ranked by determining the degree of association based on the image data analysis (808) using a relative participation frequency. The degree of association (808) is defined as:

D_image(u) = E_u / E_total

where E_u is the number of distinct events or activities in which the associated user u appears in the analyzed photos, and E_total is the total number of events or activities identified.
In this, there is no differentiation among event types extracted after the image data analysis. In an embodiment, priorities of each of the events can be pre-set for determining the relative participation frequency. Based on the scene understanding and a user identification in the image, the ranking module (320) determines which associated users are more frequently present in personal photos saved in a user device or social media photos. The associated users present in photos corresponding to a specific event or an activity are considered for estimation of the degree of association. In an embodiment, the specific events include, but are not limited to, birthday celebrations, anniversary celebrations, dining out, vacation trips, and festivals. In one embodiment, there can be multiple photos of the same person in the specific event. Thus, the frequency is estimated based on the presence of the associated user(s) in different events. In another embodiment, the image data analysis includes face recognition of the associated users, identification of each of the associated users, and scene understanding to identify events. In yet another embodiment, each of the associated users is identified using a deep learning-based framework. The deep learning-based framework includes a single shot detection (SSD) technique. Once the associated user is identified, face recognition is performed using a deep convolutional network technique, i.e. VGG 16.
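The following sketch illustrates the relative-participation-frequency estimate with hypothetical event data, counting each associated user's presence once per event as described above.

```python
# Illustrative sketch of the relative-participation-frequency ranking;
# the event/person data is hypothetical. Multiple photos of the same person
# in one event count once, per the paragraph above.
from collections import defaultdict

# Hypothetical output of scene understanding + face recognition:
# event id -> set of associated users recognized in that event's photos.
events = {
    "birthday_2019": {"alice", "bob"},
    "vacation_goa": {"alice"},
    "festival_diwali": {"alice", "carol"},
}

def relative_participation(events):
    total = len(events)
    counts = defaultdict(int)
    for people in events.values():
        for person in people:      # presence counted per event, not per photo
            counts[person] += 1
    return {p: c / total for p, c in counts.items()}

print(relative_participation(events))
# {'alice': 1.0, 'bob': 0.333..., 'carol': 0.333...}
```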
Ranking based on mobile data analysis: The users of interest information is ranked by determining the degree of association based on mobile data analysis (804). The degree of association (804) is determined from the three mobile data factors described above, i.e. the frequency of calls and messages, the temporal pattern of calls and messages, and the aspect of the message content.
Ranking based on social media data analysis: The users of interest information is ranked by determining the degree of association based on the social data analysis (806). The degree of association is determined by analyzing the user's interactions which are directed to each of the associated users. The interactions are analyzed by identifying likes on content posted by an associated user, tagging of the associated user in a photo, and comments posted on a social wall or on a post of the associated user. In an embodiment, the aggregate frequency of interactions is proportionate to the social closeness with each of the associated users. The users of interest information is ranked by determining the relative interaction frequency using the following formula:

D_social(u) = I_u / I_total

where I_u is the number of the user's interactions (likes, tags, and comments) directed to the associated user u, and I_total is the total number of such interactions directed to all associated users.
The overall ranking of the associated users for identifying closeness (802) is determined by combining the degrees of association obtained from the image data, mobile data, and social media data analyses, for example as a weighted combination:

User Association Ranking (R) = α × D_image + β × D_mobile + γ × D_social

where α, β, and γ are the weightages assigned to the image data, mobile data, and social media data analyses, respectively.
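A short sketch of combining the three per-channel degrees of association into an overall closeness ranking is given below; the weightages and the per-channel scores are assumptions for explanation only.

```python
# Sketch of the overall closeness ranking as a weighted combination of the
# three normalized per-channel degrees of association; weights are assumed.
def overall_rank(d_image, d_mobile, d_social, a=0.3, b=0.4, c=0.3):
    """Weighted combination of the image, mobile, and social scores."""
    return a * d_image + b * d_mobile + c * d_social

scores = {"alice": overall_rank(1.0, 0.8, 0.6), "bob": overall_rank(0.33, 0.2, 0.1)}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # ['alice', 'bob']
```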
In an embodiment, the server (116) further includes a product preference identification module (326) configured to cooperate with the determination module (304) and the database (312). The product preference identification module (326) is configured to identify preference information of at least one product. The product preference identification module (326) includes an extraction module (328), a categorization module (330), a selection module (332), a context extraction module (334), and a preference generation module (336). The extraction module (328) is configured to extract the users of interest information from the database (312) and the crawled data related to the product. The categorization module (330) is configured to cooperate with the extraction module (328). The categorization module (330) is further configured to categorize a plurality of products based on the extracted users of interest information and the crawled data related to the product. The selection module (332) is configured to cooperate with the categorization module (330) and the extraction module (328). The selection module (332) is configured to select a preferred category of each of the products based on the extracted users of interest information and the crawled data related to the product. The context extraction module (334) is configured to cooperate with the extraction module (328). The context extraction module (334) is further configured to analyze a context in the form of time of the year, weather, special occasion, and other occasions. The preference generation module (336) is configured to cooperate with the categorization module (330), the context extraction module (334), and the selection module (332). The preference generation module (336) is configured to generate contextual category wise product preferences for the user.
Figure 9 illustrates a schematic diagram depicting identification of preference information (900), according to an exemplary implementation of the present invention. The product preference identification module (326) is configured to identify product preferences by extracting the user's universal preferences (902) based on the crawled data, including analysis of the user's publicly available images (904), purchase and browsing history (906) of e-commerce sites, and product reviews (908) given by the user, for identifying the preference information. In an embodiment, the product preference identification module (326) is further configured to extract the user's preferences based on the current context (910). The context includes two contextual features, i.e. time of the year (912), and festive occasion (914). In an embodiment, the time of the year (912) is extracted from the time stamp of the purchase history of the product. In one embodiment, the festive occasion (914) is extracted using third party services, which provide the nearby festive season, or using a festive calendar. Further, the product preference identification module (326) is configured to extract product availability (916) including product types available in the shop (918) and previously purchased items in a similar context (920). Based on the user's preferences, the context, and the product availability, a target product or gift (922) is identified.
Figure 10 illustrates a block diagram depicting a learning model, according to an exemplary implementation of the present invention. The learning model (1000) is configured to extract and select a preferred category of each of the products. The learning model (1000) includes training data (1002), a text pre-processing module (1004), a feature extraction module (1012), an SVM classifier (1018), and a trained model (1020). The training data (1002) is open training text data, which is crawled from the web. The text pre-processing module (1004) is configured to receive the training data, apply techniques on the training data, and generate a feature vector that separates the text into individual words. The techniques include stop word removal (1006), stemming (1008), and lemmatization techniques (1010). The feature extraction module (1012) is configured to cooperate with the text pre-processing module (1004) to receive the feature vector, and is further configured to extract features by using term frequency and inverse document frequency (TF × IDF) (1014), and calculate a TF × IDF score. In an embodiment, the term frequency (TF) and the inverse document frequency (IDF) are defined by:
Term Frequency: TF(t, d) = (number of times term t appears in document d) / (total number of terms in document d)

Inverse Document Frequency: IDF(t) = log(N / n_t)

where N is the total number of documents present in the data, and n_t is the number of documents in which the term t appears.
Subsequently, the feature extraction module (1012) is configured to select words above a threshold based on the TF × IDF score (1016). The SVM classifier (1018) is configured to determine an offline preference category. Further, the SVM classifier (1018) is configured to predict classes for a dataset consisting of a feature set and a label set. In an embodiment, the learning model (1000) is not limited to the SVM classifier (1018), but also includes Convolutional Neural Network (CNN) based classifiers, bagging models, Naive Bayes classifiers, etc. In an embodiment, determining the offline preference category is a one-time process. After determining the preference category, the trained model (1020) is generated, which is used to extract a user preference category from the user's purchase history, browsing query history, and product reviews given by the user on review sites.
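By way of illustration, the following scikit-learn sketch combines TF × IDF feature extraction with an SVM classifier on a tiny assumed corpus; the documents and category labels are hypothetical, and the stemming and lemmatization steps are omitted for brevity.

```python
# Minimal sketch of the TF x IDF + SVM pipeline: stop word removal,
# TF x IDF scoring, and SVM classification into preference categories.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

docs = [
    "bought new denim trousers great fit",
    "amazing wireless headphones battery life",
    "cotton shirt soft fabric nice colour",
    "smartphone camera low light photos",
]
labels = ["apparel", "gadgets", "apparel", "gadgets"]  # assumed categories

vectorizer = TfidfVectorizer(stop_words="english")  # stop word removal
X = vectorizer.fit_transform(docs)                  # TF x IDF scores

clf = LinearSVC().fit(X, labels)                    # SVM classifier
query = vectorizer.transform(["leather trousers brand"])
print(clf.predict(query))  # ['apparel']
```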
Figure 11 illustrates a schematic diagram depicting identification of user's contextual preferences (1100), according to an exemplary implementation of the present invention. The user's contextual preferences are identified by using the crawled data including, but not limited to, purchase history text (1102), product reviews (1104), and browsing history or queries (1106). The text pre-processing module (1108) is configured to receive the crawled data, apply techniques on the crawled data, and generate a feature vector that separates the text into individual words. The techniques include stop word removal (1110), stemming (1112), and lemmatization techniques (1114). The feature extraction module (1116) is configured to cooperate with the text pre-processing module (1108) to receive the feature vector, and is further configured to extract features by using term frequency and inverse document frequency (TF × IDF) (1118), and calculate a TF × IDF score. Subsequently, the feature extraction module (1116) is configured to select words above a threshold based on the TF × IDF score (1120). The classification model (1122) is configured to receive the extracted features from the feature extraction module (1116) and the user's contextual preference (1124), and is further configured to classify the user's preferences and create a user profile, and then correlate the user profile with the extracted context received from the context extraction module (334) to provide an enhanced user's contextual preference (1126). In an exemplary embodiment, the user's preferences are classified based on the categories which a preference belongs to, for example clothes, watches, accessories, bags, shoes, and the like. The user's contextual preferences are stored in the form of a user preference ontology.
Figures 12a, 12b, and 12c illustrate schematic diagrams depicting a contextual preferences ontology, a product ontology, and a user's contextual product preferences ontology, respectively, according to an exemplary implementation of the present invention. In the contextual preferences ontology (1200a), as illustrated in Figure 12a, a context in the form of time of the year and festive occasion is stored. For example, the time of the year includes summer season, rainy season, or winter season, and the festive occasion includes New Year, Christmas, Valentine's Day, Halloween, and the like. In the product ontology (1200b), as illustrated in Figure 12b, a category wise gift product is stored. For example, the gift product can be apparel, educational, apparel accessories, and gadgets. The apparel can be trousers or shirts. The trousers are further classified in terms of an attribute and a brand. The attribute can be a texture such as denim, cotton, and the like, and the brand can be Arrow, Levis, and the like. In another example, the apparel accessory is a scarf or a hat. The scarf has attributes such as color and texture. The user's contextual product preferences ontology (1200c), as illustrated in Figure 12c, stores the user's contextual preference using an N-ary relation approach. The contextual product preferences ontology (1200c) stores details of a user including name, preferences, time of the year, probability score, product attributes, festive occasion, and the like. The user's contextual product preferences ontology (1200c) is generated by combining the contextual preferences ontology (1200a) and the product ontology (1200b). In an embodiment, the probability score is calculated by using the following formula:

P(preference | context) = (P(context | preference) × P(preference)) / P(context)
In an embodiment, the mapping of the user's contextual product preferences (1200c) with the context stored in the contextual preferences ontology (1200a) and the product ontology (1200b) is based on probabilistic reasoning over the user's product preference ontology. Further, an inference about the user's liking of at least one product in a given context is estimated by using a Bayesian inferencing technique. If the probability of the user liking a specific product is estimated to be above a pre-configured threshold, the product can be suggested as a gift item for the target user. In one embodiment, the posterior probability of liking the product is conditioned on the context, and a standard Bayes' probability approach is used to estimate the probability.
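The following sketch illustrates the posterior estimation with hypothetical prior and likelihood values and an assumed pre-configured threshold; the numbers are illustrative stand-ins for estimates derived from a user's purchase history.

```python
# A sketch of the Bayesian inferencing step described above; the prior,
# likelihood, and threshold values are hypothetical assumptions.
def posterior_liking(p_ctx_given_like, p_like, p_ctx):
    """Posterior probability that the user likes the product in this context."""
    return p_ctx_given_like * p_like / p_ctx

# Assumed estimates: user likes scarves in 30% of purchases overall; 60% of
# scarf purchases happened in winter; winter accounts for 25% of purchases.
p = posterior_liking(p_ctx_given_like=0.6, p_like=0.3, p_ctx=0.25)
THRESHOLD = 0.5  # assumed pre-configured threshold
print(p, p > THRESHOLD)  # 0.72 True -> suggest as a gift item
```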
In an embodiment, the server (116) includes a visual extraction module (322). The visual extraction module (322) is configured to extract attributes of the identified product and compare the extracted attributes with the crawled data. In an embodiment, the attributes are visual attributes or physical attributes. In one embodiment, the visual attributes of the product are extracted by capturing color and texture characteristics for multiple segments of the plurality of products. In another embodiment, RGB (red, green, and blue) values of each pixel can be quantized to form a uniform color dictionary, for determining color characteristics. In yet another embodiment, local binary patterns (LBPs) are considered for determining texture characteristics. In an embodiment, the visual extraction module (322) is configured to normalize the color and texture characteristics independently and then concatenate them to provide a final descriptor. In an exemplary embodiment, a similar feature vector can be prepared by identifying apparel and apparel accessories from the user's extracted styles from publicly available photos. In another embodiment, a similarity measure such as cosine similarity or Jaccard similarity can be used to match the product with the user's preferred styles. The cosine similarity is defined as:
cosine similarity (A, B) = (Σ A_i × B_i) / (√(Σ A_i²) × √(Σ B_i²))

where A_i and B_i are the components of vectors A and B, respectively.
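A small sketch of this similarity matching is given below; the descriptor vectors are illustrative stand-ins for the normalized color and texture descriptors described above.

```python
# Sketch of matching a product's colour/texture descriptor against a user's
# preferred-style descriptor with cosine similarity; vectors are assumed.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

product_descriptor = [0.2, 0.7, 0.1, 0.5]       # e.g. colour + LBP texture bins
user_style_descriptor = [0.25, 0.6, 0.05, 0.55]
print(cosine_similarity(product_descriptor, user_style_descriptor))  # ~0.99
```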
In an embodiment, the server (116) includes a learning module (338). The learning module (338) is configured to learn a style of the associated user from the at least one crawled image. The learning module (338) includes an estimation module (340), an isolation unit (342), a segmentation module (344), a clustering module (346), and a classifier (348). The estimation module (340) is configured to estimate an articulated pose by spatially quantizing body part regions of the associated user. The isolation unit (342) is configured to isolate at least one portion of the image as a binary mask by using a global clothing probability map. The segmentation module (344) is configured to segment probable body part regions into visually coherent regions. The clustering module (346) is configured to cluster the segmented regions into a single non-connected region by using an Approximate Gaussian Mixture (AGM) clustering technique. The classifier (348) is configured to classify clothing type for the clustered region by using Nearest Neighbor or support vector machine (SVM) technique.
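By way of illustration, the following sketch approximates the clustering and classification stages of the learning module (338) using scikit-learn's GaussianMixture (standing in for the AGM technique) and a nearest-neighbour classifier; the region features and labels are random stand-ins, and the pose estimation and mask isolation stages are not shown.

```python
# Structural sketch of the style-learning pipeline's last two stages. The
# Approximate Gaussian Mixture step is approximated by GaussianMixture, and
# the inputs are random stand-ins for segmented-region features.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
region_features = rng.random((60, 8))   # stand-in features per segmented region

# Cluster visually coherent regions (AGM approximated by a Gaussian mixture).
gmm = GaussianMixture(n_components=3, random_state=0).fit(region_features)
clusters = gmm.predict(region_features)

# Classify clothing type per cluster with a nearest-neighbour classifier
# trained on hypothetical labelled examples.
train_X = rng.random((30, 8))
train_y = rng.choice(["shirt", "trousers", "scarf"], size=30)
knn = KNeighborsClassifier(n_neighbors=3).fit(train_X, train_y)

centroids = np.array([region_features[clusters == c].mean(axis=0) for c in range(3)])
print(knn.predict(centroids))
```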
In an embodiment, the user input module (210) is configured to manipulate the users of interest information and the at least one interest information of the users of interest information. In one embodiment, the user input module (210) is configured to remove the users of interest information and the at least one interest information of the users of interest information. In another embodiment, the user input module (210) is configured to prioritize the users of interest information and the at least one interest information of the users of interest information. In one embodiment, the user input module (210) is configured to reorder the users of interest information and the at least one interest information of the users of interest information.
In an embodiment, the user(s) can manually select the users of interest information for the product preferences within their field of view. In one embodiment, the user can manually select the categories for the product preferences with the users of interest information.
In an embodiment, the display area (104) of the augmented reality device (102) displays the users of interest information at a specific position, i.e. at the top/bottom side or the left/right side; the users of interest information along with the special occasion; the preference information along with the image of the associated user; and product categories for filtering of the products.
Figure 19a illustrates a use-case scenario depicting an augmented reality shopping assistance mode, according to an exemplary implementation of the present invention. In this scenario, the platform (108) provides augmented reality content on the display area (104) of the augmented reality device (102), i.e. the users of interest information along with special occasions, and highlights the product preference information along with the user's information.
Figure 19b illustrates a use-case scenario depicting an augmented reality shopping assistance mode, according to an exemplary implementation of the present invention. In this scenario, the user is provided with a timeline on which special occasion of users of interest information is mentioned. The user can easily identify which occasion is near and the preferences for easy selection of gifts.
Figure 19c illustrates a use-case scenario depicting an augmented reality shopping assistance mode, according to an exemplary implementation of the present invention. In this scenario, the preferred product along with the physical attributes of the associated user is provided.
Figure 19d illustrates a use-case scenario depicting an augmented reality shopping assistance mode, according to an exemplary implementation of the present invention. In this scenario, the product preferences are highlighted along with the weightage (priority etc.) for easy selection of a gift among the highlighted product.
Figures 13a and 13b illustrate a sequence diagram depicting determining users of interest information, according to an exemplary implementation of the present invention. The sequence diagram of Figures 13a and 13b is described in conjunction with Figure 16 in the paragraphs below.
Figure 14 illustrates a sequence diagram depicting identifying preference information of a product, according to an exemplary implementation of the present invention. The sequence diagram of Figure 14 is described in conjunction with Figure 16 in the paragraphs below.
Figures 15a and 15b illustrate a sequence diagram depicting highlighting a product from the preference information, according to an exemplary implementation of the present invention. The sequence diagram of Figures 15a and 15b is described in conjunction with Figure 16 in the paragraphs below.
Figure 16 illustrates a flowchart (1600) depicting a method for providing a plurality of contents to an augmented reality device (102) in a client-server arrangement, according to an exemplary implementation of the present invention. The sequence diagrams of Figures 13, 14, and 15 are explained with the help of the flowchart (1600) of Figure 16. The flowchart (1600) of Figure 16 is explained below with reference to Figures 1, 2, and 3 as described above.
The flowchart (1600) starts at a step (1602), identifying location of a user associated with an augmented reality device (102). In another embodiment, a location identifier (202) identifies a location of a user associated with the augmented reality device (102). The location is identified by capturing a real-time view of the user associated with the augmented reality device (102), by recognizing text and image data from the captured view, and by detecting the presence of the user in a pre-defined location. The pre-defined location is a geo-tagged location.
At a step (1604), identifying at least one product from the identified location. In another embodiment, a product identifier (302) identifies at least one product from the identified location.
At a step (1606), determining users of interest information associated with the user, and at least one interest information of the users of interest information. In another embodiment, a determination module (304) determines users of interest information associated with the user, and at least one interest information of the users of interest information.
At a step (1608), storing details related to a plurality of products, pre-determined users of interest information, the determined users of interest information, and the determined interest information of the users of interest information. In another embodiment, a database (312) stores details related to a plurality of products, pre-determined users of interest information, the determined users of interest information, and the determined interest information of the users of interest information.
At a step (1610), mapping the at least one identified product with at least one stored information, and generating preference information based on the mapped data. In another embodiment, a mapping module (314) maps the at least one identified product with at least one stored information, and generates preference information based on the mapped data.
At a step (1612), transmitting the preference information to the augmented reality device. In another embodiment, a communication module (316) transmits the preference information to the augmented reality device (102).
At a step (1614), highlighting the product from the preference information on a display area coupled with the augmented reality device (102). In another embodiment, a tagging module (318) highlights the product from the preference information on a display area (104) coupled with the augmented reality device (102).
Figure 17 illustrates a flowchart depicting identifying preference information of at least one product (1700), according to an exemplary implementation of the present invention. The flowchart (1700) of Figure 17 is explained below with reference to Figure 3 as described above.
The flowchart (1700) starts at a step (1702), crawling through the web data related to associated users' activities, images, social media feeds, call log details, SMS details, product browsing history, e-commerce purchase history, product reviews, and likes and dislikes of one or more products. In another embodiment, a web crawler (306) crawls through the web data related to associated users' activities, images, social media feeds, call log details, SMS details, product browsing history, e-commerce purchase history, product reviews, and likes and dislikes of one or more products.
At a step (1704), extracting the users of interest information related to the product. In another embodiment, an extraction module (328) extracts the users of interest information from the database (312) and the crawled data related to the product.
At a step (1706), categorizing a plurality of products based on the extracted users of interest information and the crawled data related to the product. In another embodiment, a categorization module (330) categorizes a plurality of products based on the extracted users of interest information and the crawled data related to the product.
At a step (1708), selecting a preferred category of the products based on the extracted users of interest information and the crawled data related to the product. In another embodiment, a selection module (332) selects a preferred category of the products based on the extracted users of interest information and the crawled data related to the product.
At a step (1710), analyzing a context of the products in the form of time of the year, weather, special occasion, and other occasions. In another embodiment, a context extraction module (334) analyzes a context of the products in the form of time of the year, weather, special occasion, and other occasions.
At a step (1712), generating contextual category wise product preferences for the user. In another embodiment, a preference generation module (336) generates contextual category wise product preferences for the user.
Figure 18 illustrates a flowchart depicting learning a style of an associated user from at least one crawled image (1800), according to an exemplary implementation of the present invention. The flowchart (1800) of Figure 18 is explained below with reference to Figure 3 as described above.
The flowchart (1800) starts at a step (1802), estimating an articulated pose by spatially quantizing body part regions of the associated user. In another embodiment, an estimation module (340) of a learning module (338) estimates an articulated pose by spatially quantizing body part regions of the associated user.
At a step (1804), isolating at least one portion of the image as a binary mask by using global clothing probability map. In another embodiment, an isolation unit (342) isolates at least one portion of the image as a binary mask by using global clothing probability map.
At a step (1806), segmenting probable body part regions into visually coherent regions. In another embodiment, a segmentation module (344) segments probable body part regions into visually coherent regions.
At a step (1808), clustering the segmented regions into a single non-connected region. In another embodiment, a clustering module (346) clusters the segmented regions into a single non-connected region.
At a step (1810), classifying clothing type for the clustered region. In another embodiment, a classifier (348) classifies clothing type for the clustered region.
It should be noted that the description merely illustrates the principles of the present invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described herein, embody the principles of the present invention. Furthermore, all examples recited herein are principally intended expressly to be only for explanatory purposes to help the reader in understanding the principles of the invention and the concepts contributed by the inventor(s) to furthering the art and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass equivalents thereof.
Claims (15)
- A computer implemented method for providing a plurality of content to an augmented reality device (102) in a client-server arrangement, the method comprising the steps of: identifying location of a user associated with the augmented reality device (102); identifying at least one product from the identified location; determining a plurality of users of interest information associated with the user, and at least one interest information of the plurality of users of interest information; storing, in a database (312), details related to a plurality of products, pre-determined users of interest information, the determined plurality of users of interest information, and the determined interest information of the plurality of users of interest information; mapping the at least one identified product with at least one stored information, and generating preference information based on the mapped data; transmitting the preference information to the augmented reality device (102); and highlighting the product from the preference information on a display area (104) coupled with the augmented reality device (102).
- The method as claimed in claim 1, wherein the step of identifying the location further comprises: capturing a real-time view of the user associated with the augmented reality device (102); and recognizing text and image data from the captured view and identifying the location of the user.
- The method as claimed in claim 1, wherein the step of identifying the location further includes detecting the presence of the user in a pre-defined location, and wherein the pre-defined location is a geo-tagged location.
- The method as claimed in claim 1, wherein the step of determining the plurality of users of interest information further includes: crawling through web, at least one of data related to associated user's activities, images, social media feeds, call log details, SMS details, product browsing history, e-commerce purchase history, product reviews, likes or dislikes of one or more products; and analyzing the crawled data and identifying texts, the presence of other users in at least one image, and their associated activities, and wherein the users of interest information includes a plurality of information about the user and associated one or more users.
- The method as claimed in claim 4, wherein the step of determining the plurality of users of interest information further includes: analyzing the call log details and/or the SMS details; and determining frequency of calls and/or SMS of the user with the associated users.
- The method as claimed in claim 1, further comprising a step of ranking the plurality of users of interest information based on the crawled data.
- The method as claimed in claim 4, further comprising a step of identifying preference information of at least one product, and wherein the step of identifying the preference information further includes: extracting the plurality of users of interest information from the database (312) and the crawled data related to the product; categorizing a plurality of products based on the extracted plurality of users of interest information and the crawled data related to the product; selecting a preferred category of each of the products based on the extracted users of interest information and the crawled data related to the product; analyzing a context of the products in the form of time of the year, weather, special occasion, and other occasions; and generating contextual category wise product preferences for the user.
- The method as claimed in claim 4, wherein the step of crawling through web further comprises extracting attributes of the identified product and comparing the extracted attributes with the crawled data.
- The method as claimed in claim 1, further comprises a step of identifying at least one object from the identified location, wherein the object includes a plurality of products, a collection of products, and a similar or dissimilar set of products.
- The method as claimed in claim 4, further includes a step of learning a style of the associated user from at least one crawled image, wherein the step of learning includes: estimating an articulated pose by spatially quantizing body part regions of the associated user; isolating at least one portion of the image as a binary mask by using a global clothing probability map; segmenting probable body part regions into visually coherent regions; clustering the segmented regions into a single non-connected region; and classifying clothing type for the clustered region.
- The method as claimed in claim 1, further comprises a step of receiving an input from the user associated with the augmented reality device (102), wherein the step of receiving the input further includes manipulating the plurality of users of interest information and the at least one interest information of the plurality of users of interest information, wherein the received input is in the form of touch, eye gaze, head movement, voice command, and/or gesture.
- A computer implemented platform (108) for providing a plurality of content to an augmented reality device (102) in a client-server arrangement, the platform (108) comprising: a memory (110) configured to store a set of pre-determined rules; a processor (112) configured to cooperate with the memory (110), the processor further configured to generate platform processing commands; a client module (114) configured to cooperate with the processor (112), the client module (114) comprising: a location identifier (202) configured to identify location of a user associated with the augmented reality device (102); and a server (116) configured to cooperate with the processor (112) and the client module (114), the server (116) comprising: a product identifier (302) configured to identify at least one product from the identified location; a determination module (304) configured to determine a plurality of users of interest information associated with the user, and at least one interest information of the plurality of users of interest information; a database (312) configured to store details related to a plurality of products, pre-determined users of interest information, the determined plurality of users of interest information, and the determined interest information of the plurality of users of interest information; a mapping module (314) configured to cooperate with the database (312) and the product identifier (302), the mapping module (314) further configured to map the at least one identified product with at least one stored information, and generate preference information based on the mapped data; a communication module (316) configured to cooperate with the mapping module (314), the communication module (316) further configured to transmit the generated preference information to the augmented reality device (102); and a tagging module (318) configured to highlight the product from the preference information on a display area (104) coupled with the augmented reality device (102).
- The platform (108) as claimed in claim 12, wherein the determination module (304) includes a web crawler (306), the web crawler (306) is configured to crawl through web, at least one of data related to other users' activities, images, postings on social media feeds, call log details, SMS details, product browsing history, e-commerce purchase history, product reviews, likes or dislikes of one or more products.
- The platform (108) as claimed in claim 13, wherein the server (116) further includes a product preference identification module (326) configured to identify preference information of at least one product, the product preference identification module (326) comprises: an extraction module (328) configured to extract the plurality of users of interest information from the database (312) and the crawled data related to the product; a categorization module (330) configured to cooperate with the extraction module (328), the categorization module (330) further configured to categorize a plurality of products based on the extracted plurality of users of interest information and the crawled data related to the product; a selection module (332) configured to cooperate with the categorization module (330) and the extraction module (328), the selection module (332) further configured to select a preferred category of each of the products based on the extracted users of interest information and the crawled data related to the product; a context extraction module (334) configured to cooperate with the extraction module (328), the context extraction module (334) configured to analyze a context in the form of time of the year, weather, special occasion, and other occasions; and a preference generation module (336) configured to cooperate with the categorization module (330), the selection module (332), and the context extraction module (334), the preference generation module (336) further configured to generate contextual category wise product preferences for the user.
- The platform (108) as claimed in claim 13, wherein the server (116) comprises a learning module (338) configured to learn a style of the associated user from at least one crawled image, the learning module (338) includes: an estimation module (340) configured to estimate an articulated pose by spatially quantizing body part regions of the associated user; an isolation unit (342) configured to isolate at least one portion of the image as a binary mask by using a global clothing probability map; a segmentation module (344) configured to segment probable body part regions into visually coherent regions; a clustering module (346) configured to cluster the segmented regions into a single non-connected region; and a classifier (348) configured to classify clothing type for the clustered region.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IN201811035312 | 2018-09-19 | ||
IN201811035312 | 2018-09-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020060012A1 true WO2020060012A1 (en) | 2020-03-26 |
Family
ID=69887356
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2019/008222 WO2020060012A1 (en) | 2018-09-19 | 2019-07-04 | A computer implemented platform for providing contents to an augmented reality device and method thereof |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2020060012A1 (en) |
- 2019-07-04: WO PCT/KR2019/008222 patent/WO2020060012A1/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140285338A1 (en) * | 2011-02-23 | 2014-09-25 | Digimarc Corporation | Mobile device indoor navigation |
US20140100997A1 (en) * | 2012-10-05 | 2014-04-10 | Jochen Mayerle | Augmented-reality shopping using a networked mobile device |
US20140314313A1 (en) * | 2013-04-17 | 2014-10-23 | Yahoo! Inc. | Visual clothing retrieval |
US20150168538A1 (en) * | 2013-12-06 | 2015-06-18 | Digimarc Corporation | Mobile device indoor navigation |
US20180181997A1 (en) * | 2016-12-27 | 2018-06-28 | Paypal, Inc. | Contextual data in augmented reality processing for item recommendations |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230196438A1 (en) * | 2021-12-22 | 2023-06-22 | Awoo Intelligence, Inc. | System for awakening non-shopping consumers and implementation method thereof |
US20230196405A1 (en) * | 2021-12-22 | 2023-06-22 | Awoo Intelligence, Inc. | Electronic marketing system and electronic marketing method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020159232A1 (en) | Method, apparatus, electronic device and computer readable storage medium for image searching | |
WO2018117704A1 (en) | Electronic apparatus and operation method thereof | |
WO2016126007A1 (en) | Method and device for searching for image | |
WO2020085694A1 (en) | Image-capturing device and method for controlling same | |
WO2019031714A1 (en) | Method and apparatus for recognizing object | |
WO2018135881A1 (en) | Vision intelligence management for electronic devices | |
WO2020085786A1 (en) | Style recommendation method, device and computer program | |
WO2021006482A1 (en) | Apparatus and method for generating image | |
WO2020080830A1 (en) | Electronic device for reconstructing an artificial intelligence model and a control method thereof | |
WO2021054588A1 (en) | Method and apparatus for providing content based on knowledge graph | |
WO2019083275A1 (en) | Electronic apparatus for searching related image and control method therefor | |
WO2018225939A1 (en) | Method, device, and computer program for providing image-based advertisement | |
EP3539056A1 (en) | Electronic apparatus and operation method thereof | |
WO2019022472A1 (en) | Electronic device and method for controlling the electronic device | |
WO2019093819A1 (en) | Electronic device and operation method therefor | |
EP3523710A1 (en) | Apparatus and method for providing sentence based on user input | |
WO2020060012A1 (en) | A computer implemented platform for providing contents to an augmented reality device and method thereof | |
WO2024025220A1 (en) | System for providing online advertisement content platform | |
WO2019074316A1 (en) | Convolutional artificial neural network-based recognition system in which registration, search, and reproduction of image and video are divided between and performed by mobile device and server | |
WO2020184855A1 (en) | Electronic device for providing response method, and operating method thereof | |
WO2019054792A1 (en) | Method and terminal for providing content | |
EP3707678A1 (en) | Method and device for processing image | |
WO2019190142A1 (en) | Method and device for processing image | |
WO2022025340A1 (en) | System for constructing virtual closet and creating coordinated combination, and method therefor | |
WO2021071240A1 (en) | Method, apparatus, and computer program for recommending fashion product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19862862 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 19862862 Country of ref document: EP Kind code of ref document: A1 |