WO2024122916A1 - Systems and methods for generating ambience suggestions for an environment - Google Patents

Systems and methods for generating ambience suggestions for an environment

Info

Publication number
WO2024122916A1
Authority
WO
WIPO (PCT)
Prior art keywords
environment
ambience
objects
score
user
Prior art date
Application number
PCT/KR2023/018271
Other languages
French (fr)
Inventor
Shivani Aggarwal
Pushpinder Goyal
Sumantra DASGUPTA
Original Assignee
Samsung Electronics Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co., Ltd.
Publication of WO2024122916A1
Priority to US 19/049,590 (published as US20250182427A1)

Classifications

    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 15/205: Image-based rendering
    • G06Q 30/0621: Item configuration or customization
    • G06Q 30/0631: Item recommendations
    • G06Q 30/0643: Graphical representation of items or shoppers
    • G06Q 50/10: Services
    • G06V 10/762: Image or video recognition using clustering, e.g. of similar faces in social networks
    • G06V 20/20: Scene-specific elements in augmented reality scenes
    • G06T 2200/24: Indexing scheme involving graphical user interfaces [GUIs]
    • G06T 2210/04: Architectural design, interior design
    • G06T 2210/08: Bandwidth reduction
    • G06T 2219/2004: Aligning objects, relative positioning of parts
    • G06T 2219/2016: Rotation, translation, scaling
    • G06T 2219/2024: Style variation

Definitions

  • the present invention generally relates to personalization of user ambience and more particularly relates to systems and methods for generating ambience suggestion(s) for an environment corresponding to a user.
  • Personalized and managed ambience plays an important role in a person’s life. Specifically, such ambience keeps the lifestyle of the person modern and stylish. Moreover, such a personalized and managed ambience provides elegance and comfort to the person’s lifestyle.
  • Augmented Reality (AR), which has been a widely accepted technology, enables the person to re-imagine and re-design his/her environment. For example, such technology enables the person to visualize new paint, decor, and furniture in his/her environment. Further, some AR-enabled solutions enable the user to visualize his/her environment in a 3D virtual environment.
  • a method for generating an ambience suggestion for an environment includes processing one or more image frames corresponding to the environment to identify one or more objects in the environment.
  • the method also includes determining a fitness score corresponding to each of the one or more objects based on one or more parameters.
  • the method further includes determining a first ambience score based on the determined fitness scores corresponding to the one or more objects.
  • the method also includes identifying a target space in the environment based on the determined fitness scores corresponding to the one or more objects.
  • the method includes generating an object arrangement for the target space based on the environment.
  • the method includes determining a second ambience score of the environment based on the generated object arrangement.
  • the method includes comparing the first ambience score and the second ambience score. Also, the method includes recommending the ambience suggestion to the user based on the generated object arrangement upon determining that the second ambience score is greater than the first ambience score.
  • the ambience suggestion is indicative of a change in the environment based on the generated object arrangement.
  • a system for generating an ambience suggestion for an environment includes a memory and at least one processor communicably coupled to the memory.
  • the at least one processor is configured to process one or more image frames corresponding to the environment to identify one or more objects in the environment.
  • the at least one processor is further configured to determine a fitness score corresponding to each of the one or more objects based on one or more parameters.
  • the at least one processor is configured to determine a first ambience score based on the determined fitness scores corresponding to the one or more objects.
  • the at least one processor is configured to identify a target space in the environment based on the determined fitness scores corresponding to the one or more objects.
  • the at least one processor is configured to generate an object arrangement for the target space based on the environment. Moreover, the at least one processor is configured to determine a second ambience score of the environment based on the generated object arrangement. Furthermore, the at least one processor is configured to compare the first ambience score and the second ambience score. Thereafter, the at least one processor is configured to recommend the ambience suggestion to the user based on the generated object arrangement upon determining that the second ambience score is greater than the first ambience score. The ambience suggestion is indicative of a change in the environment based on the generated object arrangement.
  • Figure 1 illustrates an exemplary environment of a system for generating an ambience suggestion for an environment, according to an embodiment of the present disclosure
  • Figure 2 illustrates a schematic block diagram of the system for generating the ambience suggestion for the environment, according to an embodiment of the present disclosure
  • Figure 3 illustrates a schematic block diagram of modules of the system for generating the ambience suggestion for the environment, according to an embodiment of the present invention
  • Figure 4A-4D illustrate an exemplary process flow for generating the ambience suggestion for the environment, according to an embodiment of the present disclosure
  • Figure 5A-5B illustrate a flow chart of a method for generating the ambience suggestion for the environment, according to an embodiment of the present disclosure
  • Figure 6 illustrates the generation of ambience suggestions for the environment, according to an embodiment of the present disclosure
  • Figure 7 illustrates the generation of ambience suggestions for the environment, according to another embodiment of the present disclosure.
  • Figure 8 illustrates the generation of ambience suggestions for the environment, according to yet another embodiment of the present disclosure
  • Figure 9 illustrates the generation of ambience suggestions for the environment based on user activity, according to an embodiment of the present disclosure
  • Figure 10 illustrates the generation of ambience suggestions for the environment based on user voice command, according to an embodiment of the present disclosure.
  • Figure 11 illustrates the generation of ambience suggestions for the environment in a virtual reality device, according to an embodiment of the present disclosure.
  • the present invention is directed towards a method and a system for recommending ambience suggestions to provide a managed and personalized environment to a user.
  • Embodiments include identifying a target space in the environment to make suitable suggestions for modification in the environment.
  • the system enhances the overall user experience in the environment and provides a more personalized and planned environment.
  • the system takes into consideration user’s interest, user’s activity and user’s media watching history while generating ambience recommendations for the environment. Further, embodiments provide a simple and cost-effective technique to improve user experience in an environment.
  • Figure 1 illustrates an exemplary environment 100 of a system 102 for generating an ambience suggestion for an environment, according to an embodiment of the present disclosure.
  • Fig. 1 illustrates a user watching a television 103 coupled with a camera device 104.
  • the camera device 104 may be configured to capture image frame(s) corresponding to an environment 106.
  • the camera device 104 may be configured to capture image frames at a predetermined interval of time.
  • the environment 106 may correspond to a room where the television 103 is installed. Examples of environment may include, but not limited to, a study room, a dining hall, a bedroom and so forth.
  • the camera device 104 may be placed atop the television 103.
  • embodiments intend to cover or otherwise cover any suitable location of camera device 104 in the environment 106 to suitably capture the image frames.
  • the camera device 104 may be disposed with any other electronic device located within the environment.
  • the television 103 may also be configured to log user watching history.
  • the television 103 may be configured to determine and store user interest based on content viewed by the user.
  • the television 103 is exemplary in nature and embodiments either cover or intend to cover any other suitable display and/or media device.
  • Examples of media devices may include, but not limited to, a personal computer, a smart watch, a voice assistant device, an Internet of Things (IoT) device, a laptop, and so forth.
  • the system 102 may be configured to receive the image frames captured by the camera device 104. Further, the system 102 may be configured to receive information collected and stored by the television 103. In an embodiment, the system 102 may be installed within the television device 103 where the camera device 104 is installed. Further, the system 102 may be configured to log user watching history. Also, the system 102 may be configured to determine and store user interests based on content viewed by the user.
  • system 102 may be a standalone entity remotely coupled to the camera device 104 and television 103. In yet another embodiment, the system 102 may be installed within a mobile device of the user.
  • the system 102 may be configured to process the image frames captured by the camera device 104 to identify object(s) in the environment.
  • the identified objects may include, but not limited to, chairs, a table, a lamp, a cupboard and so forth. Further, the system 102 may determine a fitness score corresponding to each identified object. The fitness score may be indicative of how aesthetically pleasing an object is with respect to the environment. In another embodiment, the fitness score may be indicative of the usefulness of the object in the environment. The usefulness of the object may be a measure of the usage of the object by the user in the environment. For example, in case a lamp has not been used by the user for a predefined period of time, the lamp may have minimum usefulness in the environment.
  • the fitness score may consider various parameters associated with the object such as, but not limited to, spacing, usage, form, light, color, texture, and pattern.
  • the fitness score may be determined based on one or more parameters including, but not limited to, environment theme, user interest, object location, and object usage.
  • the fitness score may be determined using equation 1, as mentioned below, where the variables and the weights w1-w5 are defined in the following paragraphs:

    F = w1·T_Theme + w2·UI_UserInterest + w3·L_Location + w4·U_Usage + w5·OF_OtherFactors ... (Eq. 1)
  • T_Theme may correspond to the theme of the environment (interchangeably referred to as the room).
  • the theme of the environment may be based on wall color, a group of object themes, lighting, etc.
  • UI_UserInterest may correspond to the interest level of the user in the object/activity/environment.
  • L_Location may correspond to a location of the object.
  • OF_OtherFactors may correspond to additional factors related to the object such as, but not limited to, cost, reachability, etc.
  • F_final may represent a final fitness score of all the objects or an ambience score of the environment.
  • the UI_UserInterest may include a value corresponding to a category which may relate to an object, an activity, or an environment.
  • categories for determining the value of UI_UserInterest may include, but not limited to, home decor, traveller, spiritual, homebody, sportsman, party lover, photography, art, technology, gardening, animal, and books.
  • a numeric value corresponding to the different categories of UI_UserInterest may be assigned to the variable UI_UserInterest using any suitable technique such as, but not limited to, a one-hot encoding technique.
  • the one-hot encoding technique may assign a binary vector of length m to the variable UI_UserInterest based on the number of categories used for defining user interest.
  • m may be defined as the cardinality of the set of categories corresponding to user interest.
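  • For illustration, the following is a minimal sketch of such a one-hot encoding, assuming the twelve interest categories listed above; the helper name is hypothetical and not part of this disclosure:

```python
# One-hot encoding of the user-interest category (illustrative sketch).
CATEGORIES = [
    "home decor", "traveller", "spiritual", "homebody", "sportsman",
    "party lover", "photography", "art", "technology", "gardening",
    "animal", "books",
]

def one_hot_user_interest(category: str) -> list[int]:
    """Return a binary vector of length m = |CATEGORIES| with a single 1."""
    vector = [0] * len(CATEGORIES)          # m is the cardinality of the category set
    vector[CATEGORIES.index(category)] = 1  # raises ValueError for an unknown category
    return vector

print(one_hot_user_interest("photography"))
# [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
```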
  • the value of L_Location may be determined during determination of the object position. Further, the value of L_Location may be dependent on a position of the user in the environment. Furthermore, the value of L_Location may be either discrete or Boolean in nature.
  • the objects may be classified into two types, namely container or contained.
  • the container object may be configured to store various contained objects. For example, a bookshelf may be considered a container object and books may be classified as contained objects.
  • the value of L_Location may be either accessible or not accessible.
  • the value of L_Location may be misplaced or properly placed.
  • a numerical and/or logical value may be assigned to L_Location based on the determined value. For example, for an inaccessible object, the value of L_Location may be defined as zero, and for an accessible object, the value of L_Location may be defined as one.
  • U_Usage may correspond to a value which defines the usability of the object in the environment.
  • the value of U_Usage may be defined in binary format, where a low usability of the object may be defined as zero and a high usability of the object may be defined as one.
  • OF_OtherFactors may correspond to a property of the identified object such as, but not limited to, the cost of the object, the condition of the object, and so forth.
  • the value of OF_OtherFactors based on the cost of the object may be numerical and defined as a positive real number.
  • the value of OF_OtherFactors based on the condition of the object may be either discrete or Boolean in nature.
  • the value of OF_OtherFactors based on the condition of the object may be either good or bad.
  • such values of OF_OtherFactors may be suitably converted into numbers using a suitable technique such as, but not limited to, a one-hot encoding technique.
  • the examples described above are exemplary in nature and the different variables/parameters used for determining the value of the fitness score may have any suitable value as per the implementation or requirement.
  • for an unaffordable object, the value of OF_OtherFactors may be defined as zero, and for an affordable object, the value of OF_OtherFactors may be defined as one.
  • the value of OF_OtherFactors based on cost may be subject to the user profile and may vary from user to user.
  • the value of OF_OtherFactors based on the condition of the object may be defined as zero for a worn object and one for an object in good condition.
  • the fitness score of the object may be defined by equations Eq. 2 to Eq. 5.
  • w1-w5 may define weights corresponding to each variable in the equations.
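  • As a concrete illustration of the scoring above, the sketch below computes a per-object fitness score as a weighted combination of the five variables. The weight values and scalar encodings are assumptions for illustration only; the disclosure's own equations Eq. 2 to Eq. 5 are not reproduced here:

```python
# Weighted per-object fitness score in the Eq. 1 form (weights assumed).
def fitness_score(theme: float, user_interest: float, location: float,
                  usage: float, other_factors: float,
                  weights: tuple = (0.3, 0.3, 0.15, 0.15, 0.1)) -> float:
    w1, w2, w3, w4, w5 = weights
    return (w1 * theme + w2 * user_interest + w3 * location
            + w4 * usage + w5 * other_factors)

# Example: an accessible (L=1) but unused (U=0) lamp that only loosely
# matches the room theme and the user's interests.
print(fitness_score(theme=0.4, user_interest=0.2, location=1.0,
                    usage=0.0, other_factors=1.0))  # 0.43
```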
  • the system 102 may be configured to identify a target space in the environment based on the determined fitness scores of the objects. For example, if the system 102 determines that an object has a low fitness score, the system 102 may consider a space occupied by such object(s) as the target space to make a suitable modification in the environment. In an embodiment, in order to determine a low fitness score, the system 102 may compare the determined fitness score to a previously determined fitness score of the object or a predefined fitness score threshold of the object. For instance, if the determined value of the fitness score is less than the previously determined fitness score of the object or the predefined fitness score threshold of the object, the system 102 may consider that the object has a low fitness score.
  • the target space may be a part of environment having dimension suitable to accommodate the target object(s).
  • the target space may be formed by combining space occupied by multiple objects or by splitting the space. For instance, the system 102 may identify the space occupied by the chairs and the table as the target space.
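  • A minimal sketch of this target-space selection, assuming a simple object record with a fitness score and a rectangular footprint (the structure, field names, and threshold are illustrative):

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    fitness: float
    previous_fitness: float
    footprint: tuple  # (x, y, width, depth) of the space the object occupies

def find_target_spaces(objects: list, threshold: float = 0.5) -> list:
    """Spaces occupied by low-fitness objects become candidate target spaces."""
    return [obj.footprint for obj in objects
            if obj.fitness < threshold or obj.fitness < obj.previous_fitness]

room = [SceneObject("table", 0.3, 0.6, (1.0, 1.0, 1.2, 0.8)),
        SceneObject("chair", 0.2, 0.5, (2.2, 1.0, 0.5, 0.5)),
        SceneObject("bed",   0.9, 0.9, (4.0, 0.0, 2.0, 1.6))]
print(find_target_spaces(room))  # the table and chair footprints qualify
```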
  • the system 102 may be configured to generate an object arrangement for the target space based on the environment.
  • the object arrangement may include a re-arrangement of at least one of the one or more identified objects, a replacement of the at least one of the one or more identified objects, or an addition of a new object at the target space.
  • the system 102 may consider various parameters while generating the object arrangement. For example, the system 102 may consider parameters such as, but not limited to, a type of object located at the target space, an orientation of the object at the target space, a position of the object at the target space, neighboring objects, user activities, user interest, user-related events, characteristics of the environment and so forth.
  • the system 102 may be configured to determine that the user has an interest in adventurous activities based on the content viewing history of the user. Based on said determination of user interest, the system 102 may determine recommendation(s) to replace the chairs and table located at the target space with a camp and trees 108, as depicted in Fig. 1.
  • the system 102 may determine a first ambience score of the environment based on the currently identified objects and a second ambience score of the environment based on the generated object arrangement. The system 102 may compare the first ambience score and the second ambience score. Moreover, the system 102 may recommend an ambience suggestion to the user based on the generated object arrangement (or re-arrangement) upon determining that the second ambience score is greater than the first ambience score.
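  • The comparison reduces to a recommend-only-if-better rule; a sketch under the assumption, stated later in this disclosure, that an ambience score may be the summation of the objects' fitness scores:

```python
def should_recommend(current_fitness: list, arrangement_fitness: list) -> bool:
    """Recommend the arrangement only if it raises the ambience score."""
    first = sum(current_fitness)        # first ambience score (current room)
    second = sum(arrangement_fitness)   # second ambience score (proposed arrangement)
    return second > first

print(should_recommend([0.3, 0.2, 0.9], [0.7, 0.8, 0.9]))  # True
```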
  • the system 102 may generate a virtual environment representing the environment with the suggested object arrangement.
  • the virtual environment may be displayed to the user via any suitable device such as, but not limited to, the television 103, a mobile device of the user, a virtual reality (VR) device of the user, and so forth.
  • the system 102 may display the object arrangement 108 in the environment via the television 103.
  • the system 102 may enhance user experience and personalization of the environment.
  • the system 102 may effectively manage user ambience which may improve user's wellbeing by suggesting ambience modifications which improves user productivity, user lifestyle, and user mental health.
  • Figure 2 illustrates a schematic block diagram of the system 102 for generating the ambience suggestion for the environment, according to an embodiment of the present disclosure.
  • the system 102 may be included within an electronic/user device associated with a user, for example, a television or a mobile phone.
  • the system 102 may be configured to operate as a standalone device or as a server/cloud-based system communicably coupled to the electronic device/user device associated with the user.
  • Examples of the electronic device may include, but not limited to, a mobile phone, a smart watch, a laptop computer, a desktop computer, a Personal Computer (PC), a notebook, a tablet, an IoT device, or any other smart device communicably coupled to the camera device 104.
  • the system 102 may be configured to receive and process image frames captured by the camera device 104 to generate ambience suggestion(s) for the environment of the user.
  • the system 102 may include a processor/controller 202, an Input/Output (I/O) interface 204, one or more modules 206, a transceiver 208, and a memory 210.
  • the processor/controller 202 may be operatively coupled to each of the I/O interface 204, the modules 206, the transceiver 208 and the memory 210.
  • the processor/controller 202 may include at least one data processor for executing processes in Virtual Storage Area Network.
  • the processor/controller 202 may include specialized processing units such as, integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc.
  • the processor/controller 202 may include a central processing unit (CPU), a graphics processing unit (GPU), or both.
  • the processor/controller 202 may be one or more general processors, digital signal processors, application-specific integrated circuits, field-programmable gate arrays, servers, networks, digital circuits, analog circuits, combinations thereof, or other now known or later developed devices for analyzing and processing data.
  • the processor/controller 202 may execute a software program, such as code generated manually (i.e., programmed) to perform the desired operation.
  • the processor/controller 202 may be disposed in communication with one or more input/output (I/O) devices via the I/O interface 204.
  • the I/O interface 204 may employ communication protocols/methods such as, without limitation, code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMAX, or the like.
  • the system 102 may communicate with one or more I/O devices.
  • the input device may be an antenna, microphone, touch screen, touchpad, storage device, transceiver, video device/source, etc.
  • the output devices may be a printer, fax machine, video display (e.g., cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), plasma, Plasma Display Panel (PDP), Organic light-emitting diode display (OLED) or the like), audio speaker, etc.
  • the system 102 may communicate with the electronic device associated with the user using the I/O interface 204.
  • the processor/controller 202 may be disposed in communication with a communication network via a network interface.
  • the network interface may be the I/O interface 204.
  • the network interface may connect to the communication network to enable connection of the system 102 with the outside environment and/or device/system.
  • the network interface may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc.
  • the communication network may include, without limitation, a direct interconnection, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, etc.
  • the system 102 may communicate with other devices.
  • the processor/controller 202 may receive image frame(s) corresponding to the environment from the camera device 104.
  • the processor/controller 202 may execute a set of instructions on the received image frames to recommend ambience suggestion(s) to the user to improve the environment.
  • the processor/controller 202 may implement various techniques such as, but not limited to, data extraction, Artificial Intelligence (AI), and so forth to achieve the desired objective(s) (for example, to enhance user experience and personalization of the environment).
  • the memory 210 may be communicatively coupled to the at least one processor/controller 202.
  • the memory 210 may be configured to store data, instructions executable by the at least one processor/controller 202.
  • the memory 210 may communicate via a bus within the system 102.
  • the memory 210 may include, but not limited to, a non-transitory computer-readable storage media, such as various types of volatile and non-volatile storage media including, but not limited to, random access memory, read-only memory, programmable read-only memory, electrically programmable read-only memory, electrically erasable read-only memory, flash memory, magnetic tape or disk, optical media and the like.
  • the memory 210 may include a cache or random-access memory for the processor/controller 202. In alternative examples, the memory 210 is separate from the processor/controller 202, such as a cache memory of a processor, the system memory, or other memory.
  • the memory 210 may be an external storage device or database for storing data.
  • the memory 210 may be operable to store instructions executable by the processor/controller 202.
  • the functions, acts or tasks illustrated in the figures or described may be performed by the programmed processor/controller 202 for executing the instructions stored in the memory 210.
  • the functions, acts or tasks are independent of the particular type of instructions set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, micro-code and the like, operating alone or in combination.
  • processing strategies may include multiprocessing, multitasking, parallel processing, and the like.
  • the modules 206 may be included within the memory 210.
  • the memory 210 may further include a database 212 to store data.
  • the one or more modules 206 may include a set of instructions that may be executed to cause the system 102 to perform any one or more of the methods/processes disclosed herein.
  • the modules 206 may be configured to perform one or more operations of the processor 202 to achieve the desired objective of the present disclosure.
  • the one or more modules 206 may be configured to perform the steps of the present disclosure using the data stored in the database 212, to generate ambience recommendation for the environment as discussed herein.
  • each of the one or more modules 206 may be a hardware unit which may be outside the memory 210.
  • the memory 210 may include an operating system 214 for performing one or more tasks of the system 102, as performed by a generic operating system in the communications domain.
  • the transceiver 208 may be configured to receive and/or transmit signals to and from the electronic device associated with the user.
  • the database 212 may be configured to store the information as required by the one or more modules 206 and the processor/controller 202 to perform one or more functions for generating the ambience suggestion(s) for the environment.
  • the present invention contemplates a computer-readable medium that includes instructions or receives and executes instructions responsive to a propagated signal. Further, the instructions may be transmitted or received over the network via a communication port or interface or using a bus (not shown).
  • the communication port or interface may be a part of the processor/controller 202 or may be a separate component.
  • the communication port may be created in software or may be a physical connection in hardware.
  • the communication port may be configured to connect with a network, external media, the display, or any other components in system, or combinations thereof.
  • the connection with the network may be a physical connection, such as a wired Ethernet connection or may be established wirelessly. Likewise, the additional connections with other components of the system 102 may be physical or may be established wirelessly.
  • the network may alternatively be directly connected to the bus.
  • At least one of the plurality of modules 206 may be implemented through an Artificial Intelligence (AI) model.
  • a function associated with AI may be performed through the non-volatile memory, the volatile memory, and the processor 202.
  • the processor 202 may include one or a plurality of processors.
  • one or a plurality of processors may be a general-purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, a graphics-only processing unit such as a graphics processing unit (GPU), a visual processing unit (VPU), and/or an AI-dedicated processor such as a neural processing unit (NPU).
  • the one or a plurality of processors control the processing of the input data/images in accordance with a predefined operating rule or artificial intelligence (AI) model stored in the non-volatile memory and the volatile memory.
  • the predefined operating rule or artificial intelligence model is provided through training or learning.
  • learning means that, by applying a learning technique to a plurality of learning data, a predefined operating rule or AI model of a desired characteristic is made.
  • the learning may be performed in a device itself in which AI according to an embodiment is performed, and/or may be implemented through a separate server/system.
  • the AI model may consist of a plurality of neural network layers. Each layer has a plurality of weight values and performs a layer operation through calculation of a previous layer and an operation of a plurality of weights.
  • Examples of neural networks include, but are not limited to, convolutional neural network (CNN), deep neural network (DNN), recurrent neural network (RNN), restricted Boltzmann Machine (RBM), deep belief network (DBN), bidirectional recurrent deep neural network (BRDNN), generative adversarial networks (GAN), and deep Q-networks.
  • the learning technique is a method for training a predetermined target device (for example, a robot) using a plurality of learning data to cause, allow, or control the target device to make a determination or prediction.
  • Examples of learning techniques include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning.
  • the system 102 may use an artificial intelligence model to recommend various object arrangements for the environment. Further, the system 102 may use the AI model to generate instructions for data obtained from various sensors.
  • the processor 202 may perform a pre-processing operation on the data to convert it into a form appropriate for use as an input for the artificial intelligence model.
  • the artificial intelligence model may be obtained by training.
  • "obtained by training” means that a predefined operation rule or artificial intelligence model configured to perform a desired feature (or purpose) is obtained by training a basic artificial intelligence model with multiple pieces of training data by a training technique.
  • the artificial intelligence model may include a plurality of neural network layers. Each of the plurality of neural network layers includes a plurality of weight values and performs neural network computation by computation between a result of computation by a previous layer and the plurality of weight values.
  • Reasoning prediction is a technique of logically reasoning and predicting by determining information and includes, e.g., knowledge-based reasoning, optimization prediction, preference-based planning, or recommendation.
  • the architecture, and standard operations of the operating system 214, the memory 210, the database 212, the processor/controller 202, the transceiver 208, the I/O interface 204, and the AI model are not discussed in detail.
  • Figure 3 illustrates a schematic block diagram of the modules 206 of the system 102 for generating the ambience suggestion for the environment, according to an embodiment of the present invention.
  • the modules 206 may include an input module 302, an ambience analysis module 304, an ambience processing module 306, a virtual object creator module 308, an output module 310, and a database module 312.
  • the modules 304, 306, 308, 310, 312 may be communicably coupled to each other.
  • the input module 302 may be configured to generate one or more inputs required to generate the ambience suggestion.
  • the input module 302 may be configured to collect input data from user for the system 102.
  • the input module 302 may act as an interface between various input devices and the system 102.
  • Example of input devices may include, but not limited to, a camera device, a microphone, a speaker, and a display.
  • the input module 302 may be communicably coupled to the input devices to generate/receive various inputs, such as, but not limited to, image/object input, voice input, gesture/sensors input, calendar input, user preference and history input, and so forth.
  • the image/object input may correspond to image frames captured by the camera device 104 and/or information associated with objects identified from the captured image frames.
  • the voice input may be generated based on input received from a microphone.
  • the gesture/sensors input may be generated based on inputs from various sensors monitoring the user.
  • the gesture/sensors input may include hand gestures, finger gestures, face gestures, eyes gestures and so forth.
  • the calendar input may include information defining date/time for receiving various input data, for example image frames corresponding to the environment.
  • the user preference and history input may be based upon the user's viewing history on the television 103 and user interest information such as adventurous, sports, etc.
  • the various information collected and/or generated by the input module 302 may be stored in the database module 312.
  • the input module 302 may include a settings sub-module configured to generate and/or store configuration files, user interfaces and settings.
  • the settings sub-module may be configured to include predefined rules and user preferences with respect to the operation of the system 102 and/or the modules 206.
  • the input module 302 may store all the settings pertaining to a user in the user data section of the database module 312.
  • the settings may include information such as a threshold indicating a number of days to observe an event before generating a suggestion, user demographic data, user preferences, and so forth.
  • the system 102 may initialize the ambience analysis module 304.
  • the ambience analysis module 304 may be configured to process the inputs received from the input module 302 and identify object(s) in the environment.
  • the ambience analysis module 304 may be configured to determine parameters associated with the objects and/or environment. The parameters may include, but not limited to, object color, object material, object position, object orientation, user's lifestyle, lighting condition in the environment, available space in the environment, space consumption information, and so forth.
  • the ambience analysis module 304 may also be configured to update the database module 312 based on the determined parameters.
  • the ambience analysis module 304 may use suitable object detection Application Programming Interface (API) to detect objects and the parameters of the environment.
  • the ambience analysis module 304 may use techniques such as, but not limited to, Region-Based Convolutional Neural Networks (R-CNN), Fast R-CNN, and YOLO (You Only Look Once).
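  • As one possible realization of such a detection step (the ultralytics YOLO package below is an illustrative choice; this disclosure does not name a specific library or API):

```python
# Per-frame object detection with an off-the-shelf YOLO model (illustrative).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained weights; COCO classes cover common furniture

def detect_objects(frame):
    """Return (label, confidence, [x1, y1, x2, y2]) triples for one image frame."""
    result = model(frame)[0]
    return [(result.names[int(box.cls)], float(box.conf), box.xyxy[0].tolist())
            for box in result.boxes]
```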
  • the ambience analysis module 304 may be configured to transmit the generated information related to the objects and the parameters to the ambience processing module 306.
  • the ambience processing module 306 may be configured to receive information related to the objects and the parameters from the ambience analysis module 304.
  • the ambience processing module 306 may be configured to process the received information to enable the system 102 to generate the ambience recommendation(s).
  • the ambience processing module 306 may include various sub-modules namely, a classification module, a data processing module, a score generator module, a target identification module, and a recommendation module.
  • the classification module may include components, such as, but not limited to, a type identifier, a neighbor identifier, a score definer, threshold definer, and so forth.
  • the type identifier may identify a type of each identified object in the environment.
  • the objects may be classified into two types, namely container or contained.
  • the container object may be configured to store various contained objects. For example, a bookshelf may be considered as a container object and books may be classified as contained objects.
  • the neighbor identifier may be configured to identify neighboring objects corresponding to each identified object. Particularly, the neighbor identifier may be configured to determine a distance between objects to identify the neighboring objects corresponding to each object.
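  • A sketch of such a distance test, assuming 3D object positions from the scene-understanding step and an illustrative neighborhood radius:

```python
import math

def neighboring_objects(positions: dict, target: str, radius: float = 1.5) -> list:
    """Objects within `radius` metres of the target count as its neighbors."""
    return [name for name, pos in positions.items()
            if name != target and math.dist(positions[target], pos) <= radius]

positions = {"bookshelf": (0.0, 0.0, 0.0),
             "books": (0.2, 1.1, 0.0),
             "sofa": (4.0, 0.5, 0.0)}
print(neighboring_objects(positions, "bookshelf"))  # ['books']
```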
  • the score definer may be configured to define rules for determining a fitness score corresponding to each object.
  • the threshold definer may be configured to define a threshold for monitoring the user before recommending the ambience suggestion(s).
  • the system 102 may define the threshold as five days. In such a scenario, the system 102 may not recommend the generated ambience suggestion to the user unless the system 102 monitors the user for at least five days.
  • the classification module may be configured to transmit the generated information to the data processing module.
  • the data processing module may receive inputs from the classification module and perform different operations based on the type of objects.
  • the data processing module may be configured to perform multi-level classification of objects using a non-supervised Machine Learning (ML) model for container-type objects.
  • the data processing module may receive a cluster of objects as input and generate a similarity between the objects in the cluster.
  • the data processing module may be configured to use a rule-based model to process the contained type of objects.
  • the data processing module may take contained-type objects as input and generate an association of such objects with the container object.
  • the objects may also be classified based on user preference and usage.
  • the data processing module may be configured to use a usage-based model to take image frames as input and generate the user preference and associated objects.
  • the score generator module may be configured to generate a fitness score corresponding to each identified object based on the parameters associated with the objects and/or the environment.
  • the score generator module may be configured to generate a threshold corresponding to the fitness scores. Further, based on a comparison of the fitness scores with the generated threshold, the system 102 may determine a target space, i.e., determine if an object needs to be replaced, removed, or modified.
  • the score generator module may be configured to transmit the generated fitness scores and threshold to the target identification module.
  • the target identification module may be configured to generate a first ambience score based on the fitness scores of the identified objects in the environment.
  • the first ambience score may be a summation of fitness scores of all the identified objects in the environment.
  • the target identification module may include a target space finder configured to identify a target space in the environment based on the determined fitness scores corresponding to the identified objects in the environment.
  • the target identification module may include a combination creator configured to generate an object arrangement for the target space by creating a combination of different objects in the target space.
  • the target identification module may also include a merger and splitter module configured to merge or split the target space to generate an effective object arrangement (a sketch of this merge decision follows the examples below).
  • the target identification module may be configured to identify target spaces and associated obstacles to determine whether to merge or split the target space.
  • Example of the target spaces may include, but not limited to, movable objects like table, chairs, etc., non-essential objects like painting, wall arts, etc., unused objects such as lamps, bookshelves, etc., and spaces selected by the user.
  • Example of the obstacles may include, but not limited to, non-moveable objects like beds, almirah etc., essential objects like monitor, computer, etc., and objects selected by the user.
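  • A sketch of the merge decision, assuming axis-aligned rectangular footprints: two target spaces are merged only when their bounding union avoids every obstacle; otherwise they remain split. The geometry helpers are assumptions:

```python
def overlaps(a, b):
    ax, ay, aw, ah = a; bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def union(a, b):
    ax, ay, aw, ah = a; bx, by, bw, bh = b
    x, y = min(ax, bx), min(ay, by)
    return (x, y, max(ax + aw, bx + bw) - x, max(ay + ah, by + bh) - y)

def try_merge(space_a, space_b, obstacles):
    """Merge two target spaces unless the merged area would swallow an obstacle."""
    merged = union(space_a, space_b)
    if any(overlaps(merged, obstacle) for obstacle in obstacles):
        return None  # keep the target spaces split
    return merged

bed = (4.0, 0.0, 2.0, 1.6)  # a non-movable obstacle
print(try_merge((0, 0, 1, 1), (1.5, 0, 1, 1), [bed]))  # (0, 0, 2.5, 1)
```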
  • the target identification module may also be configured to generate a second ambience score of the environment based on the generated object arrangement.
  • the ambience processing module 306 may also include the recommendation module configured to receive inputs from the target identification module and generate the ambience suggestion to the user.
  • the recommendation module may include an object generator configured to generate the ambience suggestion based on at least a comparison of the first ambience score and the second ambience score, and the fitness scores corresponding to the objects.
  • the recommendation module may include a request generator configured to generate a request for the virtual object creator module 308 to generate the virtual objects corresponding to the generated object arrangement.
  • the virtual object creator module 308 may be responsible for performing operations to generate a virtual environment based on the generated object arrangement and/or ambience suggestions.
  • the virtual object creator module 308 may include a virtual object database including a plurality of virtual objects along with associated meta data.
  • the virtual object creator module 308 may also include a virtual object manager configured to query the virtual object database for a virtual object based on the request received from the recommendation module.
  • the virtual object database may return one or more virtual objects based on the query requests received from the virtual object manager.
  • the virtual object creator module 308 may also include a response module configured to return the requested virtual object and/or virtual environment to the recommendation module and/or the ambience processing module 306.
  • the recommendation module may receive the virtual object and/or virtual environment from the virtual object creator module 308.
  • the recommendation module may only receive the virtual object from the virtual object creator module 308 and include a virtual view creator configured to generate a virtual environment using the received virtual object.
  • the ambience processing module 306 may be communicably coupled to the output module 310 to generate the ambience suggestion for the user and/or to display the generated virtual environment to the user.
  • the output module 310 may include media devices such as televisions, mobile devices, virtual reality headsets, refrigerators, or any other suitable device with a display.
  • the media devices may also include components such as, but not limited to, I/O interfaces, display devices, operating systems, memory, AR/VR/MR modules, and so forth.
  • Each of the modules 302-310 may be communicably coupled to the database module 312 to store or retrieve information.
  • the database module 312 may include knowledge-based information, rule-based information, object data, position data, sensor data, fitness scores, threshold data, as discussed throughout the specification.
  • the database module 312 may also include user data, container/contained data, target space data, positions data, movable and/or non-movable object data, and utility-based data.
  • modules 302-312 may interchange operations based on the requirement. Further, in some embodiments, one or more operations of the modules 302-312 may be performed by the processor 202. Further, the modules 206 may be coupled with an external device using a network.
  • Figure 4A-4D illustrate an exemplary process flow 400 for generating the ambience suggestion(s) for the environment, according to an embodiment of the present disclosure.
  • the camera device 104 may capture the environment and generate image frames corresponding to the environment.
  • the system 102 may perform scene understanding based on the image frames generated by the camera device 104. Specifically, the system 102 may process the image frames to identify object(s) in the environment.
  • the system 102 may identify a 3D position and orientation of each identified object in the environment.
  • the system 102 may determine which category an identified object may correspond to.
  • the system 102 may classify each of the identified objects into three types namely, container object, contained object, and user preference and usage-based object.
  • the system 102 may also identify neighboring objects corresponding to each of the identified objects.
  • the system 102 may define a threshold for generating the recommendation.
  • the threshold may define a number of days the system 102 needs to monitor the environment before generating the ambience suggestion.
  • the system 102 may also determine a fitness score corresponding to each identified object.
  • the system 102 may identify a first ambience score based on the determined fitness scores. Further, the system 102 may determine a predefined fitness score threshold and compare the fitness score of each object with the predefined fitness score threshold.
  • the system 102 may identify one or more objects as target objects based on said comparison of the fitness score with the predefined fitness score threshold. Particularly, the system 102 may identify an object as a target object when the comparison of the fitness score of said object with the predefined fitness score threshold indicates that the fitness score is below the predefined fitness score threshold. In an exemplary embodiment, the system 102 may identify a type of the target object. Upon determining the type of the target object as "container", the system 102 may perform steps at 410; for the type of target object as "contained", the system 102 may perform steps at 412; and for the type of target object as "user preference and usage", the system 102 may perform steps at 414.
  • the system 102 may use a non-supervised ML model to generate a similarity between the identified target object and the environment. Further, at step 410, the system 102 may perform the sequence of operations illustrated in Fig. 4B. Specifically, the system 102 may make a cluster of objects by combining the target object with the neighboring objects. The system 102 may determine a theme of the cluster and compare the determined theme with the ambience of the environment. Further, the system 102 may determine whether the determined cluster matches the ambience of the environment based on the comparison of the cluster theme with the ambience. Upon determining that the determined cluster does not match the ambience of the environment, the system 102 may perform step 416.
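  • A sketch of the theme-versus-ambience test in Fig. 4B, assuming each object in the cluster is summarized by a feature vector (e.g., a color embedding) and using cosine similarity as the match criterion; both choices are illustrative:

```python
import numpy as np

def cluster_theme(object_features: np.ndarray) -> np.ndarray:
    """One theme vector per cluster: the mean of its objects' features."""
    return object_features.mean(axis=0)

def matches_ambience(theme: np.ndarray, ambience: np.ndarray,
                     min_similarity: float = 0.8) -> bool:
    sim = float(theme @ ambience
                / (np.linalg.norm(theme) * np.linalg.norm(ambience)))
    return sim >= min_similarity

cluster = np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]])  # warm-toned objects
room = np.array([0.1, 0.2, 0.9])                        # cool-toned ambience
print(matches_ambience(cluster_theme(cluster), room))   # False -> go to step 416
```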
  • the system 102 may determine if any relocation is required. Next, the system 102 may move to the step 414 where the system 102 may use a usage-based model to take image frames as input and generate an output indicating user preference.
  • the system 102 may use a rule-based model to take the contained-type target object as input and generate an output indicating a corresponding association of the target object with a container object. Further, at step 412, the system 102 may perform the sequence of operations illustrated in Fig. 4C. Specifically, the system 102 may identify a container for the target object using the rule-based model. The rule-based model may be based on a set of rules defining a relationship of a plurality of contained objects and the associated container object. The system 102 may also check for association of the target object with the identified container object. For example, the system 102 may check whether the target object is suitably placed, misplaced, not available, or so forth. Next, the system 102 may determine whether the target object is misplaced or not.
  • the system 102 may determine whether the associated container is accessible or not. If the associated container object is accessible, the system 102 may notify the user to keep the target object at the right place, i.e., at the container object. If the container is not accessible, the system 102 may check for relocation of the container object. In such a scenario, the system 102 may move to step 414 where the system 102 may use a usage-based model to take image frames as input and generate an output indicating user preference.
  • the system 102 may determine a new container for the target object. Next, the system 102 may match the theme of the newly identified container with the ambience of the environment using any suitable technique such as, but not limited to, AI, ML and so forth. Lastly, the system 102 may move to step 416 where the system 102 may identify a target space based on the target object.
  • the system 102 may use the usage-based model to take image frames as input and generate the output indicating user preference.
  • the system 102 may perform the sequence of operation as illustrated by Fig. 4D.
  • the system 102 may take different image frames corresponding to the environment and captured at different time intervals as input.
  • the system 102 may identify a reference frame for the classified object/target object.
  • the system 102 may determine whether the target object is misplaced in the reference frame or not. If the target object is not misplaced in the reference frame, the system 102 may move back to capturing image frames of the environment. However, if the target object is misplaced, the system 102 may identify the container object which is occupied. Further, the system 102 may create a sorted list of container objects based on the number of times each container object is used. Further, the system 102 may determine a user interest based on the container object with the highest usage value.
  • the system 102 may identify the target space based on the target object. Further, the system 102 may generate all possible combinations of object arrangements at the target space. The system 102 may generate ambience scores based on the different object arrangements. Further, the system 102 may select the object arrangement with the highest ambience score and consider the associated ambience score as the second ambience score. At step 418, the system 102 may compare the first ambience score and the second ambience score. Upon determining that the second ambience score is greater than the first ambience score, the system 102 may recommend the object arrangement to the user, as shown in step 420. The system 102 may also generate a virtual view of the generated object recommendation.
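  • A sketch of that search, enumerating candidate placements for the target space and keeping the best-scoring one as the second ambience score; enumeration by permutation and the toy scoring callable are assumptions:

```python
from itertools import permutations

def best_arrangement(candidates: list, slots: int, score_fn):
    """Try every ordered placement of `slots` objects; keep the top scorer."""
    best, best_score = None, float("-inf")
    for arrangement in permutations(candidates, slots):
        score = score_fn(arrangement)
        if score > best_score:
            best, best_score = arrangement, score
    return best, best_score  # best_score acts as the second ambience score

# Toy scorer: prefer arrangements that lead with a plant.
score = lambda arr: 1.0 if arr and arr[0] == "plant" else 0.5
print(best_arrangement(["lamp", "plant", "rug"], 2, score))
# (('plant', 'lamp'), 1.0)
```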
  • Figure 5A-5B illustrate a flow chart of a method 500 for generating the ambience suggestion for the environment, according to an embodiment of the present disclosure.
  • the method 500 may be performed by the system 102.
  • the method 500 includes processing one or more image frames corresponding to the environment to identify one or more objects in the environment.
  • the method 500 includes determining a fitness score corresponding to each of the one or more objects based on one or more parameters.
  • the one or more parameters comprises at least one of environment theme, user interest, object location, and object usage.
  • the method 500 includes determining an object threshold value corresponding to each of the one or more identified objects.
  • the method 500 includes comparing the fitness score of each object with the corresponding object threshold value.
  • the method 500 includes determining a first ambience score based on the determined fitness scores corresponding to the one or more objects.
  • the first ambience score may include a summation of the fitness scores corresponding to the one or more objects.
  • the method 500 includes identifying a target space in the environment.
  • the method 500 includes determining a type of object for each of the identified one or more objects.
  • the method 500 includes determining an orientation and a position of each of the identified one or more objects.
  • the method 500 includes determining one or more neighboring objects corresponding to each of the one or more identified objects based on the orientation and the position corresponding to the identified object.
  • the method 500 includes generating one or more clusters of objects based on the determined one or more neighboring objects and the corresponding identified object.
  • the method 500 includes monitoring one or more user activities in the environment.
  • the method 500 includes determining a user interest based on the one or more user activities.
  • the method 500 includes determining one or more user-related events.
  • the method 500 includes determining one or more additional characteristics of the environment. The one or more additional characteristics comprise a color of the identified objects, a material of the identified objects, a lighting condition of the environment, and a space occupancy in the environment.
  • the method 500 includes generating an object arrangement for the target space.
  • the object arrangement may include a re-arrangement of at least one of the one or more identified objects, a replacement of the at least one of the one or more identified objects, or an addition of a new object at the target space.
  • the method 500 includes determining a second ambience score of the environment based on the generated object arrangement.
  • the method 500 includes comparing the first ambience score and the second ambience score.
  • the method 500 includes recommending the ambience suggestion to the user based on the generated object arrangement upon determining the second ambience score being greater than the first ambience score.
  • the method 500 includes generating a virtual environment corresponding to the environment. Lastly, at step 540, the method 500 includes rendering the recommended ambience suggestion in the virtual environment.
  • Figure 6 illustrates generation of ambience suggestion(s) for the environment, according to an embodiment of the present disclosure.
  • the system 102 may monitor a user environment for a period as defined by the threshold.
  • the environment may correspond to a room.
  • the user may define the threshold.
  • the system 102 may observe that there is a bookshelf in the room and that the user keeps all the books in the bookshelf daily. However, one day the user leaves a book on the side table and forgets to keep the book on the shelf.
  • the system 102 may monitor the behavior of the user for a number of days.
  • the system 102 may generate an ambience suggestion as "the book is not in the right place".
  • the generated ambience suggestion(s) may be displayed to the user using the television monitoring the environment.
  • the television may include the camera device 104 to determine displacement of the book.
  • the system 102 may observe that the user is keeping the books at the side table. Therefore, the system 102 may determine that the bookshelf is not accessible to the user. Based on said determination, the system 102 may suggest relocation of the bookshelf to make the bookshelf more accessible to the user. Alternatively, the system 102 may suggest replacement of the side table with the bookshelf.
  • the user may properly place the book in the bookshelf, thereby effectively managing the environment.
  • the system 102 may identify the book(s) (object) placed on the side table.
  • the system 102 may identify a type of each book. For example, the system 102 may classify the book(s) as contained object(s). Further, for each book, the system 102 may identify a corresponding container object, for example, the bookshelf. Further, the system 102 may identify the theme of the environment as "Bedroom".
  • the system 102 may generate fitness score(s) for the book(s). Initially, the system 102 may classify the user, based on user interest, as a "Book Lover". The system 102 may identify the side table as a target object. Further, the system 102 may determine a suitable replacement for the side table (the target object), namely the bookshelf. Therefore, the system 102 may generate the ambience suggestion as "move book to bookshelf". Further, with the placement of each book, the value of the variable L_Location (as shown in Eq. 2) may increase, which increases the overall fitness score of the object/environment.
  • the system 102 may generate a fitness score of the bookshelf.
  • the system 102 may identify that the bookshelf is inaccessible.
  • the system 102 may generate the ambience suggestion as "move the bookshelf to another place which is more accessible to the user". Further, with the relocation of the bookshelf, the bookshelf may become more accessible to the user, resulting in an increase in the value of the variable L_Location (as shown in Eq. 2), which increases the overall fitness score of the object/environment.
  • the system 102 may generate suggestions which vary the variables of the fitness score so as to increase the overall fitness score of the environment, as in the worked example below.
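As a worked example (weights illustrative; the weighted form of the fitness score appears as Eq. 2 later in this description), relocating the bookshelf flips L_Location from zero to one and raises the score:

```python
w = (0.2, 0.2, 0.2, 0.2, 0.2)      # illustrative weights w1..w5
inaccessible = (1, 1, 0, 1, 1)     # L_Location = 0: bookshelf hard to reach
relocated    = (1, 1, 1, 1, 1)     # L_Location = 1 after the suggested relocation

before = sum(wi * xi for wi, xi in zip(w, inaccessible))  # 0.8
after  = sum(wi * xi for wi, xi in zip(w, relocated))     # 1.0
assert after > before   # the suggestion increases the overall fitness score
```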
  • Figure 7 illustrates generation of ambience suggestion(s) for the environment, according to another embodiment of the present disclosure.
  • the system 102 may monitor an environment, particularly the room ambience, and determine that an object, i.e., a bedsheet, does not match the overall environment. The determination may be made based on a fitness score of the bedsheet in view of the environment. Therefore, to enhance the overall ambience score of the environment, the system 102 may suggest another bedsheet which has a higher fitness score.
  • Figure 8 illustrates generation of ambience suggestion(s) for the environment, according to yet another embodiment of the present disclosure.
  • the system 102 may monitor the environment of the user.
  • the user may display a packed bedsheet to the system 102.
  • the system 102 may capture the image frame corresponding to the displayed bedsheet.
  • the system 102 may process the image frame to generate a virtual environment having the bedsheet on the bed.
  • the system 102 may provide an effective way to visualize a change in the environment based on the user's input.
  • Figure 9 illustrates generation of ambience suggestion(s) for the environment based on user activity, according to an embodiment of the present disclosure.
  • the system 102 may monitor the user's environment along with the user's viewing history. The system 102 may determine that the user watches romantic movies on Tuesdays and Fridays. Therefore, based on said determination, the system 102 may generate an ambience suggestion to add components with a romantic theme to enhance the user experience.
  • Figure 10 illustrates generation of ambience suggestion(s) for the environment based on user voice command, according to an embodiment of the present disclosure.
  • the system 102 may receive a voice command from the user, e.g., "change the table lamp position and add some light". The system 102 may process said command from the user and generate the ambience suggestion based on the received command.
  • Figure 11 illustrates generation of ambience suggestion for the environment in a virtual reality device, according to an embodiment of the present disclosure.
  • the system 102 may generate the ambience suggestion at the user's wearable virtual reality headset, to provide the user an interactive experience with the modified environment.
  • the system 102 is configured to identify a misplaced object inside an environment and suggest that the user place the object at the right place. Further, the system 102 may identify objects which do not match the environment theme and suggest suitable replacement/relocation of such objects. Further, the system 102 may be able to provide a personalized ambience to a user based on user interest and commands.
  • the present invention provides for various technical advancements based on the key features discussed above.
  • the present invention may provide well-managed and personalized environment to the user.
  • the present invention may also enable a user to visualize a change in the environment in an interactive way, i.e., based on voice commands or user gestures.
  • the present invention may lead to an enhancement of the user's wellbeing by providing an environment which is user-friendly, aesthetically pleasing, and effectively managed.

Abstract

A system and a method for generating an ambience suggestion for an environment are provided. The method includes processing image frames corresponding to the environment to identify objects in the environment. The method also includes determining a fitness score corresponding to each object. Further, the method includes determining a first ambience score based on the fitness scores. Also, the method includes identifying a target space in the environment based on the fitness scores corresponding to the objects. The method further includes generating an object arrangement for the target space. The method also includes determining a second ambience score of the environment based on the object arrangement. Moreover, the method includes comparing the first and the second ambience scores. Furthermore, the method includes recommending the ambience suggestion to the user based on the generated object arrangement upon determining the second ambience score being greater than the first ambience score.

Description

SYSTEMS AND METHODS FOR GENERATING AMBIENCE SUGGESTIONS FOR AN ENVIRONMENT
The present invention generally relates to a personalization of user ambience and more particularly relates to systems and methods for generating ambience suggestion(s) for an environment corresponding to a user.
Personalized and managed ambience plays an important role in a person’s life. Specifically, such ambience keeps the lifestyle of the person modern and stylish. Moreover, such a personalized and managed ambience provides elegance and comfort to the person’s lifestyle.
Augmented Reality (AR), which has become a widely accepted technology, enables a person to re-imagine and re-design his/her environment. For example, such technology enables the person to visualize new paint, decor, and furniture in his/her environment. Further, some AR-enabled solutions enable the user to visualize his/her environment in a 3D virtual environment.
However, none of the existing techniques provides an efficient way of managing, enhancing, and/or personalizing the ambience of the person. Accordingly, there is a need for a technique to overcome the above-mentioned problems.
This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention. This summary is neither intended to identify key or essential inventive concepts of the invention nor is it intended for determining the scope of the invention.
According to one embodiment of the present disclosure, a method for generating an ambience suggestion for an environment is disclosed. The method includes processing one or more image frames corresponding to the environment to identify one or more objects in the environment. The method also includes determining a fitness score corresponding to each of the one or more objects based on one or more parameters. The method further includes determining a first ambience score based on the determined fitness scores corresponding to the one or more objects. The method also includes identifying a target space in the environment based on the determined fitness scores corresponding to the one or more objects. Further, the method includes generating an object arrangement for the target space based on the environment. Furthermore, the method includes determining a second ambience score of the environment based on the generated object arrangement. Moreover, the method includes comparing the first ambience score and the second ambience score. Also, the method includes recommending the ambience suggestion to the user based on the generated object arrangement upon determining the second ambience score being greater than the first ambience score. The ambience suggestion is indicative of a change in the environment based on the generated object arrangement.
According to another embodiment of the present disclosure, a system for generating an ambience suggestion for an environment is disclosed. The system includes a memory and at least one processor communicably coupled to the memory. The at least one processor is configured to process one or more image frames corresponding to the environment to identify one or more objects in the environment. The at least one processor is further configured to determine a fitness score corresponding to each of the one or more objects based on one or more parameters. Moreover, the at least one processor is configured to determine a first ambience score based on the determined fitness scores corresponding to the one or more objects. Further, the at least one processor is configured to identify a target space in the environment based on the determined fitness scores corresponding to the one or more objects. Also, the at least one processor is configured to generate an object arrangement for the target space based on the environment. Moreover, the at least one processor is configured to determine a second ambience score of the environment based on the generated object arrangement. Furthermore, the at least one processor is configured to compare the first ambience score and the second ambience score. Thereafter, the at least one processor is configured to recommend the ambience suggestion to the user based on the generated object arrangement upon determining the second ambience score being greater than the first ambience score. The ambience suggestion is indicative of a change in the environment based on the generated object arrangement.
To further clarify the advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.
These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
Figure 1 illustrates an exemplary environment of a system for generating an ambience suggestion for an environment, according to an embodiment of the present disclosure;
Figure 2 illustrates a schematic block diagram of the system for generating the ambience suggestion for the environment, according to an embodiment of the present disclosure;
Figure 3 illustrates a schematic block diagram of modules of the system for generating the ambience suggestion for the environment, according to an embodiment of the present invention;
Figure 4A-4D illustrate an exemplary process flow for generating the ambience suggestion for the environment, according to an embodiment of the present disclosure;
Figure 5A-5B illustrate a flow chart of a method for generating the ambience suggestion for the environment, according to an embodiment of the present disclosure;
Figure 6 illustrates the generation of ambience suggestions for the environment, according to an embodiment of the present disclosure;
Figure 7 illustrates the generation of ambience suggestions for the environment, according to another embodiment of the present disclosure;
Figure 8 illustrates the generation of ambience suggestions for the environment, according to yet another embodiment of the present disclosure;
Figure 9 illustrates the generation of ambience suggestions for the environment based on user activity, according to an embodiment of the present disclosure;
Figure 10 illustrates the generation of ambience suggestions for the environment based on user voice command, according to an embodiment of the present disclosure; and
Figure 11 illustrates the generation of ambience suggestions for the environment in a virtual reality device, according to an embodiment of the present disclosure.
Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not have necessarily been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help to improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
For the purpose of promoting an understanding of the principles of the invention, reference will now be made to the various embodiments and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended, such alterations and further modifications in the illustrated system, and such further applications of the principles of the invention as illustrated therein being contemplated as would normally occur to one skilled in the art to which the invention relates.
It will be understood by those skilled in the art that the foregoing general description and the following detailed description are explanatory of the invention and are not intended to be restrictive thereof.
Reference throughout this specification to “an aspect”, “another aspect” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrase “in an embodiment”, “in another embodiment” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
The terms “comprise”, “comprising”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by “comprises... a” does not, without more constraints, preclude the existence of other devices or other sub-systems or other elements or other structures or other components or additional devices or additional sub-systems or additional elements or additional structures or additional components.
The present invention is directed towards a method and a system for recommending ambience suggestions to provide a managed and personalized environment to a user. Embodiments include identifying a target space in the environment to make suitable suggestions for modification in the environment. The system enhances the overall user experience in the environment and provides a more personalized and planned environment. Moreover, the system takes into consideration user’s interest, user’s activity and user’s media watching history while generating ambience recommendations for the environment. Further, embodiments provide a simple and cost-effective technique to improve user experience in an environment.
Figure 1 illustrates an exemplary environment 100 of a system 102 for generating an ambience suggestion for an environment, according to an embodiment of the present disclosure.
Fig. 1 illustrates a user watching a television 103 coupled with a camera device 104. The camera device 104 may be configured to capture image frame(s) corresponding to an environment 106. In an embodiment, the camera device 104 may be configured to capture image frames at a predetermined interval of time. Further, the environment 106 may correspond to a room where the television 103 is installed. Examples of the environment may include, but are not limited to, a study room, a dining hall, a bedroom, and so forth. Further, in the illustrated embodiment, the camera device 104 may be placed atop the television 103. However, embodiments are intended to cover any suitable location of the camera device 104 in the environment 106 from which the image frames can suitably be captured. In some embodiments, the camera device 104 may be disposed with any other electronic device located within the environment. Further, the television 103 may also be configured to log the user's watching history. The television 103 may be configured to determine and store user interest based on content viewed by the user. Further, the television 103 is exemplary in nature, and embodiments cover, or are intended to cover, any other suitable display and/or media device. Examples of media devices may include, but are not limited to, a personal computer, a smart watch, a voice assistant device, an Internet of Things (IoT) device, a laptop, and so forth.
The system 102 may be configured to receive the image frames captured by the camera device 104. Further, the system 102 may be configured to receive information collected and stored by the television 103. In an embodiment, the system 102 may be installed within the television device 103 where the camera device 104 is installed. Further, the system 102 may be configured to log user watching history. Also, the system 102 may be configured to determine and store user interests based on content viewed by the user.
In an alternative embodiment, the system 102 may be a standalone entity remotely coupled to the camera device 104 and the television 103. In yet another embodiment, the system 102 may be installed within a mobile device of the user.
In an exemplary embodiment, the system 102 may be configured to process the image frames captured by the camera device 104 to identify object(s) in the environment. In the illustrated embodiment, the identified objects may include, but are not limited to, chairs, a table, a lamp, a cupboard, and so forth. Further, the system 102 may determine a fitness score corresponding to each identified object. The fitness score may be indicative of how aesthetically pleasing an object is with respect to the environment. In another embodiment, the fitness score may be indicative of the usefulness of the object in the environment. The usefulness of the object may be a measure of the usage of the object by the user in the environment. For example, if a lamp has not been used by the user for a predefined long period of time, the lamp may have minimal usefulness in the environment. The fitness score may consider various parameters associated with the object such as, but not limited to, spacing, usage, form, light, color, texture, and pattern. In an exemplary embodiment, the fitness score may be determined based on one or more parameters including, but not limited to, environment theme, user interest, object location, and object usage.
In an exemplary embodiment, the fitness score may be determined using equation 1, as mentioned below:
\( f_{object} = T_{Theme} + UI_{User\ Interest} + L_{Location} + U_{Usage} + OF_{Other\ Factors} \)
\( F_{final} = \sum_{objects} f_{object} \)
- Eq. 1
Here, \( T_{Theme} \) may correspond to the theme of the environment (interchangeably referred to as the room). The theme of the environment may be based on wall color, the theme of a group of objects, lighting, etc.
\( UI_{User\ Interest} \) may correspond to the interest level of the user in the object/activity/environment.
\( L_{Location} \) may correspond to a location of the object.
\( OF_{Other\ Factors} \) may correspond to additional factors related to the object such as, but not limited to, cost, reachability, etc.
Further, zero may represent the lowest value of any component in Eq. 1.
One may represent the highest value of any component in Eq. 1.
Further, \( F_{final} \) may represent a final fitness score of all the objects, or an ambience score of the environment.
The \( T_{Theme} \) may take a value corresponding to a category/theme of the environment, such as, but not limited to, festive, living, personal usage, bedroom, dining, activity, and so forth. In an exemplary embodiment, a numeric value corresponding to the categories of the environment may be assigned to the variable \( T_{Theme} \) using any suitable technique such as, but not limited to, a one-hot encoding technique. The one-hot encoding technique may assign a binary vector of length n to the variable \( T_{Theme} \) based on the number of themes/categories. Here, n may be defined as the cardinality of the set of themes/categories.
Further, \( UI_{User\ Interest} \) may take a value corresponding to a category which may relate to an object, an activity, or an environment. Examples of the categories for determining the value of \( UI_{User\ Interest} \) may include, but are not limited to, home decor, traveller, spiritual, homebody, sportsman, party lover, photography, art, technology, gardening, animals, and books. In an exemplary embodiment, a numeric value corresponding to the different categories may be assigned to the variable \( UI_{User\ Interest} \) using any suitable technique such as, but not limited to, the one-hot encoding technique. The one-hot encoding technique may assign a binary vector of length m to the variable \( UI_{User\ Interest} \) based on the number of categories used for defining user interest. Here, m may be defined as the cardinality of the set of categories corresponding to user interest. A brief sketch of this encoding is given below.
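A minimal sketch of the one-hot encodings described above, with truncated, illustrative category lists:

```python
THEMES = ["festive", "living", "personal usage", "bedroom", "dining", "activity"]
INTERESTS = ["home decor", "traveller", "spiritual", "homebody", "books"]  # truncated

def one_hot(value, categories):
    # Binary vector whose length equals the cardinality of the category set.
    return [1 if category == value else 0 for category in categories]

t_theme = one_hot("bedroom", THEMES)    # [0, 0, 0, 1, 0, 0]  (n = 6)
ui_user = one_hot("books", INTERESTS)   # [0, 0, 0, 0, 1]     (m = 5)
```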
The value of \( L_{Location} \) may be determined during the determination of the object position. Further, the value of \( L_{Location} \) may depend on a position of the user in the environment. Furthermore, the value of \( L_{Location} \) may be either discrete or Boolean in nature. In an exemplary embodiment, the objects may be classified into two types, namely container or contained. The container object may be configured to store various contained objects. For example, a bookshelf may be considered a container object and books may be classified as contained objects. Also, for a container type of object, the value of \( L_{Location} \) may be either accessible or not accessible. Further, for a contained type of object, the value of \( L_{Location} \) may be misplaced or properly placed. Further, a numerical and/or logical value may be assigned to \( L_{Location} \) based on the determined value. For example, for an inaccessible object, the value of \( L_{Location} \) may be defined as zero, and for an accessible object, the value of \( L_{Location} \) may be defined as one.
Further, \( U_{Usage} \) may correspond to a value which defines the usability of the object in the environment. The value of \( U_{Usage} \) may be defined in binary format, where a low usability of the object may be defined as zero and a high usability of the object may be defined as one.
In an embodiment, \( OF_{Other\ Factors} \) may correspond to a property of the identified object such as, but not limited to, the cost of the object, the condition of the object, and so forth. The value of \( OF_{Other\ Factors} \) based on the cost of the object may be numerical and defined as a positive real number. Further, the value of \( OF_{Other\ Factors} \) based on the condition of the object may be either discrete or Boolean in nature. For example, the value of \( OF_{Other\ Factors} \) based on the condition of the object may be either good or bad. Such values of \( OF_{Other\ Factors} \) may be suitably converted into numbers using a suitable technique such as, but not limited to, the one-hot encoding technique. Furthermore, the examples described above are exemplary in nature, and the different variables/parameters used for determining the value of the fitness score may have any suitable value as per the implementation or requirement. In an embodiment, for an expensive object, the value of \( OF_{Other\ Factors} \) may be defined as zero, and for an affordable object, the value of \( OF_{Other\ Factors} \) may be defined as one. Further, the value of \( OF_{Other\ Factors} \) based on cost may be subject to the user profile and may vary from user to user. Further, the value of \( OF_{Other\ Factors} \) based on the condition of the object may be defined as zero for a worn object and one for an object in good condition.
In some additional embodiments, the fitness score of the object may be defined by the following equations Eq. 2 to Eq. 5, of which Eq. 2 is a weighted form of the fitness score:
\( f_{object} = w_1 T_{Theme} + w_2 UI_{User\ Interest} + w_3 L_{Location} + w_4 U_{Usage} + w_5 OF_{Other\ Factors} \)
- Eq. 2
Here, \( w_1 \) to \( w_5 \) may define weights corresponding to each variable.
[Eq. 3, Eq. 4, and Eq. 5 appear only as equation images in the source and are not legible.]
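A minimal sketch of the score computation, assuming the reconstructed forms of Eq. 1 and Eq. 2 above; components are assumed to be already reduced to scalars in [0, 1] (e.g., from the one-hot encodings), and the weights are illustrative:

```python
def fitness_score(t_theme, ui_interest, l_location, u_usage, of_other,
                  weights=(0.2, 0.2, 0.2, 0.2, 0.2)):
    # Weighted per-object fitness score (Eq. 2); each component lies in [0, 1].
    components = (t_theme, ui_interest, l_location, u_usage, of_other)
    return sum(w * c for w, c in zip(weights, components))

def ambience_score(component_tuples):
    # Ambience score of the environment (Eq. 1): sum of per-object fitness scores.
    return sum(fitness_score(*components) for components in component_tuples)
```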
Further, the system 102 may be configured to identify a target space in the environment based on the determined fitness scores of the objects. For example, if the system 102 determines that an object has a low fitness score, the system 102 may consider the space occupied by such object(s) as the target space for making suitable modifications in the environment. In an embodiment, in order to determine a low fitness score, the system 102 may compare the determined fitness score with a previously determined fitness score of the object or with a predefined fitness score threshold of the object. For instance, if the determined value of the fitness score is less than the previously determined fitness score of the object or the predefined fitness score threshold of the object, the system 102 may consider that the object has a low fitness score. The target space may be a part of the environment having dimensions suitable to accommodate the target object(s). The target space may be formed by combining the spaces occupied by multiple objects or by splitting a space. For instance, the system 102 may identify the space occupied by the chairs and the table as the target space. A minimal sketch of this selection follows.
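A minimal sketch of this target-space selection, assuming per-object thresholds and a hypothetical region_of helper that returns the space an object occupies:

```python
def find_target_spaces(objects, fitness, threshold, region_of):
    # Objects whose fitness score falls below their threshold (or below a
    # previously observed score) mark the candidate target space(s).
    low_fitness = [obj for obj in objects if fitness[obj] < threshold[obj]]
    return [region_of(obj) for obj in low_fitness]
```

The regions returned here could then be merged into one larger target space, or a large region split, to fit the intended object arrangement.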
The system 102 may be configured to generate an object arrangement for the target space based on the environment. The object arrangement may include a re-arrangement of at least one of the one or more identified objects, a replacement of the at least one of the one or more identified objects, or an addition of a new object at the target space. The system 102 may consider various parameters while generating the object arrangement. For example, the system 102 may consider parameters such as, but not limited to, a type of object located at the target space, an orientation of the object at the target space, a position of the object at the target space, neighboring objects, user activities, user interest, user-related events, characteristics of the environment, and so forth. For example, in the illustrated embodiment, the system 102 may be configured to determine that the user has an interest in adventurous activities based on the content viewing history of the user. Based on said determination of user interest, the system 102 may determine recommendation(s) to replace the chairs and table located at the target space with a camp and trees 108, as depicted in Fig. 1.
In some embodiments, before recommending the determined object arrangement to the user, the system 102 may determine a first ambience score of the environment based on the currently identified objects and a second ambience score of the environment based on the generated object arrangement. The system 102 may compare the first ambience score and the second ambience score. Moreover, the system 102 may recommend an ambience suggestion to the user based on the generated object arrangement (or re-arrangement) upon determining that the second ambience score is greater than the first ambience score.
In an embodiment, the system 102 may generate a virtual environment representing the environment with the suggested object arrangement. The virtual environment may be displayed to the user via any suitable device such as, but not limited to, the television 103, a mobile device of the user, a virtual reality (VR) device of the user, and so forth. In the illustrated embodiment, the system 102 may display the object arrangement 108 in the environment via the television 103.
Thus, the system 102 may enhance user experience and personalization of the environment. The system 102 may effectively manage user ambience which may improve user's wellbeing by suggesting ambience modifications which improves user productivity, user lifestyle, and user mental health.
Figure 2 illustrates a schematic block diagram of the system 102 for generating the ambience suggestion for the environment, according to an embodiment of the present disclosure. In an embodiment, the system 102 may be included within an electronic/user device associated with a user, for example, a television or a mobile phone. In another embodiment, the system 102 may be configured to operate as a standalone device or a system based on a server/cloud architecture communicably coupled to the electronic device/user device associated with the user. Examples of the electronic device may include, but are not limited to, a mobile phone, a smart watch, a laptop computer, a desktop computer, a Personal Computer (PC), a notebook, a tablet, an IoT device, or any other smart device communicably coupled to the camera device 104.
The system 102 may be configured to receive and process image frames captured by the camera device 104 to generate the ambience suggestion for the environment of the user. The system 102 may include a processor/controller 202, an Input/Output (I/O) interface 204, one or more modules 206, a transceiver 208, and a memory 210.
In an exemplary embodiment, the processor/controller 202 may be operatively coupled to each of the I/O interface 204, the modules 206, the transceiver 208 and the memory 210. In one embodiment, the processor/controller 202 may include at least one data processor for executing processes in Virtual Storage Area Network. The processor/controller 202 may include specialized processing units such as integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc. In one embodiment, the processor/controller 202 may include a central processing unit (CPU), a graphics processing unit (GPU), or both. The processor/controller 202 may be one or more general processors, digital signal processors, application-specific integrated circuits, field-programmable gate arrays, servers, networks, digital circuits, analog circuits, combinations thereof, or other now known or later developed devices for analyzing and processing data. The processor/controller 202 may execute a software program, such as code generated manually (i.e., programmed) to perform the desired operation.
The processor/controller 202 may be disposed in communication with one or more input/output (I/O) devices via the I/O interface 204. The I/O interface 204 may employ communication code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMax, or the like, etc.
Using the I/O interface 204, the system 102 may communicate with one or more I/O devices. For example, the input device may be an antenna, microphone, touch screen, touchpad, storage device, transceiver, video device/source, etc. The output devices may be a printer, fax machine, video display (e.g., cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), plasma, Plasma Display Panel (PDP), Organic light-emitting diode display (OLED) or the like), audio speaker, etc. In an embodiment, the system 102 may communicate with the electronic device associated with the user using the I/O interface 204.
The processor/controller 202 may be disposed in communication with a communication network via a network interface. In an embodiment, the network interface may be the I/O interface 204. The network interface may connect to the communication network to enable connection of the system 102 with the outside environment and/or device/system. The network interface may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc. The communication network may include, without limitation, a direct interconnection, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, etc. Using the network interface and the communication network, the system 102 may communicate with other devices.
In an exemplary embodiment, the processor/controller 202 may receive image frame(s) corresponding to the environment from the camera device 104. The processor/controller 202 may execute a set of instructions on the received image frames to recommend ambience suggestion(s) to the user to improve the environment. The processor/controller 202 may implement various techniques such as, but not limited to, data extraction, Artificial Intelligence (AI), and so forth to achieve the desired objective(s) (for example, to enhance user experience and personalization of the environment).
In some embodiments, the memory 210 may be communicatively coupled to the at least one processor/controller 202. The memory 210 may be configured to store data and instructions executable by the at least one processor/controller 202. In one embodiment, the memory 210 may communicate via a bus within the system 102. The memory 210 may include, but is not limited to, a non-transitory computer-readable storage medium, such as various types of volatile and non-volatile storage media including, but not limited to, random access memory, read-only memory, programmable read-only memory, electrically programmable read-only memory, electrically erasable read-only memory, flash memory, magnetic tape or disk, optical media, and the like. In one example, the memory 210 may include a cache or random-access memory for the processor/controller 202. In alternative examples, the memory 210 may be separate from the processor/controller 202, such as a cache memory of a processor, the system memory, or other memory. The memory 210 may be an external storage device or database for storing data. The memory 210 may be operable to store instructions executable by the processor/controller 202. The functions, acts or tasks illustrated in the figures or described may be performed by the programmed processor/controller 202 executing the instructions stored in the memory 210. The functions, acts or tasks are independent of the particular type of instruction set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, micro-code and the like, operating alone or in combination. Likewise, processing strategies may include multiprocessing, multitasking, parallel processing, and the like.
In some embodiments, the modules 206 may be included within the memory 210. The memory 210 may further include a database 212 to store data. The one or more modules 206 may include a set of instructions that may be executed to cause the system 102 to perform any one or more of the methods/processes disclosed herein. In some embodiments, the modules 206 may be configured to perform one or more operations of the processor 202 to achieve the desired objective of the present disclosure. The one or more modules 206 may be configured to perform the steps of the present disclosure using the data stored in the database 212 to generate the ambience recommendation for the environment, as discussed herein. In an embodiment, each of the one or more modules 206 may be a hardware unit which may be outside the memory 210. Further, the memory 210 may include an operating system 214 for performing one or more tasks of the system 102, as performed by a generic operating system in the communications domain. The transceiver 208 may be configured to receive and/or transmit signals to and from the electronic device associated with the user. In one embodiment, the database 212 may be configured to store the information as required by the one or more modules 206 and the processor/controller 202 to perform one or more functions for generating the ambience suggestion for the environment.
Further, the present invention contemplates a computer-readable medium that includes instructions or receives and executes instructions responsive to a propagated signal. Further, the instructions may be transmitted or received over the network via a communication port or interface or using a bus (not shown). The communication port or interface may be a part of the processor/controller 202 or may be a separate component. The communication port may be created in software or may be a physical connection in hardware. The communication port may be configured to connect with a network, external media, the display, or any other components in system, or combinations thereof. The connection with the network may be a physical connection, such as a wired Ethernet connection or may be established wirelessly. Likewise, the additional connections with other components of the system 102 may be physical or may be established wirelessly. The network may alternatively be directly connected to the bus.
In some embodiments, at least one of the plurality of modules 206 may be implemented through an Artificial Intelligence (AI) model. A function associated with AI may be performed through the non-volatile memory, the volatile memory, and the processor 202.
The processor 202 may include one or a plurality of processors. At this time, one or a plurality of processors may be a general-purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, a graphics-only processing unit such as a graphics processing unit (GPU), a visual processing unit (VPU), and/or an AI-dedicated processor such as a neural processing unit (NPU).
The one or a plurality of processors control the processing of the input data/images in accordance with a predefined operating rule or artificial intelligence (AI) model stored in the non-volatile memory and the volatile memory. The predefined operating rule or artificial intelligence model is provided through training or learning.
Here, being provided through learning means that, by applying a learning technique to a plurality of learning data, a predefined operating rule or AI model of a desired characteristic is made. The learning may be performed in a device itself in which AI according to an embodiment is performed, and/or may be implemented through a separate server/system.
The AI model may consist of a plurality of neural network layers. Each layer has a plurality of weight values and performs a layer operation through calculation of a previous layer and an operation of a plurality of weights. Examples of neural networks include, but are not limited to, convolutional neural network (CNN), deep neural network (DNN), recurrent neural network (RNN), restricted Boltzmann Machine (RBM), deep belief network (DBN), bidirectional recurrent deep neural network (BRDNN), generative adversarial networks (GAN), and deep Q-networks.
The learning technique is a method for training a predetermined target device (for example, a robot) using a plurality of learning data to cause, allow, or control the target device to make a determination or prediction. Examples of learning techniques include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning.
According to the disclosure, in a method for generating the ambience suggestion(s) for an environment of a user, the system 102 may use an artificial intelligence model to recommend various object arrangements for the environment. Further, the system 102 may use the AI model to generate instructions for data obtained from various sensors. The processor 202 may perform a pre-processing operation on the data to convert into a form appropriate for use as an input for the artificial intelligence model. The artificial intelligence model may be obtained by training. Here, "obtained by training" means that a predefined operation rule or artificial intelligence model configured to perform a desired feature (or purpose) is obtained by training a basic artificial intelligence model with multiple pieces of training data by a training technique. The artificial intelligence model may include a plurality of neural network layers. Each of the plurality of neural network layers includes a plurality of weight values and performs neural network computation by computation between a result of computation by a previous layer and the plurality of weight values.
Reasoning prediction is a technique of logically reasoning and predicting by determining information and includes, e.g., knowledge-based reasoning, optimization prediction, preference-based planning, or recommendation.
For the sake of brevity, the architecture, and standard operations of the operating system 214, the memory 210, the database 212, the processor/controller 202, the transceiver 208, the I/O interface 204, and the AI model are not discussed in detail.
Figure 3 illustrates a schematic block diagram of the modules 206 of the system 102 for generating the ambience suggestion for the environment, according to an embodiment of the present invention.
The modules 206 may include an input module 302, an ambience analysis module 304, an ambience processing module 306, a virtual object creator module 308, an output module 310, and a database module 312. The modules 304, 306, 308, 310, 312 may be communicably coupled to each other.
The input module 302 may be configured to generate one or more inputs required to generate the ambience suggestion. In an embodiment, the input module 302 may be configured to collect input data from the user for the system 102. The input module 302 may act as an interface between various input devices and the system 102. Examples of input devices may include, but are not limited to, a camera device, a microphone, a speaker, and a display. The input module 302 may be communicably coupled to the input devices to generate/receive various inputs, such as, but not limited to, image/object input, voice input, gesture/sensor input, calendar input, user preference and history input, and so forth. The image/object input may correspond to image frames captured by the camera device 104 and/or information associated with objects identified from the captured image frames. The voice input may be generated based on input received from the microphone. The gesture/sensor input may be generated based on inputs from various sensors monitoring the user. The gesture/sensor input may include hand gestures, finger gestures, face gestures, eye gestures, and so forth. The calendar input may include information defining the date/time for receiving various input data, for example, image frames corresponding to the environment. Further, the user preference and history input may be based upon the user's viewing history on the television 103 and user interest information such as adventurous, sports, etc. The various information collected and/or generated by the input module 302 may be stored in the database module 312.
Further, the input module 302 may include a settings sub-module configured to generate and/or store configuration files, user interfaces, and settings. Particularly, the settings sub-module may be configured to include predefined rules and user preferences with respect to the operation of the system 102 and/or the modules 206. In an embodiment, the input module 302 may store all the settings pertaining to a user in the user data section of the database module 312. The settings may include information such as a threshold indicating a number of days to observe an event before generating a suggestion, user demographic data, user preferences, and so forth.
After receiving the inputs from the input module 302, the system 102 may initialize the ambience analysis module 304. The ambience analysis module 304 may be configured to process the inputs received from the input module 302 and identify object(s) in the environment. The ambience analysis module 304 may be configured to determine parameters associated with the objects and/or environment. The parameters may include, but are not limited to, object color, object material, object position, object orientation, the user's lifestyle, the lighting condition in the environment, the available space in the environment, space consumption information, and so forth. The ambience analysis module 304 may also be configured to update the database module 312 based on the determined parameters. In some embodiments, the ambience analysis module 304 may use a suitable object detection Application Programming Interface (API) to detect the objects and the parameters of the environment. In some other embodiments, the ambience analysis module 304 may use techniques such as, but not limited to, Region-Based Convolutional Neural Networks (R-CNN), Fast R-CNN, and YOLO (You Only Look Once); a brief sketch follows. The ambience analysis module 304 may be configured to transmit the generated information related to the objects and the parameters to the ambience processing module 306.
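As one concrete, non-limiting possibility, a pretrained Faster R-CNN detector from torchvision could supply the object list; the wiring around the library call is illustrative:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Pretrained Faster R-CNN detector (one of the R-CNN family mentioned above).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_objects(pil_image, score_threshold=0.5):
    # Returns (label_id, box, score) triples for one RGB image frame.
    with torch.no_grad():
        output = model([to_tensor(pil_image)])[0]  # dict: 'boxes', 'labels', 'scores'
    keep = output["scores"] > score_threshold
    return list(zip(output["labels"][keep].tolist(),
                    output["boxes"][keep].tolist(),
                    output["scores"][keep].tolist()))
```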
The ambience processing module 306 may be configured to receive information related to the objects and the parameters from the ambience analysis module 304. The ambience processing module 306 may be configured to process the received information to enable the system 102 to generate the ambience recommendation(s). The ambience processing module 306 may include various sub-modules namely, a classification module, a data processing module, a score generator module, a target identification module, and a recommendation module.
The classification module may include components such as, but not limited to, a type identifier, a neighbor identifier, a score definer, a threshold definer, and so forth. The type identifier may identify a type of each identified object in the environment. In an exemplary embodiment, the objects may be classified into two types, namely container or contained. The container object may be configured to store various contained objects. For example, a bookshelf may be considered a container object and books may be classified as contained objects. The neighbor identifier may be configured to identify neighboring objects corresponding to each identified object. Particularly, the neighbor identifier may be configured to determine a distance between objects to identify the neighboring objects corresponding to each object, as in the sketch below. The score definer may be configured to define rules for determining a fitness score corresponding to each object. Further, the threshold definer may be configured to define a threshold for monitoring the user before recommending the ambience suggestion(s). For example, the system 102 may define the threshold as five days. In such a scenario, the system 102 may not recommend the generated ambience suggestion to the user unless the system 102 has monitored the user for at least five days. The classification module may be configured to transmit the generated information to the data processing module.
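A minimal sketch of the distance-based neighbor identifier; the radius and the 3D-position map are illustrative:

```python
import math

def neighbors(positions, obj, radius=1.0):
    # positions: mapping of object -> (x, y, z) from the 3D-position step.
    # An object is a neighbor if it lies within `radius` of `obj`.
    return [other for other, p in positions.items()
            if other != obj and math.dist(positions[obj], p) <= radius]
```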
The data processing module may be configured to receive inputs from the classification module and perform different operations based on the type of objects. For example, the data processing module may be configured to perform multi-level classification of objects using an unsupervised Machine Learning (ML) model for container-type objects. Specifically, the data processing module may receive a cluster of objects as input and generate a similarity between the objects in the cluster. Further, the data processing module may be configured to use a rule-based model to process the contained type of objects. The data processing module may take contained-type objects as input and generate an association of such objects with the container object, as in the sketch below. In some embodiments, the objects may also be classified based on user preference and usage. The data processing module may be configured to use a usage-based model to take image frames as input and generate the user preference and associated objects.
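A minimal sketch of the rule-based contained-to-container association; the rule table and object fields are hypothetical:

```python
# Hypothetical rule table: contained type -> expected container type.
CONTAINER_RULES = {"book": "bookshelf", "cutlery": "drawer", "clothes": "wardrobe"}

def associate(contained, scene_objects):
    # Map a contained-type object to its expected container and report its status.
    expected_type = CONTAINER_RULES.get(contained["type"])
    containers = [o for o in scene_objects if o["type"] == expected_type]
    if not containers:
        return None, "container not available"
    container = containers[0]
    # 'container' field (hypothetical) points at the object currently holding it.
    status = "properly placed" if contained.get("container") is container else "misplaced"
    return container, status
```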
Further, the score generator module may be configured to generate a fitness score corresponding to each identified object based on the parameters associated with the objects and/or environment. The score generator module may be configured to generate thresholds corresponding to the fitness scores. Further, based on a comparison of the fitness scores with the generated thresholds, the system 102 may determine a target space, i.e., determine if an object needs to be replaced, removed, or modified. The score generator module may be configured to transmit the generated fitness scores and thresholds to the target identification module.
The target identification module may be configured to generate a first ambience score based on the fitness scores of the identified objects in the environment. In an embodiment, the first ambience score may be a summation of the fitness scores of all the identified objects in the environment. Further, the target identification module may include a target space finder configured to identify a target space in the environment based on the determined fitness scores corresponding to the identified objects in the environment. Moreover, the target identification module may include a combination creator configured to generate an object arrangement for the target space by creating a combination of different objects in the target space. The target identification module may also include a merger and splitter module configured to merge or split the target space to generate an effective object arrangement. The target identification module may be configured to identify target spaces and associated obstacles to determine whether to merge or split the target space, as in the sketch below. Examples of the target spaces may include, but are not limited to, movable objects like tables, chairs, etc., non-essential objects like paintings, wall arts, etc., unused objects such as lamps, bookshelves, etc., and spaces selected by the user. Examples of the obstacles may include, but are not limited to, non-movable objects like beds, almirahs, etc., essential objects like monitors, computers, etc., and objects selected by the user. In some embodiments, the target identification module may also be configured to generate a second ambience score of the environment based on the generated object arrangement.
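A minimal sketch of the merge-or-split check, with illustrative obstacle categories:

```python
# Illustrative obstacle types: non-movable or essential objects.
OBSTACLE_TYPES = {"bed", "almirah", "monitor", "computer"}

def can_merge(region_a, region_b, objects_between):
    # Two candidate target spaces may be merged into one larger target space
    # only if no obstacle lies between them; otherwise they are kept split.
    return not any(obj["type"] in OBSTACLE_TYPES for obj in objects_between)
```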
The ambience processing module 306 may also include the recommendation module configured to receive inputs from the target identification module and generate the ambience suggestion for the user. In some embodiments, the recommendation module may include an object generator configured to generate the ambience suggestion based on at least a comparison of the first ambience score and the second ambience score, and the fitness scores corresponding to the objects. The recommendation module may include a request generator configured to generate a request for the virtual object creator module 308 to generate the virtual objects corresponding to the generated object arrangement. The virtual object creator module 308 may be responsible for performing operations to generate a virtual environment based on the generated object arrangement and/or ambience suggestions. The virtual object creator module 308 may include a virtual object database including a plurality of virtual objects along with associated metadata. The virtual object creator module 308 may also include a virtual object manager configured to query the virtual object database for a virtual object based on the request received from the recommendation module. The virtual object database may return one or more virtual objects based on the query requests received from the virtual object manager. The virtual object creator module 308 may also include a response module configured to return the requested virtual object and/or virtual environment to the recommendation module and/or the ambience processing module 306. Further, the recommendation module may receive the virtual object and/or virtual environment from the virtual object creator module 308. In an exemplary embodiment, the recommendation module may only receive the virtual object from the virtual object creator module 308 and may include a virtual view creator configured to generate a virtual environment using the received virtual object.
The ambience processing module 306 may be communicably coupled to the output module 310 to generate the ambience suggestion for the user and/or to display the generated virtual environment to the user. The output module 310 may include media devices such as televisions, mobile devices, virtual reality headsets, refrigerators, or any other suitable device with a display. The media devices may also include components such as, but not limited to, I/O interfaces, display devices, operating systems, memory, AR/VR/MR modules, and so forth.
Each of the modules 302-310 may be communicably coupled to the database module 312 to store or retrieve information. The database module 312 may include knowledge-based information, rule-based information, object data, position data, sensor data, fitness scores, and threshold data, as discussed throughout the specification. The database module 312 may also include user data, container/contained data, target space data, position data, movable and/or non-movable object data, and utility-based data.
While the embodiments described herein are exemplary in nature, the modules 302-312 may interchange operations as required. Further, in some embodiments, one or more operations of the modules 302-312 may be performed by the processor 202. Further, the modules 206 may be coupled with an external device using a network.
Figures 4A-4D illustrate an exemplary process flow 400 for generating the ambience suggestion(s) for the environment, according to an embodiment of the present disclosure.
At step 402, the camera device 104 may capture the environment and generate image frames corresponding to the environment. Next, at step 404, the system 102 may perform scene understanding based on the image frames generated by the camera device 104. Specifically, the system 102 may process the image frames to identify object(s) in the environment. At step 406, the system 102 may identify a 3D position and orientation of each of the identified objects in the environment.
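For illustration, the output of scene understanding (steps 404-406) can be represented as a list of detections carrying a label, a 3D position, and an orientation. The detector itself and the field names below are assumptions, not part of the disclosure.

```python
# Hypothetical detection records produced by steps 404-406.
detections = [
    {"label": "bookshelf", "position": (2.0, 0.0, 1.1), "orientation_deg": 90.0},
    {"label": "book",      "position": (0.8, 0.4, 0.5), "orientation_deg": 0.0},
]

def identified_labels(dets: list) -> list:
    """Labels of the objects identified in the captured environment."""
    return [d["label"] for d in dets]

print(identified_labels(detections))  # ['bookshelf', 'book']
```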
Next, at step 408, the system 102 may determine which category an identified object corresponds to. The system 102 may classify each of the identified objects into three types, namely, container object, contained object, and user preference and usage-based object. The system 102 may also identify neighboring objects corresponding to each of the identified objects. The system 102 may define a threshold for generating recommendations. The threshold may define a number of days the system 102 needs to monitor the environment before generating the ambience suggestion. The system 102 may also determine a fitness score corresponding to each identified object. Also, the system 102 may identify a first ambience score based on the determined fitness scores. Further, the system 102 may determine a predefined fitness score threshold and compare the fitness score of each object with the predefined fitness score threshold. The system 102 may identify one or more objects as target objects based on said comparison of the fitness score with the predefined fitness score threshold. Particularly, the system 102 may identify an object as a target object when the comparison indicates that the fitness score of said object is below the predefined fitness score threshold. In an exemplary embodiment, the system 102 may identify a type of the target object. Upon determining the type of the target object as "container", the system 102 may perform the steps at 410; for the type of target object as "contained", the system 102 may perform the steps at 412; and for the type of target object as "user preference and usage", the system 102 may perform the steps at 414.
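The three-way dispatch at step 408 may be sketched as below; the enum values mirror the three object types named above, while the routing strings are only descriptive.

```python
from enum import Enum

class ObjectType(Enum):
    CONTAINER = "container"
    CONTAINED = "contained"
    USAGE_BASED = "user preference and usage"

def route_target_object(obj_type: ObjectType) -> str:
    """Dispatch a target object to the branch handled at steps 410/412/414."""
    if obj_type is ObjectType.CONTAINER:
        return "step 410: unsupervised similarity model"
    if obj_type is ObjectType.CONTAINED:
        return "step 412: rule-based container association"
    return "step 414: usage-based preference model"

print(route_target_object(ObjectType.CONTAINED))  # step 412: ...
```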
At step 410, the system 102 may use an unsupervised ML model to generate a similarity between the identified target object and the environment. Further, at step 410, the system 102 may perform the sequence of operations as illustrated by Fig. 4B. Specifically, the system 102 may make a cluster of objects by combining the target object with the neighboring objects. The system 102 may determine a theme of the cluster and compare the determined theme with the ambience of the environment. Further, the system 102 may determine whether the determined cluster matches the ambience of the environment based on the comparison of the cluster theme with the ambience. Upon determining that the determined cluster does not match the ambience of the environment, the system 102 may perform step 416. However, upon determining that the determined cluster matches the ambience of the environment, the system 102 may determine if any relocation is required. Next, the system 102 may move to step 414, where the system 102 may use a usage-based model to take image frames as input and generate an output indicating user preference.
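One way to realize the cluster-theme check at step 410 is to represent themes as tag sets and compare their overlap with the room ambience, as in the sketch below; the tag representation and the overlap threshold are assumptions made for illustration.

```python
# Sketch of the cluster/theme comparison; representing a theme as a set of
# descriptive tags is an illustrative assumption.
def cluster_theme(target_tags: set, neighbor_tag_sets: list) -> set:
    """Theme of a cluster = union of the target's tags and its neighbors' tags."""
    theme = set(target_tags)
    for tags in neighbor_tag_sets:
        theme |= tags
    return theme

def matches_ambience(theme: set, ambience: set, min_overlap: float = 0.5) -> bool:
    """Jaccard-style overlap between the cluster theme and the room ambience."""
    overlap = len(theme & ambience) / max(len(theme | ambience), 1)
    return overlap >= min_overlap

theme = cluster_theme({"wood", "warm"}, [{"warm", "beige"}])
print(matches_ambience(theme, {"wood", "warm", "beige"}))  # True
```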
At step 412, the system 102 may use a rule-based model to take the contained type of target object as input and generate an output indicating a corresponding association of the target object with a container object. Further, at step 412, the system 102 may perform the sequence of operations as illustrated by Fig. 4C. Specifically, the system 102 may identify a container for the target object using the rule-based model. The rule-based model may be based on a set of rules defining relationships between a plurality of contained objects and associated container objects. The system 102 may also check for an association of the target object with the identified container object. For example, the system 102 may check whether the target object is suitably placed, misplaced, not available, or so forth. Next, the system 102 may determine whether the target object is misplaced or not. Upon determining that the target object is misplaced, the system 102 may determine whether the associated container object is accessible or not. If the associated container object is accessible, the system 102 may notify the user to keep the target object at the right place, i.e., at the container object. If the container object is not accessible, the system 102 may check for relocation of the container object. In such a scenario, the system 102 may move to step 414, where the system 102 may use a usage-based model to take image frames as input and generate an output indicating user preference.
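The rule-based association at step 412 may be sketched as a simple lookup table; the table contents are hypothetical examples.

```python
# Hypothetical rule table relating contained objects to their container objects.
CONTAINER_RULES = {
    "book": "bookshelf",
    "cushion": "sofa",
    "cutlery": "drawer",
}

def check_association(target: str, current_location: str) -> str:
    """Classify a contained object as suitably placed, misplaced, or rule-less."""
    container = CONTAINER_RULES.get(target)
    if container is None:
        return "no rule: determine a new container for the target object"
    if current_location == container:
        return "suitably placed"
    return f"misplaced: expected in {container}"

print(check_association("book", "side table"))  # misplaced: expected in bookshelf
```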
Further, upon determining that the target object is not misplaced, the system 102 may determine a new container for the target object. Next, the system 102 may match the theme of the newly identified container with the ambience of the environment using any suitable technique such as, but not limited to, AI, ML, and so forth. Lastly, the system 102 may move to step 416, where the system 102 may identify a target space based on the target object.
At step 414, the system 102 may use the usage-based model to take image frames as input and generate the output indicating user preference. Specifically, at step 414, the system 102 may perform the sequence of operations as illustrated by Fig. 4D. The system 102 may take, as input, different image frames corresponding to the environment and captured at different time intervals. The system 102 may identify a reference frame for the classified object/target object. The system 102 may determine whether the target object is misplaced in the reference frame or not. If the target object is not misplaced in the reference frame, the system 102 may move back to capturing image frames of the environment. However, if the target object is misplaced, the system 102 may identify the container object which is occupied. Further, the system 102 may create a sorted list of container objects based on the number of times each is used. Further, the system 102 may determine a user interest based on the container object with the highest usage value.
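The sorted container list at step 414 may be derived from a usage log, as in the following sketch; the log itself would come from the captured image frames, and the counting scheme is an assumption.

```python
from collections import Counter

def rank_containers(usage_log: list) -> list:
    """Sort container objects by how often the user actually used them."""
    return [name for name, _ in Counter(usage_log).most_common()]

# Hypothetical usage log accumulated over the monitoring window.
log = ["side table", "side table", "bookshelf", "side table"]
ranked = rank_containers(log)
print(ranked[0])  # 'side table' -> the container with the highest usage value
```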
Next, at step 416, the system 102 may identify the target space based on the target object. Further, the system 102 may generate all possible combinations of the object arrangements at the target space. The system 102 may generate ambience scores based on the different object arrangements. Further, the system 102 may select the object arrangement with the highest ambience score and consider the associated ambience score as the second ambience score. At step 418, the system 102 may compare the first ambience score and the second ambience score. Upon determining that the second ambience score is greater than the first ambience score, the system 102 may recommend the object arrangement to the user, as shown in step 420. The system 102 may also generate a virtual view of the generated object recommendation.
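Steps 416-420 amount to enumerating candidate arrangements, scoring each, and recommending only when the best score beats the current one. A brute-force sketch, assuming a caller-supplied scoring function, is shown below; exhaustive permutation is illustrative and would be pruned in practice.

```python
from itertools import permutations
from typing import Callable, Sequence

def best_arrangement(objects: Sequence,
                     score_fn: Callable) -> tuple:
    """Enumerate arrangements of the target space and keep the best score."""
    best, best_score = None, float("-inf")
    for arrangement in permutations(objects):
        score = score_fn(arrangement)
        if score > best_score:
            best, best_score = arrangement, score
    return best, best_score

def should_recommend(first_score: float, second_score: float) -> bool:
    """Recommend only when the rearranged environment scores higher."""
    return second_score > first_score

# Toy scoring function standing in for the ambience-score computation.
_, second = best_arrangement(["lamp", "books", "plant"],
                             score_fn=lambda a: float(a[0] == "lamp"))
print(should_recommend(first_score=0.0, second_score=second))  # True
```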
Figures 5A and 5B illustrate a flow chart of a method 500 for generating the ambience suggestion for the environment, according to an embodiment of the present disclosure. The method 500 may be performed by the system 102.
At step 502, the method 500 includes processing one or more image frames corresponding to the environment to identify one or more objects in the environment. At step 504, the method 500 includes determining a fitness score corresponding to each of the one or more objects based on one or more parameters. In an embodiment, the one or more parameters comprise at least one of environment theme, user interest, object location, and object usage. At step 506, the method 500 includes determining an object threshold value corresponding to each of the one or more identified objects. At step 508, the method 500 includes comparing the fitness score of each object with the corresponding object threshold value.
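The exact form of the fitness score (Eq. 2 of the disclosure) combines these parameters; purely as an assumption, an equally weighted combination might look like the following sketch.

```python
# Assumed weighted-sum form of the fitness score; the real Eq. 2 and its
# weights are defined elsewhere in the disclosure and may differ.
def fitness_score(theme_match: float, user_interest: float,
                  location: float, usage: float,
                  weights: tuple = (0.25, 0.25, 0.25, 0.25)) -> float:
    parts = (theme_match, user_interest, location, usage)
    return sum(w * p for w, p in zip(weights, parts))

print(fitness_score(0.8, 0.6, 0.2, 0.4))  # approx. 0.5 with equal weights
```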
At step 510, the method 500 includes determining a first ambience score based on the determined fitness scores corresponding to the one or more objects. In an embodiment, the first ambience score may include a summation of the fitness scores corresponding to the one or more objects.
At step 512, the method 500 includes identifying a target space in the environment. At step 514, the method 500 includes determining a type of object for each of the identified one or more objects. Next, at step 516, the method 500 includes determining an orientation and a position of each of the identified one or more objects.
At step 518, the method 500 includes determining one or more neighboring objects corresponding to each of the one or more identified objects based on the orientation and the position corresponding to the identified object. Next, at step 520, the method 500 includes generating one or more clusters of objects based on the determined one or more neighboring objects and the corresponding identified object.
At step 522, the method 500 includes monitoring one or more user activities in the environment. Next, at step 524, the method 500 includes determining a user interest based on the one or more user activities. At step 526, the method 500 includes determining one or more user-related events. At step 528, the method 500 includes determining one or more additional characteristics of the environment. The one or more additional characteristics may comprise the color of the identified objects, the material of the identified objects, the lighting condition of the environment, and the space occupancy in the environment. Further, at step 530, the method 500 includes generating an object arrangement for the target space. In an embodiment, the object arrangement may include a re-arrangement of at least one of the one or more identified objects, a replacement of the at least one of the one or more identified objects, or an addition of a new object at the target space.
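Steps 518-520 can be illustrated with a simple radius test over the objects' 3D positions; the radius value and the data layout are assumptions made for this sketch.

```python
import math

# Hypothetical neighborhood radius in metres.
NEIGHBOR_RADIUS = 1.5

def neighbors(target_pos: tuple, others: dict,
              radius: float = NEIGHBOR_RADIUS) -> list:
    """Objects whose 3D position lies within a fixed radius of the target."""
    return [name for name, pos in others.items()
            if math.dist(target_pos, pos) <= radius]

room = {"side table": (1.0, 0.0, 0.5), "bed": (3.5, 0.0, 0.0)}
print(neighbors((0.5, 0.0, 0.5), room))  # ['side table']
```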
At step 532, the method 500 includes determining a second ambience score of the environment based on the generated object arrangement. Next, at step 534, the method 500 includes comparing the first ambience score and the second ambience score. Further, at step 536, the method 500 includes recommending the ambience suggestion to the user based on the generated object arrangement upon determining that the second ambience score is greater than the first ambience score.
At step 538, the method 500 includes generating a virtual environment corresponding to the environment. Lastly, at step 540, the method 500 includes rendering the recommended ambience suggestion in the virtual environment.
While the above-discussed steps in Figs. 5A-5B are shown and described in a particular sequence, the steps may be performed in a different sequence in accordance with various embodiments.
Figure 6 illustrates generation of ambience suggestion(s) for the environment, according to an embodiment of the present disclosure. In the illustrated embodiment, the system 102 may monitor a user environment for a period as defined by the threshold. Here, the environment may correspond to a room. In an embodiment, the user may define the threshold. The system 102 may observe that there is a bookshelf in the room and that the user keeps all the books in the bookshelf daily. However, one day the user leaves a book on the side table and forgets to keep the book on the shelf. The system 102 may monitor the behavior of the user for a number of days. Upon determining that the user has not placed the book in the bookshelf, the system 102 may generate an ambience suggestion such as "the book is not in the right place". The generated ambience suggestion(s) may be displayed to the user using a television monitoring the environment. The television may include the camera device 104 to determine displacement of the book. In some other embodiments, the system 102 may observe that the user is keeping the books at the side table. Therefore, the system 102 may determine that the bookshelf is not accessible to the user. Based on said determination, the system 102 may suggest relocation of the bookshelf to make the bookshelf more accessible to the user. Alternatively, the system 102 may suggest replacement of the side table with the bookshelf. Thus, based on the suggestion(s) by the system 102, the user may properly place the book in the bookshelf, thereby effectively managing the environment.
In an exemplary embodiment, to generate the ambience suggestion in the above scenario, the system 102 may identify the book(s) (object) placed on the side table. The system 102 may identify a type of each book. For example, the system 102 may classify the book(s) as contained objects. Further, for each book, the system 102 may identify a corresponding container object, for example, the bookshelf. Further, the system 102 may identify a theme of the environment as "Bedroom".
Upon determining the book(s) on the side table, the system 102 may generate fitness score(s) for the book(s). Initially, the system 102 may classify the user, based on user interest, as a "Book Lover". The system 102 may identify the side table as a target object. Further, the system 102 may determine a suitable replacement for the side table (the target object), namely the bookshelf. Therefore, the system 102 may generate the ambience suggestion as "move book to bookshelf". Further, with the placement of each book, the value of the variable LLocation (as shown in Eq. 2) may increase, which increases the overall fitness score of the object/environment.
In another embodiment, the system 102 may generate a fitness score of the bookshelf. The system 102 may identify that the bookshelf is inaccessible. Thus, the system 102 may generate the ambience suggestion as "move the bookshelf to another place which is more accessible to the user". Further, with the relocation of the bookshelf, the bookshelf may become more accessible to the user, resulting in an increase in the value of the variable LLocation (as shown in Eq. 2), which increases the overall fitness score of the object/environment.
Thus, the system 102 may generate suggestions which vary the variables of the fitness score so as to increase the overall fitness score of the environment.
Figure 7 illustrates generation of ambience suggestion(s) for the environment, according to another embodiment of the present disclosure. In the illustrated embodiment, the system 102 may monitor an environment, particularly a room ambience, and determine that an object, i.e., a bedsheet, does not match the overall environment. The determination may be made based on a fitness score of the bedsheet in view of the environment. Therefore, to enhance the overall ambience score of the environment, the system 102 may suggest another bedsheet with a higher fitness score.
Figure 8 illustrates generation of ambience suggestion(s) for the environment, according to yet another embodiment of the present disclosure. In the illustrated embodiment, the system 102 may monitor the environment of the user. The user may display a packed bedsheet to the system 102. The system 102 may capture an image frame corresponding to the displayed bedsheet. The system 102 may process the image frame to generate a virtual environment having the bedsheet on the bed. Thus, the system 102 may provide an effective way to visualize a change in the environment based on the user's input.
Figure 9 illustrates generation of ambience suggestion(s) for the environment based on user activity, according to an embodiment of the present disclosure. In the illustrated embodiment, the system 102 may monitor the user's environment along with the user's viewing history. The system 102 may determine that the user watches romantic movies on Tuesdays and Fridays. Therefore, based on said determination, the system 102 may generate an ambience suggestion, such as the addition of components with a romantic theme, to enhance the user experience.
Figure 10 illustrates generation of ambience suggestion(s) for the environment based on a user voice command, according to an embodiment of the present disclosure. In the illustrated embodiment, the system 102 may receive a voice command from the user, e.g., "change the table lamp position and add some light". The system 102 may process said command from the user and generate the ambience suggestion based on the received command.
Figure 11 illustrates generation of an ambience suggestion for the environment in a virtual reality device, according to an embodiment of the present disclosure. In the illustrated embodiment, the system 102 may generate the ambience suggestion at the wearable virtual reality headset of the user, to provide the user an interactive experience with the modified environment.
The system 102 is configured to identify a misplaced object inside an environment and suggest that the user place the object at the right place. Further, the system 102 may identify objects which do not match the environment theme and also suggest suitable replacement/relocation of such objects. Further, the system 102 may be able to provide a personalized ambience to a user based on user interests and commands.
The present invention provides various technical advancements based on the key features discussed above. For example, the present invention may provide a well-managed and personalized environment to the user. The present invention may also enable a user to visualize a change in the environment in an interactive way, i.e., based on voice commands or user gestures. The present invention may enhance the user's wellbeing by providing an environment which is user-friendly, aesthetically pleasing, and effectively managed.
While specific language has been used to describe the present subject matter, any limitations arising on account thereof are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein. The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment.

Claims (15)

  1. A method for generating an ambience suggestion for an environment, the method comprising:
    processing one or more image frames corresponding to the environment to identify one or more objects in the environment;
    determining a fitness score corresponding to each of the one or more objects based on one or more parameters;
    determining a first ambience score based on the determined fitness scores corresponding to the one or more objects;
    identifying a target space in the environment based on the determined fitness scores corresponding to the one or more objects;
    generating an object arrangement for the target space based on the environment;
    determining a second ambience score of the environment based on the generated object arrangement;
    comparing the first ambience score and the second ambience score; and
    recommending the ambience suggestion to the user based on the generated object arrangement upon determining the second ambience score being greater than the first ambience score, wherein the ambience suggestion is indicative of a change in the environment based on the generated object arrangement.
  2. The method as claimed in claim 1, wherein the one or more parameters comprise at least one of environment theme, user interest, object location, and object usage.
  3. The method as claimed in claim 1, wherein the first ambience score comprises a summation of the fitness scores corresponding to the one or more objects.
  4. The method as claimed in claim 1, wherein the object arrangement includes a re-arrangement of at least one of the one or more identified objects, a replacement of the at least one of the one or more identified objects, or an addition of a new object at the target space.
  5. The method as claimed in claim 1, comprising:
    determining an object threshold value corresponding to each of the one or more identified objects;
    comparing the fitness score of each object with the corresponding object threshold value; and
    identifying the target space in the environment based on the comparison of the fitness score of each object with the corresponding object threshold value.
  6. The method as claimed in claim 1, comprising:
    determining a type of object for each of the identified one or more objects;
    determining an orientation and a position of each of the identified one or more objects;
    determining one or more neighboring objects corresponding to each of the one or more identified objects based on the orientation and the position corresponding to the identified object;
    generating one or more clusters of objects based on the determined one or more neighboring objects and the corresponding identified object; and
    generating the object arrangement for the target space based at least on the type of object corresponding to the identified one or more objects at the target space and the generated one or more clusters of objects for the corresponding identified one or more objects at the target space.
  7. The method as claimed in claim 6, wherein the type of object comprises one of a container object and a contained object.
  8. The method as claimed in claim 1, comprising:
    monitoring one or more user activities in the environment;
    determining a user interest based on the one or more user activities; and
    generating the object arrangement for the target space based on the determined user interest.
  9. The method as claimed in claim 1, comprising:
    determining one or more user-related events; and
    generating the object arrangement for the target space based on the one or more user-related events.
  10. The method as claimed in claim 1, comprising:
    determining one or more additional characteristics of the environment, the one or more additional characteristics comprising color of the identified objects, material of the identified objects, lighting condition of the environment, and space occupancy in the environment; and
    generating the object arrangement for the target space based on the one or more additional characteristics of the environment.
  11. The method as claimed in claim 1, comprising:
    generating a virtual environment corresponding to the environment; and
    rendering the recommended ambience suggestion in the virtual environment.
  12. A system for generating an ambience suggestion for an environment, the system comprising:
    a memory;
    at least one processor communicably coupled to the memory, the at least one processor is configured to:
    process one or more image frames corresponding to the environment to identify one or more objects in the environment;
    determine a fitness score corresponding to each of the one or more objects based on one or more parameters;
    determine a first ambience score based on the determined fitness scores corresponding to the one or more objects;
    identify a target space in the environment based on the determined fitness scores corresponding to the one or more objects;
    generate an object arrangement for the target space based on the environment;
    determine a second ambience score of the environment based on the generated object arrangement;
    compare the first ambience score and the second ambience score; and
    recommend the ambience suggestion to the user based on the generated object arrangement upon determining the second ambience score being greater than the first ambience score, wherein the ambience suggestion is indicative of a change in the environment based on the generated object arrangement.
  13. The system as claimed in claim 12, wherein the one or more parameters comprise at least one of environment theme, user interest, object location, and object usage.
  14. The system as claimed in claim 12, wherein the first ambience score comprises a summation of the fitness scores corresponding to the one or more objects.
  15. The system as claimed in claim 12, wherein the object arrangement includes a re-arrangement of at least one of the one or more identified objects, a replacement of the at least one of the one or more identified objects, or an addition of a new object at the target space.
PCT/KR2023/018271 2022-12-09 2023-11-14 Systems and methods for generating ambience suggestions for an environment WO2024122916A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US19/049,590 US20250182427A1 (en) 2022-12-09 2025-02-10 Systems and methods for generating ambience suggestions for an environment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202211071100 2022-12-09
IN202211071100 2022-12-09

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US19/049,590 Continuation US20250182427A1 (en) 2022-12-09 2025-02-10 Systems and methods for generating ambience suggestions for an environment

Publications (1)

Publication Number Publication Date
WO2024122916A1 (en) 2024-06-13

Family

ID=91379550

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/018271 WO2024122916A1 (en) 2022-12-09 2023-11-14 Systems and methods for generating ambience suggestions for an environment

Country Status (2)

Country Link
US (1) US20250182427A1 (en)
WO (1) WO2024122916A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170228809A1 (en) * 2014-06-12 2017-08-10 A9.Com, Inc. Recommendations utilizing visual image analysis
US20170323023A1 (en) * 2011-06-20 2017-11-09 Primal Fusion Inc. Techniques for presenting content to a user based on the user's preferences
US20180121988A1 (en) * 2016-10-31 2018-05-03 Adobe Systems Incorporated Product recommendations based on augmented reality viewpoints
US20190378204A1 (en) * 2018-06-11 2019-12-12 Adobe Inc. Generating and providing augmented reality representations of recommended products based on style similarity in relation to real-world surroundings
KR20220011034A (en) * 2020-07-20 2022-01-27 김석진 Method for recommending additional furniture through existing furniture


Also Published As

Publication number Publication date
US20250182427A1 (en) 2025-06-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 23900935; Country of ref document: EP; Kind code of ref document: A1