WO2021240226A1 - System and method to create sensory stimuli events - Google Patents
- Publication number
- WO2021240226A1 (PCT/IB2020/057326)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- odour
- module
- viewer
- video scene
- analysing
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M21/00—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
- A61M21/02—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis for inducing sleep or relaxation, e.g. by direct nerve stimulation, hypnosis, analgesia
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63J—DEVICES FOR THEATRES, CIRCUSES, OR THE LIKE; CONJURING APPLIANCES OR THE LIKE
- A63J25/00—Equipment specially adapted for cinemas
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16Y—INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
- G16Y10/00—Economic sectors
- G16Y10/65—Entertainment or amusement; Sports
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16Y—INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
- G16Y20/00—Information sensed or collected by the things
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M21/00—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
- A61M2021/0005—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus
- A61M2021/0016—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus by the smell sense
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M21/00—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
- A61M2021/0005—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus
- A61M2021/0044—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus by the sight sense
- A61M2021/005—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus by the sight sense images, e.g. video
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M2205/00—General characteristics of the apparatus
- A61M2205/35—Communication
- A61M2205/3546—Range
- A61M2205/3553—Range remote, e.g. between patient's home and doctor's office
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M2205/00—General characteristics of the apparatus
- A61M2205/50—General characteristics of the apparatus with microprocessors or computers
Definitions
- Embodiments of the present disclosure relate to the creation of sensory stimuli experiences, and more particularly to a system to create sensory stimuli events in accordance with multimedia content and a method of operating the same.
- the entertainment industry incorporates seat movements and special effects such as snow, wind, rain, and the like. Such inclusions help relax and rejuvenate the mind, as viewers can immerse themselves completely in the ongoing show and enjoy it to the fullest.
- smell has also been introduced as an additional sensory stimulus, where the system is designed to release scents/fragrances/perfumes during the showing of a film so that viewers could experience a “smell” related to what was happening on the film screen.
- Such dispersed scents provide an enhanced experience, allowing viewers to develop deeper memories and emotional connections, as they add one more dimension, or stimulus, to be experienced by the viewers.
- Such a scent dispersing system lacks real-time control over the release and dispersion of the fragrances.
- the present system is also unable to control the intensity or concentration of the dispersed fragrance to the liking of the viewers. If the concentration of fragrance is too high, it becomes irritating; if it is too low, it will not even be noticed by the viewers. Either case defeats the whole purpose of adding smell as an additional stimulus or experience for the user/viewer. Since people have many entertainment options at their disposal, they quickly grow bored with what they have; driven by basic human tendency, people always crave something new. It is about time entertainment systems added something new to engage the audience/viewers.
- a system to create sensory stimuli events includes a video scene analysing module.
- the video scene analysing module comprises a first set of image capturing devices communicatively coupled with at least one IoT device.
- the video scene analysing module is configured to capture an image frame corresponding to a video scene of interest.
- the video scene analysing module is also configured to analyse the captured image frame for odour evaluation.
- the system also includes an odour stimuli module operable by the at least one IoT device.
- the odour stimuli module is operatively coupled to the video scene analysing module.
- the odour stimuli module is configured to select an odour from a pre-stored set of odours in accordance with the evaluated odour.
- the odour stimuli module is also configured to disperse a selected odour from the one or more canisters.
- the odour stimuli module is also configured to disperse an odour neutralizer from one or more odour neutralizer canisters.
- the system also includes a facial analysis module.
- the facial analysis module comprises a second set of image capturing devices communicatively coupled with the at least one IoT device.
- the facial analysis module is configured to capture a viewer facial expression image.
- the facial analysis module is also configured to analyse the viewer facial expression image by one or more emotion detection techniques to identify acceptance level of the dispersed odour.
- the facial analysis module is also configured to analyse the viewer facial expression image by the one or more emotion detection techniques to detect maximum interaction instant with respect to the video scene of interest.
- the system also includes a holographic simulation module operable by the at least one IoT device.
- the holographic simulation module is operatively coupled to the facial analysis module.
- the holographic simulation module is configured to create a holographic simulation of analysed video scene of interest based on detected maximum interaction instant of the viewer.
- the holographic simulation module is also configured to present created holographic simulation for the viewer in real time along with the video scene.
- a method for creating sensory stimuli events includes capturing an image frame corresponding to a video scene of interest.
- the method also includes analysing a captured image frame for odour evaluation.
- the method also includes selecting an odour from a pre-stored set of odours in accordance with the evaluated odour.
- the method also includes dispersing a selected odour from the one or more canisters.
- the method also includes dispersing an odour neutralizer from one or more odour neutralizer canisters.
- the method also includes capturing a viewer facial expression image.
- the method also includes analysing the viewer facial expression image by one or more emotion detection techniques to identify the acceptance level of the dispersed odour.
- the method also includes recalibrating intensity of the dispersed odour.
- the method also includes analysing the viewer facial expression image by the one or more emotion detection techniques to detect maximum interaction instant with respect to the video scene of interest.
- the method also includes creating a holographic simulation of analysed video scene of interest based on detected maximum interaction instant of the viewer.
- the method also includes presenting created holographic simulation for the viewer in real time along with the video scene.
- FIG. 1 is a block diagram representation of a system to create sensory stimuli events in accordance with an embodiment of the present disclosure.
- FIG. 2 is a schematic representation of an embodiment representing the system to create sensory stimuli events of FIG. 1 in accordance with an embodiment of the present disclosure.
- FIG. 3 is a block diagram of a computer or a server in accordance with an embodiment of the present disclosure.
- FIG. 4 is a flowchart representing the steps of a method for creating sensory stimuli events in accordance with an embodiment of the present disclosure.
- Embodiments of the present disclosure relate to a system to create sensory stimuli events.
- the system includes a video scene analysing module.
- the video scene analysing module comprises a first set of image capturing devices communicatively coupled with at least one IoT device.
- the video scene analysing module is configured to capture an image frame corresponding to a video scene of interest.
- the video scene analysing module is also configured to analyse the captured image frame for odour evaluation.
- the system also includes an odour stimuli module operable by the at least one IoT device.
- the odour stimuli module is operatively coupled to the video scene analysing module.
- the odour stimuli module is configured to select an odour from a pre-stored set of odours in accordance with the evaluated odour.
- the odour stimuli module is also configured to disperse a selected odour from the one or more canisters.
- the odour stimuli module is also configured to disperse an odour neutralizer from one or more odour neutralizer canisters.
- the system also includes a facial analysis module.
- the facial analysis module comprises a second set of image capturing devices communicatively coupled with the at least one IoT device.
- the facial analysis module is configured to capture a viewer facial expression image.
- the facial analysis module is also configured to analyse the viewer facial expression image by one or more emotion detection techniques to identify acceptance level of the dispersed odour.
- the facial analysis module is also configured to analyse the viewer facial expression image by the one or more emotion detection techniques to detect maximum interaction instant with respect to the video scene of interest.
- the system also includes a holographic simulation module operable by the at least one IoT device.
- the holographic simulation module is operatively coupled to the facial analysis module.
- the holographic simulation module is configured to create a holographic simulation of analysed video scene of interest based on detected maximum interaction instant of the viewer.
- the holographic simulation module is also configured to present created holographic simulation for the viewer in real time along with the video scene.
- FIG. 1 is a block diagram representation of a system (10) to create sensory stimuli events in accordance with an embodiment of the present disclosure.
- the term “sensory stimulus” refers to any event or object that is received by the senses and elicits a response from a person.
- the system (10) includes a video scene analysing module (20).
- the video scene analysing module (20) comprises a first set of image capturing devices (60) communicatively coupled with at least one IoT device (70).
- the term “Internet of Things (IoT)” refers to a system of interrelated computing devices, mechanical and digital machines provided with unique identifiers (UIDs) and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.
- the first set of image capturing devices (60) may be any cameras, such as smartphone cameras, film cameras, point-and-shoot cameras, and the like. In such embodiment, the first set of image capturing devices (60) are affixed around a video displaying frame.
- the video scene analysing module (20) is configured to capture an image frame corresponding to a video scene of interest.
- captured image frame includes a set of static image frames and textual information.
- the first set of cameras (60) captures multiple image frames for further understanding of the scene.
- the video scene analysing module (20) is also configured to analyse the captured image frame for odour evaluation.
- the captured image frame is analysed by an object detection technique to identify the odour consistent with the captured image frame.
- each of the captured multiple image frames is analysed by the object detection technique.
- the technique analyses the textual information as well as the static frame objects to understand an odour.
- the “object detection technique” refers to a technology related to computer vision which deals with detecting instances of semantic objects of a certain class (such as humans, buildings, or vehicles) in digital videos and images.
- the system (10) analyses the provided textual information and static frame objects to understand the class of associated objects. After such object detection, the system (10) identifies the specific odour corresponding to the detected object.
- the system (10) may access stored odour details for evaluation and selection.
- a database may automatically store details about each specific odour and the specific scenarios for its use.
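The odour evaluation described here can be illustrated with a minimal sketch. The label names and odour database entries below are invented for illustration; the patent does not specify a particular detector or schema. The sketch assumes an object detector has already returned class labels for the frame, and that textual information from the frame is available as a string.

```python
# Hypothetical odour database: maps detected object classes to an
# odour and the scenario it suits. Entries are illustrative only.
ODOUR_DATABASE = {
    "eucalyptus_tree": {"odour": "eucalyptus oil", "scenario": "forest road"},
    "rose": {"odour": "rose", "scenario": "garden scene"},
    "rain": {"odour": "petrichor", "scenario": "rainfall"},
}

def evaluate_odour(detected_labels, textual_info=""):
    """Return the odours consistent with the captured image frame.

    detected_labels: class labels returned by an object detector.
    textual_info: textual information extracted from the frame.
    """
    matches = []
    # Static frame objects: look up each detected class directly.
    for label in detected_labels:
        entry = ODOUR_DATABASE.get(label)
        if entry is not None and entry["odour"] not in matches:
            matches.append(entry["odour"])
    # Textual information can also hint at an odour scenario.
    for entry in ODOUR_DATABASE.values():
        if entry["scenario"] in textual_info and entry["odour"] not in matches:
            matches.append(entry["odour"])
    return matches
```

For example, detecting a eucalyptus tree in the frame would evaluate to the eucalyptus oil odour, matching the scenario described later for FIG. 2.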
- the system (10) also includes an odour stimuli module (30) operable by the at least one IoT device (70).
- the odour stimuli module (30) is operatively coupled to the video scene analysing module (20).
- the odour stimuli module (30) is configured to select an odour from a pre-stored set of odours in accordance with the evaluated odour.
- the pre-stored set of odours is stored in one or more canisters.
- the one or more canisters are affixed around the video displaying frame.
- the term “canister” refers to a round or cylindrical container used for storing such selected substances.
- selection of odour is realized by an odour matching technique.
- the odour matching technique includes selecting an odour after matching the pre-stored details of the set of odours in the one or more canisters with the evaluated odour details. It is pertinent to note that the one or more canisters store more than one type of odour, and for every odour, specific details about its situation of use may be stored.
- various geographical varieties of rose odour may be stored in the canisters.
- the odour stimuli module (30) may select the odour that matches with geographical details as portrayed by the captured image frame.
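The odour matching technique can be sketched as follows. The canister records and their fields (`canister_id`, `region`) are assumptions for illustration; the patent only requires that pre-stored canister details be matched against the evaluated odour details, optionally including geographical details such as the rose example above.

```python
# Hypothetical pre-stored canister details; values are illustrative.
CANISTERS = [
    {"canister_id": 1, "odour": "rose", "region": "Bulgaria"},
    {"canister_id": 2, "odour": "rose", "region": "India"},
    {"canister_id": 3, "odour": "eucalyptus oil", "region": None},
]

def match_odour(evaluated_odour, region=None):
    """Select the canister whose stored details match the evaluated odour.

    Prefers a canister matching the geographical details portrayed by
    the captured image frame, when one exists.
    """
    candidates = [c for c in CANISTERS if c["odour"] == evaluated_odour]
    if region is not None:
        regional = [c for c in candidates if c["region"] == region]
        if regional:
            return regional[0]
    return candidates[0] if candidates else None
```

Under these assumptions, matching “rose” for a scene set in India would select canister 2 rather than the Bulgarian variety.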
- the odour stimuli module (30) is also configured to disperse a selected odour from the one or more canisters.
- the dispersion of the selected odour is triggered in real time in-accordance to the captured image frame.
- the dispersion of the selected odour happens simultaneously with streaming of the image frame in the video, thereby providing enjoyable experience.
- the odour stimuli module (30) is also configured to disperse an odour neutralizer from one or more odour neutralizer canisters.
- the dispersion of the odour neutralizer is triggered after a pre-determined time interval with respect to the dispersion of the selected odour.
- the pre-determined time interval may be manipulated by the IoT device as required.
- the one or more odour neutralizer canisters are affixed around the video displaying frame. It is pertinent to note that the odour neutralizer eliminates the various types of odours released for specific scenes.
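The timing relationship between odour and neutralizer dispersion can be sketched as a simple schedule. The 10-second default delay is an assumed placeholder; the patent only states that the interval is pre-determined and manipulable by the IoT device.

```python
def build_dispersion_schedule(odour_events, neutralizer_delay=10.0):
    """Build a chronologically ordered list of dispersal actions.

    odour_events: list of (timestamp_seconds, odour_name) pairs, one
    per triggering image frame.
    neutralizer_delay: pre-determined interval (assumed value) after
    which the neutralizer follows each dispersed odour.
    """
    actions = []
    for t, odour in odour_events:
        actions.append((t, "disperse", odour))
        # Neutralizer clears the hall before the next odour event.
        actions.append((t + neutralizer_delay, "disperse", "neutralizer"))
    actions.sort(key=lambda action: action[0])
    return actions
```

An IoT device could adjust `neutralizer_delay` per scene, since the interval is described as manipulable as required.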
- the system (10) also includes a facial analysis module (40) comprising a second set of image capturing devices (80) communicatively coupled with the at least one IoT device (70).
- the facial analysis module (40) is operatively coupled with the odour stimuli module (30).
- the facial analysis module (40) is configured to capture a viewer facial expression image with the help of the second set of image capturing devices (80).
- the second set of image capturing devices are affixed, in the hall or premise showing the media content, in a manner that captures pictures of the faces of users/viewers/audience.
- the image capturing devices may include digital cameras of the required resolution for capturing facial expressions of the viewers. Facial expressions may be categorised as happy, neutral, average, below average, bad, and the like, as a representative of acceptance level (defined by maximum interaction instant).
- the facial analysis module (40) is also configured to analyse the viewer facial expression image by one or more emotion detection techniques to identify the acceptance level of the users/viewers/audience for the dispersed odour.
- the identification of the acceptance level enables recalibration or adjustment of the intensity of the dispersed odour by the odour stimuli module (30). Such recalibration of intensity refers to increasing or decreasing the spread of odour in real time.
- the identified acceptance level of the users/viewers/audience may be used to suggest an alternative fragrance/odour to the one being dispersed. This may happen where the acceptance level remains low even after recalibration. This helps in customising the odour to the liking of different sets of audiences in different regions or countries, making the experience more personalised and likable.
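The recalibration and alternative-odour logic can be sketched as below. The acceptance categories are taken from the description (happy, neutral, average, below average, bad); the intensity scale of 0–10 and the mapping from category to adjustment are assumptions, since the patent leaves the recalibration rule unspecified.

```python
def recalibrate(acceptance, intensity, already_recalibrated=False):
    """Return (new_intensity, suggest_alternative_odour).

    acceptance: one of "happy", "neutral", "average",
                "below average", "bad".
    intensity: current dispersion intensity on an assumed 0-10 scale.
    """
    if acceptance == "happy":
        return intensity, False                  # well received: keep as-is
    if acceptance in ("neutral", "average"):
        return min(intensity + 1, 10), False     # likely unnoticed: increase
    if already_recalibrated:
        return intensity, True                   # still low: suggest alternative
    return max(intensity - 1, 0), False          # likely irritating: decrease
```

The `suggest_alternative_odour` flag corresponds to the case described above, where acceptance stays low even after recalibration and a regionally preferred fragrance may be substituted.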
- the captured facial expression of a viewer is analysed to understand emotions while the viewer is experiencing the dispersed odour along with corresponding scene.
- various human body parameters are analysed for understanding the real time emotions.
- the body parameters include eye activity, motion analysis and the like.
- eye movement, lip movement, head movement, talking gestures etc. enable automatic detection of emotion.
- the maximum interaction instant is evaluated by analysing the viewer facial expression data sets.
- the facial analysis module (40) is also configured to analyse the viewer facial expression image by the one or more emotion detection techniques to detect maximum interaction instant with respect to the video scene of interest.
- the viewer facial expression is analysed continuously for detecting maximum interaction instant.
- the system (10) via the facial analysis module (40) enables detection of maximum interaction instant.
- the maximum interaction instant refers to the instant at which the viewer interacts most with the ongoing video scene.
- the facial analysis module (40) captures the viewer facial expression by the second set of image capturing devices (80) and further analyses the facial expression in accordance with human body parameters.
- the video scene of maximum interaction based on viewer facial expression is noted for further usage.
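Detecting the maximum interaction instant can be sketched as finding the peak of a continuous engagement signal. The engagement score itself is assumed to come from an emotion detection model analysing eye, lip, and head movement; the function and its signature are illustrative placeholders.

```python
def max_interaction_instant(expression_scores):
    """Return the timestamp at which viewer engagement peaks.

    expression_scores: list of (timestamp, engagement_score) pairs
    produced by continuous facial expression analysis.
    Returns None when no observations are available.
    """
    if not expression_scores:
        return None
    # The scene at the peak score is noted for holographic simulation.
    return max(expression_scores, key=lambda pair: pair[1])[0]
```

The timestamp returned here identifies the video scene that the holographic simulation module then recreates.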
- the system (10) also includes a holographic simulation module (50).
- the holographic simulation module (50) is operable by the at least one IoT device (70) and operatively coupled to the facial analysis module (40).
- the holographic simulation module (50) is configured to create a holographic simulation of analysed video scene of interest based on detected maximum interaction instant of the viewer.
- holography is a photographic technique that records the light scattered from an object, and then presents it in a way that appears three-dimensional.
- the holographic simulation module (50) is configured to present created holographic simulation for the viewer in real time along with the video scene.
- the created holographic simulation is presented via one or more simulated holographic devices.
- the at least one IoT device (70) enables creation and presentation of holographic scenes.
- the scenes may be projected through pre-fixed IoT devices (70) at pre-defined places in front of the video screen. It is pertinent to note that such presentation of holographic scenes in real time along with the ongoing video will enhance the viewer’s visual experience.
- FIG. 2 is a schematic representation of an embodiment representing the system (10) to create sensory stimuli events of FIG. 1 in accordance with an embodiment of the present disclosure.
- a viewer Y (100) is exposed to sensory stimuli events as presented during a movie screening at movie screen X (90).
- a set of cameras (60 and 80) are placed in conjunction with movie screen X (90).
- the set of cameras (60 and 80) capture images relating to screen X (90) as well as the facial expressions of viewer Y (100).
- IoT devices (70) are also placed in conjunction with the screen X (90) for functioning of the system (10).
- a video scene analysing module (20) enables capturing of an image frame of a video scene screened on the movie screen X (90).
- the video scene analysing module (20) further analyses the captured image frame by the object detection technique for a predefined duration to understand a relevant object being shown, for which a relevant odour could be released. For example, if the image frames or video depict a car passing along a road lined with eucalyptus trees for the predefined duration, the object detection technique analyses the image frames by image processing and detects instances of semantic objects of a certain class, such as trees, thereby evaluating that the odour of eucalyptus oil is required.
- the predefined duration may vary, and it is customisable. For example, in one instance it may be 8 to 10 seconds.
- an odour stimuli module (30) selects in real time a pre-stored odour from an adjoining canister. For selection of the odour, the odour stimuli module (30) uses the odour matching technique. In the above-stated example, the system (10) may access the stored details of a specific container to identify the odour details. If the details match the required odour, the canister sprays the eucalyptus odour in real time, thereby enhancing the real-time show experience of viewer Y (100) with the eucalyptus smell. The odour stimuli module (30) may further disperse an odour neutralizer to neutralize the smell before any other odour is dispersed. After a pre-defined time, the odour neutralizer may be dispersed accordingly.
- the video scene analysing module (20) may also select different objects or events being depicted on the screen such as, but not limited to, foods, flowers, rain, and the like.
- a facial analysis module (40) further enables understanding of the viewer Y (100) emotion by facial expression.
- the viewer Y (100) facial expression is captured during the ongoing screening on movie screen X (90).
- the viewer Y (100) facial expression is captured as the odour stimulus is introduced with the particular video stream.
- Analysis of the captured facial expression enables identification of odour acceptance level. Such identification enables recalibration of intensity of the dispersed odour.
- the viewer Y (100) facial expression may indicate whether the odour should be increased or decreased, or whether the detected object should even be used as a reference for dispersing the related odour.
- the captured facial expression may also indicate the instant at which the viewer Y (100) interacts more with screening video.
- the scene with which the viewer Y (100) interacts most is analysed.
- a holographic simulation module (50) creates a 3D holographic video of the analysed maximum interaction scene.
- for example, the holographic simulation module (50) enables creation of a simulation of snow-covered mountains or snowfall and its simultaneous presentation. Such presentation of the 3D simulation is enabled by various holographic devices.
- the video scene analysing module (20), the odour stimuli module (30), the facial analysis module (40), and the holographic simulation module (50) in FIG. 2 are substantially equivalent to the video scene analysing module (20), the odour stimuli module (30), the facial analysis module (40), and the holographic simulation module (50) of FIG. 1.
- FIG. 3 is a block diagram of a computer or a server (110) in accordance with an embodiment of the present disclosure.
- the server (110) includes processor(s) (140), and memory (120) coupled to the processor(s) (140).
- the processor(s) (140), as used herein, means any type of computational circuit, such as, but not limited to, a microprocessor, a microcontroller, a complex instruction set computing microprocessor, a reduced instruction set computing microprocessor, a very long instruction word microprocessor, an explicitly parallel instruction computing microprocessor, a digital signal processor, or any other type of processing circuit, or a combination thereof.
- the memory (120) includes a plurality of modules stored in the form of an executable program which instructs the processor(s) (140) via a bus (130) to perform the method steps illustrated in FIG. 4.
- the memory (120) has following modules: the video scene analysing module (20), the odour stimuli module (30), the facial analysis module (40) and holographic simulation module (50).
- the video scene analysing module (20) is configured to capture an image frame corresponding to a video scene of interest.
- the video scene analysing module (20) is also configured to analyse the captured image frame for odour evaluation.
- the odour stimuli module (30) is configured to select an odour from a pre-stored set of odour in-accordance to evaluated odour.
- the odour stimuli module (30) is also configured to disperse a selected odour from the one or more canisters.
- the odour stimuli module (30) is also configured to disperse an odour neutralizer from one or more odour neutralizer canisters.
- the facial analysis module (40) is configured to capture a viewer facial expression image.
- the facial analysis module (40) is also configured to analyse the viewer facial expression image by one or more emotion detection techniques to identify acceptance level of the dispersed odour.
- the facial analysis module (40) is also configured to analyse the viewer facial expression image by the one or more emotion detection techniques to detect maximum interaction instant with respect to the video scene of interest.
- the holographic simulation module (50) is configured to create a holographic simulation of analysed video scene of interest based on detected maximum interaction instant of the viewer.
- the holographic simulation module (50) is also configured to present created holographic simulation for the viewer in real time along with the video scene.
- Computer memory elements may include any suitable memory device(s) for storing data and executable program, such as read only memory, random access memory, erasable programmable read only memory, electrically erasable programmable read only memory, hard drive, removable media drive for handling memory cards and the like.
- Embodiments of the present subject matter may be implemented in conjunction with program modules, including functions, procedures, data structures, and application programs, for performing tasks, or defining abstract data types or low-level hardware contexts.
- Executable program stored on any of the above-mentioned storage media may be executable by the processor(s) (140).
- FIG. 4 is a flowchart representing the steps of a method (150) for creating sensory stimuli events in accordance with an embodiment of the present disclosure.
- the method (150) includes capturing an image frame corresponding to a video scene of interest in step 160.
- capturing the image frame corresponding to the video scene of interest includes capturing the image frame corresponding to the video scene of interest by a video scene analysing module.
- the method (150) also includes analysing a captured image frame for odour evaluation in step 170.
- analysing the captured image frame for odour evaluation includes analysing the captured image frame for odour evaluation by the video scene analysing module.
- analysing the captured image frame for odour evaluation includes analysing the captured image frame by object detection technique to realize the odour consistent with the captured image frame.
- the method (150) also includes selecting an odour from a pre-stored set of odours in accordance with the evaluated odour in step 180.
- selecting the odour from the pre-stored set of odours in accordance with the evaluated odour includes selecting the odour from the pre-stored set of odours by an odour stimuli module.
- selecting the odour from the pre-stored set of odours in accordance with the evaluated odour includes selecting the odour from the pre-stored set of odours by an odour matching technique.
- selecting the odour from the pre-stored set of odours in accordance with the evaluated odour includes selecting the odour by the odour matching technique comprising matching the pre-stored details of the pre-stored set of odours in one or more canisters.
- The method (150) also includes dispersing a selected odour from the one or more canisters in step 190.
- Dispersing the selected odour from the one or more canisters includes dispersing the selected odour by the odour stimuli module.
- Dispersing the selected odour from the one or more canisters includes dispersing an odour neutralizer after a pre-determined time interval with respect to the dispersion of the selected odour.
- The method (150) also includes dispersing an odour neutralizer from one or more odour neutralizer canisters in step 200.
- Dispersing the odour neutralizer from the one or more odour neutralizer canisters includes dispersing the odour neutralizer by the odour stimuli module.
- The method (150) also includes capturing a viewer facial expression image in step 210.
- Capturing the viewer facial expression image includes capturing the viewer facial expression image by a facial analysis module.
- The method (150) also includes analysing the viewer facial expression image by one or more emotion detection techniques to identify the acceptance level of the dispersed odour in step 220.
- Analysing the viewer facial expression image to identify the acceptance level of the dispersed odour includes analysing the viewer facial expression image by the facial analysis module.
- The method (150) also includes recalibrating intensity of the dispersed odour in step 230.
- Recalibrating the intensity of the dispersed odour includes recalibrating the intensity by the odour stimuli module.
- Recalibrating the intensity of the dispersed odour includes recalibrating the intensity in accordance with the acceptance level.
- The method (150) also includes analysing the viewer facial expression image by the one or more emotion detection techniques to detect a maximum interaction instant with respect to the video scene of interest in step 240.
- Analysing the viewer facial expression image to detect the maximum interaction instant includes analysing the viewer facial expression image by the facial analysis module.
- Analysing the viewer facial expression image to detect the maximum interaction instant includes analysing the viewer facial expression image continuously.
- The method (150) also includes creating a holographic simulation of the analysed video scene of interest based on the detected maximum interaction instant of the viewer in step 250.
- Creating the holographic simulation of the analysed video scene of interest includes creating the holographic simulation by a holographic simulation module.
- The method (150) also includes presenting the created holographic simulation for the viewer in real time along with the video scene in step 260.
- Presenting the created holographic simulation for the viewer in real time includes presenting the created holographic simulation by the holographic simulation module.
- Presenting the created holographic simulation for the viewer in real time includes presenting the created holographic simulation via one or more simulated holographic devices.
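The sequence of steps 160 to 260 above can be illustrated with a minimal Python sketch of the control loop. All function names, the expression-to-acceptance mapping, and the intensity arithmetic are illustrative assumptions for exposition only; they are not part of the disclosed implementation, in which each step is performed by the corresponding module.

```python
# Illustrative sketch of method (150): capture -> analyse -> disperse -> recalibrate.
# All module behaviours are stubbed; names and values are assumptions.

def evaluate_odour(frame):                  # steps 160-170: video scene analysing module
    return "eucalyptus" if "tree" in frame["objects"] else None

def select_odour(evaluated, canisters):     # step 180: odour matching technique
    return evaluated if evaluated in canisters else None

def acceptance_level(expression):           # steps 210-220: facial analysis module
    return {"happy": 1.0, "neutral": 0.5, "bad": 0.1}.get(expression, 0.5)

def run_scene(frame, expression, canisters, intensity=0.5):
    odour = select_odour(evaluate_odour(frame), canisters)
    if odour is None:
        return None, intensity
    # Steps 190-200 (dispersing odour, then neutralizer after a delay) are not modelled.
    # Step 230: recalibrate intensity in proportion to the viewer's acceptance level.
    intensity = max(0.1, min(1.0, intensity * (0.5 + acceptance_level(expression))))
    return odour, intensity

odour, intensity = run_scene(
    {"objects": ["car", "tree"]}, "happy", {"eucalyptus", "rose"})
print(odour, intensity)  # -> eucalyptus 0.75
```

The sketch omits the holographic steps 240 to 260, which are discussed separately in the detailed description.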
- The present disclosure enables creation of sensory stimuli events in accordance with multimedia content.
- The system provides an interactive digital entertainment ecosystem combining augmented reality, virtual reality, the Internet of Things, holographic techniques, and the like with artificial intelligence.
- The present invention solves the existing issues in the smell dispersion techniques implemented in present entertainment systems. Furthermore, it adds one more dimension of experience to present entertainment systems in the form of real time holographic projections.
Abstract
A system to create sensory stimuli events is disclosed. The system includes a video scene analysing module configured to analyse an image frame for odour evaluation. The system includes an odour stimuli module configured to select an odour from a pre-stored set of odours in accordance with the evaluated odour and disperse the selected odour from one or more canisters. The system includes a facial analysis module configured to analyse the viewer facial expression image by emotion detection techniques to identify the acceptance level of the dispersed odour and also detect a maximum interaction instant with respect to the video scene of interest. The system includes a holographic simulation module configured to create a holographic simulation of the analysed video scene of interest based on the detected maximum interaction instant of the viewer and to present the created holographic simulation for the viewer in real time along with the video scene.
Description
SYSTEM AND METHOD TO CREATE SENSORY STIMULI EVENTS
This International Application claims priority from a Complete patent application filed in India having Patent Application No. 202021022556, filed on May 29, 2020, and titled “SYSTEM AND METHOD TO CREATE SENSORY STIMULI EVENTS”.
FIELD OF INVENTION
Embodiments of the present disclosure relate to the creation of sensory stimuli experiences, and more particularly to a system to create sensory stimuli events in accordance with multimedia content and a method to operate the same.
BACKGROUND
Businesses like production studios, movie theatres, and amusement parks have long attempted to enhance interactive experiences by introducing various sensory stimuli, for example, engaging the audience via 3D to 7D experiences. To engage with the audience, the entertainment industry incorporates seat movements and special effects such as snow, wind, rain, and the like. Such inclusions help in relaxing and rejuvenating the mind, as people can completely immerse themselves in the ongoing show and enjoy it to the fullest.
In the recent past, smell has been introduced as an additional sensory stimulus, where the system is basically designed to release scents/fragrances/perfumes during the showing of a film so that viewers can experience a “smell” related to what is happening on the film screen. Such dispersed scents provide enhanced experiences that allow viewers to develop deeper memories and emotional connections, as they add one more dimension or stimulus to be experienced by the viewers.
Such scent dispersing systems lack real-time control over the release and dispersion of the fragrances. Moreover, the present systems are unable to control the intensity or concentration of a dispersed fragrance to the liking of the viewers. If the concentration of the fragrance is high, it becomes irritating, and if the concentration is low, it will not even be noticed by the viewers. This defeats the whole purpose of adding smell as an additional stimulus or experience for the user/viewer.
Since people have too many entertainment options at their disposal, they get bored very quickly with what they have now. Driven by basic human tendency, people always crave something new. It is about time that entertainment systems added something new to engage the audience/viewers.
Hence, there is a need for an improved system to create sensory stimuli events and a method to operate the same, in order to address the aforementioned issues.
BRIEF DESCRIPTION
In accordance with one embodiment of the disclosure, a system to create sensory stimuli events is disclosed. The system includes a video scene analysing module. The video scene analysing module comprises a first set of image capturing devices communicatively coupled with at least one IoT device. The video scene analysing module is configured to capture an image frame corresponding to a video scene of interest. The video scene analysing module is also configured to analyse the captured image frame for odour evaluation.
The system also includes an odour stimuli module operable by the at least one IoT device. The odour stimuli module is operatively coupled to the video scene analysing module. The odour stimuli module is configured to select an odour from a pre-stored set of odours in accordance with the evaluated odour. The odour stimuli module is also configured to disperse the selected odour from one or more canisters. The odour stimuli module is also configured to disperse an odour neutralizer from one or more odour neutralizer canisters.
The system also includes a facial analysis module. The facial analysis module comprises a second set of image capturing devices communicatively coupled with the at least one IoT device. The facial analysis module is configured to capture a viewer facial expression image. The facial analysis module is also configured to analyse the viewer facial expression image by one or more emotion detection techniques to identify the acceptance level of the dispersed odour. The facial analysis module is also configured to analyse the viewer facial expression image by the one or more emotion detection techniques to detect a maximum interaction instant with respect to the video scene of interest.
The system also includes a holographic simulation module operable by the at least one IoT device. The holographic simulation module is operatively coupled to the facial analysis module. The holographic simulation module is configured to create a holographic simulation of the analysed video scene of interest based on the detected maximum interaction instant of the viewer. The holographic simulation module is also configured to present the created holographic simulation for the viewer in real time along with the video scene.
In accordance with one embodiment of the disclosure, a method for creating sensory stimuli events is disclosed. The method includes capturing an image frame corresponding to a video scene of interest. The method also includes analysing a captured image frame for odour evaluation. The method also includes selecting an odour from a pre-stored set of odours in accordance with the evaluated odour.
The method also includes dispersing the selected odour from one or more canisters. The method also includes dispersing an odour neutralizer from one or more odour neutralizer canisters. The method also includes capturing a viewer facial expression image. The method also includes analysing the viewer facial expression image by one or more emotion detection techniques to identify the acceptance level of the dispersed odour.
The method also includes recalibrating the intensity of the dispersed odour. The method also includes analysing the viewer facial expression image by the one or more emotion detection techniques to detect a maximum interaction instant with respect to the video scene of interest. The method also includes creating a holographic simulation of the analysed video scene of interest based on the detected maximum interaction instant of the viewer. The method also includes presenting the created holographic simulation for the viewer in real time along with the video scene.
To further clarify the advantages and features of the present disclosure, a more particular description of the disclosure will follow by reference to specific embodiments thereof, which are illustrated in the appended figures. It is to be appreciated that these figures depict only typical embodiments of the disclosure and are therefore not to be considered limiting in scope. The disclosure will be described and explained with additional specificity and detail with the appended figures.
BRIEF DESCRIPTION OF THE DRAWINGS
The disclosure will be described and explained with additional specificity and detail with the accompanying figures in which:
FIG. 1 is a block diagram representation of a system to create sensory stimuli events in accordance with an embodiment of the present disclosure;
FIG. 2 is a schematic representation of an embodiment representing the system to create sensory stimuli events of FIG. 1 in accordance of an embodiment of the present disclosure;
FIG. 3 is a block diagram of a computer or a server in accordance with an embodiment of the present disclosure; and
FIG. 4 is a flowchart representing the steps of a method for creating sensory stimuli events in accordance with an embodiment of the present disclosure.
Further, those skilled in the art will appreciate that elements in the figures are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the figures by conventional symbols, and the figures may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the figures with details that will be readily apparent to those skilled in the art having the benefit of the description herein.
DETAILED DESCRIPTION
For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiment illustrated in the figures and specific language will be used to describe them. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Such alterations and further modifications in the illustrated online platform, and such further applications of the principles of the disclosure as would normally occur to those skilled in the art are to be construed as being within the scope of the present disclosure.
The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more devices or subsystems or elements or structures or components preceded by "comprises... a" does not, without more constraints, preclude the existence of other devices, subsystems, elements, structures, components, additional devices, additional subsystems, additional elements, additional structures or additional components. Appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but not necessarily do, all refer to the same embodiment.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this disclosure belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
In the following specification and the claims, reference will be made to a number of terms, which shall be defined to have the following meanings. The singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise.
Embodiments of the present disclosure relate to a system to create sensory stimuli events. The system includes a video scene analysing module. The video scene analysing module comprises a first set of image capturing devices communicatively coupled with at least one IoT device. The video scene analysing module is configured to capture an image frame corresponding to a video scene of interest. The video scene analysing module is also configured to analyse the captured image frame for odour evaluation.
The system also includes an odour stimuli module operable by the at least one IoT device. The odour stimuli module is operatively coupled to the video scene analysing module. The odour stimuli module is configured to select an odour from a pre-stored set of odours in accordance with the evaluated odour. The odour stimuli module is also configured to disperse the selected odour from one or more canisters. The odour stimuli module is also configured to disperse an odour neutralizer from one or more odour neutralizer canisters.
The system also includes a facial analysis module. The facial analysis module comprises a second set of image capturing devices communicatively coupled with the at least one IoT device. The facial analysis module is configured to capture a viewer facial expression image. The facial analysis module is also configured to analyse the viewer facial expression image by one or more emotion detection techniques to identify the acceptance level of the dispersed odour. The facial analysis module is also configured to analyse the viewer facial expression image by the one or more emotion detection techniques to detect a maximum interaction instant with respect to the video scene of interest.
The system also includes a holographic simulation module operable by the at least one IoT device. The holographic simulation module is operatively coupled to the facial analysis module. The holographic simulation module is configured to create a holographic simulation of the analysed video scene of interest based on the detected maximum interaction instant of the viewer. The holographic simulation module is also configured to present the created holographic simulation for the viewer in real time along with the video scene.
FIG. 1 is a block diagram representation of a system (10) to create sensory stimuli events in accordance with an embodiment of the present disclosure. As used herein, the term “sensory stimulus” refers to any event or object that is received by the senses and elicits a response from a person.
Smell and vision are powerful senses that may evoke different feelings and memories, as well as impact a viewer's experience during any transaction. While watching any video in a show theatre, techniques enhancing a viewer's smell or visual senses will surely affect the movie experience. For such realization, the system (10) uses various sets of cameras along with various IoT devices.
The system (10) includes a video scene analysing module (20). The video scene analysing module (20) comprises a first set of image capturing devices (60) communicatively coupled with at least one IoT device (70). As used herein, the term “Internet of Things (IoT)” refers to a system of interrelated computing devices, mechanical and digital machines provided with unique identifiers (UIDs) and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction. In one embodiment, the first set of image capturing devices (60) may be any cameras such as smartphone cameras, film cameras, point and shoot cameras, and the like. In such embodiment, the first set of image capturing devices (60) are affixed around a video displaying frame.
The video scene analysing module (20) is configured to capture an image frame corresponding to a video scene of interest. In one embodiment, the captured image frame includes a set of static image frames and textual information. In one exemplary embodiment, as the movie video is streaming in the display frame, the first set of cameras (60) captures multiple image frames for further understanding of the scene.
The video scene analysing module (20) is also configured to analyse the captured image frame for odour evaluation. The captured image frame is analysed by an object detection technique to determine the odour consistent with the captured image frame. In the above stated exemplary embodiment, each of the captured image frames is analysed by the object detection technique. In such embodiment, the technique analyses the textual information as well as the static frame objects to understand an odour.
As used herein, the “object detection technique” refers to a technology related to computer vision which deals with detecting instances of semantic objects of a certain class (such as humans, buildings, or vehicles) in digital videos and images. In such embodiment, the system (10) analyses the textual information and the static frame objects provided to understand the class of associated objects. After such object detection, the system (10) enables understanding of the specific odour corresponding to the detected object. In one specific embodiment, after analysing, the system (10) may access stored odour details for evaluation and selection. In such embodiment, a database may automatically store details about specific odours and the specific scenarios in which they are used.
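The lookup from detected object classes (and frame text) to an evaluated odour can be sketched as follows. The class labels, the `ODOUR_DATABASE` mapping, and the function name are illustrative assumptions, not part of the disclosure; a real deployment would feed the output of an actual object detector into such a lookup.

```python
# Hypothetical mapping from detected object classes to odours.
ODOUR_DATABASE = {
    "rose": "rose fragrance",
    "sea": "salty sea breeze",
    "eucalyptus_tree": "eucalyptus oil",
}

def evaluate_odour(detections, caption=""):
    """Combine detected object class labels with textual information from the
    frame and look each candidate up in the stored odour database."""
    candidates = list(detections) + caption.lower().split()
    for label in candidates:
        if label in ODOUR_DATABASE:
            return ODOUR_DATABASE[label]
    return None  # no odour consistent with this frame

print(evaluate_odour(["car", "eucalyptus_tree"]))  # -> eucalyptus oil
```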
The system (10) also includes an odour stimuli module (30) operable by the at least one IoT device (70). The odour stimuli module (30) is operatively coupled to the video scene analysing module (20). The odour stimuli module (30) is configured to select an odour from a pre-stored set of odours in accordance with the evaluated odour. In one embodiment, the pre-stored set of odours is stored in one or more canisters. In such embodiment, the one or more canisters are affixed around the video displaying frame.
As used herein, the term “canister” refers to a round or cylindrical container used for storing selected substances.
Furthermore, in one specific embodiment, selection of the odour is realized by an odour matching technique. In such embodiment, the odour matching technique includes selecting an odour after matching pre-stored details of the pre-stored set of odours in the one or more canisters with the evaluated odour details. It is pertinent to note that the one or more canisters store more than one type of odour, and for every odour, specific details about its situations of use might be stored.
For example, various geographical types of rose odour may be stored in the canisters. After analysing the captured image frame, the odour stimuli module (30) may select the odour that matches the geographical details portrayed by the captured image frame.
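The odour matching technique, matching evaluated odour details against the pre-stored details of each canister (such as the geographical rose variants above), can be sketched as below. The field names, canister records, and fallback rule are illustrative assumptions only.

```python
# Sketch of the odour matching technique: evaluated odour details are matched
# against pre-stored details of each canister. Records are hypothetical.
CANISTERS = [
    {"id": 1, "odour": "rose", "region": "damask"},
    {"id": 2, "odour": "rose", "region": "alpine"},
    {"id": 3, "odour": "eucalyptus", "region": "australian"},
]

def match_canister(evaluated):
    # Prefer an exact match on both the odour family and its regional detail.
    for canister in CANISTERS:
        if (canister["odour"] == evaluated["odour"]
                and canister["region"] == evaluated.get("region")):
            return canister["id"]
    # Fall back to the first canister holding the right odour family.
    for canister in CANISTERS:
        if canister["odour"] == evaluated["odour"]:
            return canister["id"]
    return None

print(match_canister({"odour": "rose", "region": "alpine"}))  # -> 2
```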
Moreover, the odour stimuli module (30) is also configured to disperse the selected odour from the one or more canisters. The dispersion of the selected odour is triggered in real time in accordance with the captured image frame. In one specific embodiment, the dispersion of the selected odour happens simultaneously with the streaming of the image frame in the video, thereby providing an enjoyable experience.
The odour stimuli module (30) is also configured to disperse an odour neutralizer from one or more odour neutralizer canisters. In one embodiment, the dispersion of the odour neutralizer is triggered after a pre-determined time interval with respect to the dispersion of the selected odour. In another embodiment, the pre-determined time interval may be manipulated by the IoT device as required. In such embodiment, the one or more odour neutralizer canisters are affixed around the video displaying frame. It is pertinent to note that the odour neutralizer eliminates the variety of odours released according to specific scenes.
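The timing relationship above — neutralizer dispersion scheduled a pre-determined, IoT-adjustable interval after each odour dispersion — can be sketched with simulated timestamps. The class name, the default delay, and the event representation are assumptions; no real hardware or clock is modelled.

```python
# Sketch of neutralizer scheduling: the neutralizer is dispersed a
# pre-determined (IoT-configurable) interval after each odour dispersion.
# Times are simulated seconds, not a real-time clock.

class OdourStimuliModule:
    def __init__(self, neutralizer_delay=30):
        self.neutralizer_delay = neutralizer_delay  # adjustable via the IoT device
        self.events = []  # (time, kind, payload) tuples

    def disperse(self, odour, now):
        self.events.append((now, "odour", odour))
        self.events.append((now + self.neutralizer_delay, "neutralizer", None))

    def due(self, now):
        """Return every scheduled event whose time has arrived."""
        return [e for e in self.events if e[0] <= now]

module = OdourStimuliModule(neutralizer_delay=10)
module.disperse("eucalyptus", now=0)
print([kind for _, kind, _ in module.due(10)])  # -> ['odour', 'neutralizer']
```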
The system (10) also includes a facial analysis module (40) comprising a second set of image capturing devices (80) communicatively coupled with the at least one IoT device (70). The facial analysis module (40) is operatively coupled with the odour stimuli module (30). The facial analysis module (40) is configured to capture a viewer facial expression image with the help of the second set of image capturing devices (80).
In one embodiment, the second set of image capturing devices are affixed, in the hall or premise showing the media content, in a manner to capture pictures of the faces of the users/viewers/audience. In such embodiment, the image capturing devices may include digital cameras of the required resolution for capturing the facial expressions of the viewers. A facial expression may be categorised as happy, neutral, average, below average, bad, and the like as a representative of the acceptance level (defined by the maximum interaction instant).
Furthermore, the facial analysis module (40) is also configured to analyse the viewer facial expression image by one or more emotion detection techniques to identify the acceptance level of the users/viewers/audience for the dispersed odour. In one embodiment, the identification of the acceptance level enables recalibration or adjustment of the intensity of the dispersed odour by the odour stimuli module (30). Such recalibration of intensity refers to an increase or decrease of the spreading odour in real time. Further, in an embodiment, the identified acceptance level of the users/viewers/audience may be used to suggest an alternative fragrance/odour to the one being dispersed. This may happen where the acceptance level of the users/viewers/audience remains low even after recalibration. This helps in customising the odour with respect to the liking of different sets of audiences from different regions or countries, making the experience more personalised and likable.
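The recalibration and alternative-fragrance logic described above can be illustrated with a minimal sketch. The acceptance scale (0.0 rejected to 1.0 fully accepted), the thresholds, the step sizes, and the retry count are all illustrative assumptions rather than values from the disclosure.

```python
# Sketch of intensity recalibration from the identified acceptance level.
# Thresholds and step sizes are assumptions for illustration.

def recalibrate(intensity, acceptance, low=0.3, high=0.7):
    """Decrease intensity when the odour irritates, raise it when well received."""
    if acceptance < low:
        return max(0.0, round(intensity - 0.2, 2))  # viewer dislikes it: reduce
    if acceptance > high:
        return min(1.0, round(intensity + 0.1, 2))  # well received: may be raised
    return intensity

def maybe_switch_fragrance(acceptance, attempts, max_attempts=2):
    """Suggest an alternative odour if acceptance stays low after recalibration."""
    return acceptance < 0.3 and attempts >= max_attempts

print(recalibrate(0.5, 0.1))           # -> 0.3
print(maybe_switch_fragrance(0.2, 2))  # -> True
```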
In another embodiment, the captured facial expression of a viewer is analysed to understand emotions while the viewer is experiencing the dispersed odour along with the corresponding scene. In such embodiment, various human body parameters are analysed for understanding the real time emotions. The body parameters include eye activity, motion analysis, and the like. In one particular embodiment, eye movement, lip movement, head movement, talking gestures, etc. enable automatic detection of emotion. The maximum interaction instant is evaluated by analysing the viewer facial expression data sets.
Moreover, the facial analysis module (40) is also configured to analyse the viewer facial expression image by the one or more emotion detection techniques to detect the maximum interaction instant with respect to the video scene of interest. In one embodiment, the viewer facial expression is analysed continuously for detecting the maximum interaction instant. In one specific embodiment, during continuous image capturing of the facial expression, the system (10), via the facial analysis module (40), enables detection of the maximum interaction instant. In such embodiment, the maximum interaction instant refers to the instant at which the viewer interacts most with the ongoing video scene.
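Continuous detection of the maximum interaction instant amounts to scoring each captured expression and retaining the timestamp with the highest score, which can be sketched as below. The score table mirrors the categories mentioned earlier (happy, neutral, average, below average, bad), but the numeric values and function name are assumptions.

```python
# Sketch of detecting the maximum interaction instant: facial expressions are
# scored continuously and the timestamp of the highest score is retained.
EXPRESSION_SCORE = {"bad": 0, "below_average": 1, "average": 2,
                    "neutral": 3, "happy": 4}

def max_interaction_instant(expression_stream):
    """expression_stream: iterable of (timestamp, expression_label) pairs."""
    best_time, best_score = None, -1
    for timestamp, label in expression_stream:
        score = EXPRESSION_SCORE.get(label, 0)
        if score > best_score:
            best_time, best_score = timestamp, score
    return best_time

stream = [(1.0, "neutral"), (2.5, "happy"), (4.0, "average")]
print(max_interaction_instant(stream))  # -> 2.5
```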
The facial analysis module (40) captures the viewer facial expression by the second set of image capturing devices (80) and further analyses the facial expression in accordance with human body parameters. The video scene of maximum interaction based on the viewer facial expression is noted for further usage.
The system (10) also includes a holographic simulation module (50). The holographic simulation module (50) is operable by the at least one IoT device (70) and operatively coupled to the facial analysis module (40). The holographic simulation module (50) is configured to create a holographic simulation of the analysed video scene of interest based on the detected maximum interaction instant of the viewer. As used herein, the term “holography” refers to a photographic technique that records the light scattered from an object, and then presents it in a way that appears three-dimensional.
The holographic simulation module (50) is configured to present the created holographic simulation for the viewer in real time along with the video scene. In one embodiment, the created holographic simulation is presented via one or more simulated holographic devices.
In one embodiment, the at least one IoT device (70) enables creation and presentation of holographic scenes. The scenes may be projected through pre-fixed IoT devices (70) at pre-defined places before the video screen. It is pertinent to note that such presentation of holographic scenes in real time along with ongoing videos will increase the viewer's visual experience.
In one specific embodiment, the system (10) has previously analysed the scenes with which any viewer had maximum interaction. During the screening of the same scene, the system (10), with the help of the holographic simulation module (50), enables presentation of a 3D hologram corresponding to that video scene. Such presentation may enable simultaneous visual interaction between the viewers and the video screen.
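The behaviour just described — noting maximum-interaction scenes and presenting a hologram when such a scene is screened again — can be sketched as follows. The class, scene identifiers, and return format are hypothetical names introduced only for this illustration.

```python
# Sketch of triggering a holographic presentation when a previously noted
# maximum interaction scene is screened again. Scene IDs are hypothetical.

class HolographicSimulationModule:
    def __init__(self, max_interaction_scenes):
        self.scenes = set(max_interaction_scenes)  # scenes noted by facial analysis
        self.presented = []

    def on_scene(self, scene_id):
        """Present the 3D hologram alongside the video for noted scenes only."""
        if scene_id in self.scenes:
            self.presented.append(scene_id)
            return f"hologram:{scene_id}"
        return None

module = HolographicSimulationModule({"snow_hills"})
print(module.on_scene("car_chase"))   # -> None
print(module.on_scene("snow_hills"))  # -> hologram:snow_hills
```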
FIG. 2 is a schematic representation of an embodiment representing the system (10) to create sensory stimuli events of FIG. 1 in accordance with an embodiment of the present disclosure. In one exemplary embodiment, a viewer Y (100) is exposed to sensory stimuli events as presented during a movie screening at movie screen X (90). Sets of cameras (60 and 80) are placed in conjunction with movie screen X (90). The sets of cameras (60 and 80) capture images relating to screen X (90) as well as the facial expression of viewer Y (100). Further, IoT devices (70) are also placed in conjunction with the screen X (90) for the functioning of the system (10).
A video scene analysing module (20) enables capturing of an image frame of a video scene screened on the movie screen X (90). The video scene analysing module (20) further analyses the captured image frame by the object detection technique for a predefined duration to understand a relevant object being shown for which a relevant odour could be released. For example, if the image frames or video depict a car passing through a road covered with eucalyptus trees for the predefined duration, the object detection technique analyses the image frames by image processing and detects instances of semantic objects of a certain class, like trees, thereby evaluating that the odour of eucalyptus oil is required. The predefined duration may vary and is customisable; for example, in one instance it may be 8 to 10 seconds.
After odour evaluation, an odour stimuli module (30) selects in real time a pre-stored odour from an adjoining canister. For selection of the odour, the odour stimuli module (30) uses the odour matching technique. In the above stated example, the system (10) may access the stored details of a specific container to identify the odour details. If the details match the required odour, the canister sprays the eucalyptus odour in real time, thereby enhancing the real time show experience of viewer Y (100) with the eucalyptus smell. The odour stimuli module (30) may further disperse the odour neutralizer for neutralizing the smell before any other odour is dispersed. After the pre-defined time, the odour neutralizer may be dispersed accordingly.
The video scene analysing module (20) may also select different objects or events being depicted on the screen such as, but not limited to, foods, flowers, rain, and the like.
A facial analysis module (40) further enables understanding of the viewer Y (100) emotion by facial expression. In the above stated example, the facial expression of viewer Y (100) is captured by another camera (80) during the ongoing screening on movie screen X (90). Here, the viewer Y (100) face expression is captured as the odour stimulus is introduced with the particular video stream. Analysis of the captured facial expression enables identification of the odour acceptance level. Such identification enables recalibration of the intensity of the dispersed odour. The viewer Y (100) facial expression may indicate whether the odour should be increased or decreased, or whether the detected object should even be used as a reference for dispersing the related odour.
Moreover, in the above example, the captured facial expression may also indicate the instants at which viewer Y (100) interacts most with the screened video. The scene with which viewer Y (100) interacts most is analysed, and a holographic simulation module (50) creates a 3D holographic video of that maximum-interaction scene. In accordance with the above exemplary embodiment, as viewer Y (100) enjoys snow-covered hills along with the odour stimulus, the holographic simulation module (50) creates a simulation of snow-covered mountains or snowfall and presents it simultaneously. Such presentation of the 3D simulation is enabled by various holographic devices.
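Detecting the "maximum interaction instant" can be sketched as a running argmax over engagement scores derived from the continuously analysed expressions. The score table is an invented placeholder; the disclosure leaves the scoring method open.

```python
# Assumed engagement score per detected emotion; in practice this would
# come from whatever emotion detection technique the system employs.
ENGAGEMENT = {"surprise": 0.9, "happy": 0.8, "neutral": 0.3, "bored": 0.1}

def max_interaction_instant(timeline):
    """timeline: list of (timestamp_s, emotion) pairs sampled while the
    video plays. Returns the timestamp with the highest engagement, so
    the corresponding scene can be handed to the holographic module."""
    best_t, best_score = None, float("-inf")
    for t, emotion in timeline:
        score = ENGAGEMENT.get(emotion, 0.0)
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```

The returned timestamp identifies the scene (e.g. the snow-covered hills) that the holographic simulation module would reproduce.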
The video scene analysing module (20), the odour stimuli module (30), the facial analysis module (40) and the holographic simulation module (50) in FIG. 2 are substantially equivalent to the video scene analysing module (20), the odour stimuli module (30), the facial analysis module (40) and the holographic simulation module (50) of FIG. 1.
FIG. 3 is a block diagram of a computer or a server (110) in accordance with an embodiment of the present disclosure. The server (110) includes processor(s) (140), and memory (120) coupled to the processor(s) (140).
The processor(s) (140), as used herein, means any type of computational circuit, such as, but not limited to, a microprocessor, a microcontroller, a complex instruction set computing microprocessor, a reduced instruction set computing microprocessor, a very long instruction word microprocessor, an explicitly parallel instruction computing microprocessor, a digital signal processor, or any other type of processing circuit, or a combination thereof.
The memory (120) includes a plurality of modules stored in the form of an executable program which instructs the processor (140) via a bus (130) to perform the method steps illustrated in FIG. 4. The memory (120) has the following modules: the video scene analysing module (20), the odour stimuli module (30), the facial analysis module (40) and the holographic simulation module (50).
The video scene analysing module (20) is configured to capture an image frame corresponding to a video scene of interest. The video scene analysing module (20) is also configured to analyse the captured image frame for odour evaluation.
The odour stimuli module (30) is configured to select an odour from a pre-stored set of odours in accordance with the evaluated odour. The odour stimuli module (30) is also configured to disperse a selected odour from the one or more canisters. The odour stimuli module (30) is also configured to disperse an odour neutralizer from one or more odour neutralizer canisters.
The facial analysis module (40) is configured to capture a viewer facial expression image. The facial analysis module (40) is also configured to analyse the viewer facial expression image by one or more emotion detection techniques to identify acceptance level of the dispersed odour. The facial analysis module (40) is also configured to analyse the viewer facial expression image by the one or more emotion detection techniques to detect maximum interaction instant with respect to the video scene of interest.
The holographic simulation module (50) is configured to create a holographic simulation of analysed video scene of interest based on detected maximum interaction instant of the viewer. The holographic simulation module (50) is also configured to present created holographic simulation for the viewer in real time along with the video scene.
Computer memory elements may include any suitable memory device(s) for storing data and executable program, such as read only memory, random access memory, erasable programmable read only memory, electrically erasable programmable read only memory, hard drives, removable media drives for handling memory cards and the like. Embodiments of the present subject matter may be implemented in conjunction with program modules, including functions, procedures, data structures, and application programs, for performing tasks, or defining abstract data types or low-level hardware contexts. The executable program stored on any of the above-mentioned storage media may be executable by the processor(s) (140).

FIG. 4 is a flowchart representing the steps of a method (150) for creating sensory stimuli events in accordance with an embodiment of the present disclosure. The method (150) includes capturing an image frame corresponding to a video scene of interest in step 160. In one embodiment, capturing the image frame corresponding to the video scene of interest includes capturing the image frame corresponding to the video scene of interest by a video scene analysing module.
The method (150) also includes analysing a captured image frame for odour evaluation in step 170. In one embodiment, analysing the captured image frame for odour evaluation includes analysing the captured image frame for odour evaluation by the video scene analysing module. In another embodiment, analysing the captured image frame for odour evaluation includes analysing the captured image frame by an object detection technique to realize the odour consistent with the captured image frame.
The method (150) also includes selecting an odour from a pre-stored set of odours in accordance with the evaluated odour in step 180. In one embodiment, selecting the odour from the pre-stored set of odours in accordance with the evaluated odour includes selecting the odour from the pre-stored set of odours in accordance with the evaluated odour by an odour stimuli module. In another embodiment, selecting the odour from the pre-stored set of odours in accordance with the evaluated odour includes selecting the odour from the pre-stored set of odours by an odour matching technique. In yet another embodiment, selecting the odour from the pre-stored set of odours in accordance with the evaluated odour includes selecting the odour by the odour matching technique comprising matching of pre-stored details of the pre-stored set of odours in one or more canisters.
The method (150) also includes dispersing a selected odour from the one or more canisters in step 190. In one embodiment, dispersing the selected odour from the one or more canisters includes dispersing the selected odour from the one or more canisters by the odour stimuli module. In another embodiment, dispersing the selected odour from the one or more canisters includes dispersing the odour neutralizer after a pre-determined time interval with respect to the dispersion of the selected odour.
The method (150) also includes dispersing an odour neutralizer from one or more odour neutralizer canisters in step 200. In one embodiment, dispersing the odour neutralizer from the one or more odour neutralizer canisters includes dispersing the odour neutralizer from the one or more odour neutralizer canisters by the odour stimuli module.
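The timing relationship between steps 190 and 200 can be sketched as a small event schedule: the neutralizer dispersion is queued a pre-determined interval after the odour dispersion. The delay value is an assumption for illustration; the disclosure only requires that the interval be pre-determined.

```python
# Assumed pre-determined interval, in seconds, between odour dispersion
# (step 190) and neutralizer dispersion (step 200).
NEUTRALIZER_DELAY_S = 15

def schedule_dispersions(odour_time_s, delay_s=NEUTRALIZER_DELAY_S):
    """Return the ordered (time_s, action) events for one odour cycle:
    the odour spray, followed by the neutralizer after the delay."""
    return [
        (odour_time_s, "disperse_odour"),
        (odour_time_s + delay_s, "disperse_neutralizer"),
    ]
```

An actual controller would feed these events to whatever actuates the canisters; dispersing the neutralizer before the next odour keeps scents from mixing.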
The method (150) also includes capturing a viewer facial expression image in step 210. In one embodiment, capturing the viewer facial expression image includes capturing the viewer facial expression image by a facial analysis module.
The method (150) also includes analysing the viewer facial expression image by one or more emotion detection techniques to identify the acceptance level of the dispersed odour in step 220. In one embodiment, analysing the viewer facial expression image by the one or more emotion detection techniques to identify the acceptance level of the dispersed odour includes analysing the viewer facial expression image by the one or more emotion detection techniques to identify the acceptance level of the dispersed odour by the facial analysis module.
The method (150) also includes recalibrating intensity of the dispersed odour in step 230. In one embodiment, recalibrating the intensity of the dispersed odour includes recalibrating the intensity of the dispersed odour by the odour stimuli module. In another embodiment, recalibrating the intensity of the dispersed odour includes recalibrating the intensity of the dispersed odour in accordance with the acceptance level.

The method (150) also includes analysing the viewer facial expression image by the one or more emotion detection techniques to detect maximum interaction instant with respect to the video scene of interest in step 240. In one embodiment, analysing the viewer facial expression image by the one or more emotion detection techniques to detect maximum interaction instant with respect to the video scene of interest includes analysing the viewer facial expression image by the one or more emotion detection techniques to detect maximum interaction instant with respect to the video scene of interest by the facial analysis module.
In another embodiment, analysing the viewer facial expression image by the one or more emotion detection techniques to detect maximum interaction instant with respect
to the video scene of interest includes analysing the viewer facial expression image continuously for detecting maximum interaction instant.
The method (150) also includes creating a holographic simulation of analysed video scene of interest based on detected maximum interaction instant of the viewer in step 250. In one embodiment, creating the holographic simulation of the analysed video scene of interest based on the detected maximum interaction instant of the viewer includes creating the holographic simulation of the analysed video scene of interest based on the detected maximum interaction instant of the viewer by a holographic simulation module.
The method (150) also includes presenting created holographic simulation for the viewer in real time along with the video scene in step 260. In one embodiment, presenting the created holographic simulation for the viewer in real time along with the video scene includes presenting the created holographic simulation for the viewer in real time along with the video scene by the holographic simulation module. In another embodiment, presenting the created holographic simulation for the viewer in real time along with the video scene includes presenting the created holographic simulation via one or more simulated holographic devices.
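The method steps above can be read as one pipeline. This sketch wires trivial stand-ins for each module together in the order of the flowchart (steps 160 through 260); every decision rule below is an invented placeholder, not the disclosed implementation.

```python
def method_150(frame_objects, viewer_emotion):
    """Run one illustrative cycle of method (150) and return an event log.

    frame_objects: set of object labels assumed detected in the frame.
    viewer_emotion: emotion label assumed detected from the viewer's face.
    """
    events = []
    # Steps 160-170: capture and analyse the frame (placeholder rule).
    odour = "eucalyptus_oil" if "eucalyptus_tree" in frame_objects else None
    # Steps 180-200: select and disperse the odour, queue the neutralizer.
    if odour is not None:
        events += [("disperse", odour), ("neutralize", "after_delay")]
    # Steps 210-230: read the viewer's face and recalibrate intensity.
    acceptance = {"happy": 1.0, "disgust": 0.0}.get(viewer_emotion, 0.5)
    events.append(("intensity", acceptance))
    # Steps 240-260: pick the peak-interaction scene and present a hologram.
    if acceptance > 0.5:
        events.append(("hologram", tuple(sorted(frame_objects))))
    return events
```

Running it on the eucalyptus example from the description yields a disperse event, a neutralizer event, an intensity update, and a hologram of the scene; a rejected odour yields only the zeroed intensity.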
The present disclosure enables creation of sensory stimuli events in accordance with multimedia content. The system provides an interactive digital entertainment ecosystem combining augmented reality, virtual reality, the Internet of Things, holographic techniques and the like with artificial intelligence. The present invention solves the existing issues in the smell dispersion techniques implemented in present entertainment systems. Furthermore, it adds one more dimension of experience to present entertainment systems in the form of real-time holographic projections.
While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein.
The figures and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be
split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, the order of processes described herein may be changed and is not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples.
Claims
WE CLAIM:
1. A system (10) to create sensory stimuli events, comprising: a video scene analysing module (20) comprising a first set of image capturing devices (60) communicatively coupled with at least one IoT device (70), wherein the video scene analysing module is configured to: capture an image frame corresponding to a video scene of interest; and analyse captured image frame for odour evaluation, wherein the captured image frame is analysed by object detection technique to realize the odour consistent with the captured image frame; an odour stimuli module (30) operable by the at least one IoT device (70) and operatively coupled to the video scene analysing module (20), wherein the odour stimuli module (30) is configured to: select an odour from a pre-stored set of odours in accordance with the evaluated odour, wherein the pre-stored set of odours is stored in one or more canisters, wherein selection of odour is realized by an odour matching technique; disperse a selected odour from the one or more canisters, wherein the dispersion of the selected odour is triggered in real time in accordance with the captured image frame; disperse an odour neutralizer from one or more odour neutralizer canisters, wherein the dispersion of the odour neutralizer is triggered after a pre-determined time interval with respect to the dispersion of the selected odour; and a facial analysis module (40) comprising a second set of image capturing devices (80) communicatively coupled with the at least one IoT device (70), wherein the facial analysis module (40) is configured to: capture a viewer facial expression image;
analyse the viewer facial expression image by one or more emotion detection techniques to identify acceptance level of the dispersed odour, wherein the identification of the acceptance level enables recalibration of intensity of the dispersed odour by the odour stimuli module (30); and analyse the viewer facial expression image by the one or more emotion detection techniques to detect maximum interaction instant with respect to the video scene of interest, wherein the viewer facial expression is analysed continuously for detecting maximum interaction instant; a holographic simulation module (50) operable by the at least one IoT device (70) and operatively coupled to the facial analysis module (40), wherein the holographic simulation module (50) is configured to: create a holographic simulation of analysed video scene of interest based on detected maximum interaction instant of the viewer; and present created holographic simulation for the viewer in real time along with the video scene, wherein the created holographic simulation is presented via one or more simulated holographic devices.
2. The system (10) as claimed in claim 1, wherein the captured image frame comprises a set of static image frames and textual information.
3. The system (10) as claimed in claim 1, wherein the first set of image capturing devices (60) and the second set of image capturing devices (80) are affixed around a video displaying frame.
4. The system (10) as claimed in claim 1, wherein the one or more canisters and the one or more odour neutralizer canisters are affixed around the video displaying frame.
5. The system (10) as claimed in claim 1, wherein the odour matching technique includes selecting of odour after matching pre-stored details of the pre-stored set of odours in the one or more canisters.
6. The system (10) as claimed in claim 1, wherein the maximum interaction instant is evaluated by analysing the viewer facial expression data sets.
7. The system (10) as claimed in claim 1, wherein the one or more simulated holographic devices are affixed around a video displaying frame.
8. A method (150) for creating sensory stimuli events, comprising: capturing, by a video scene analysing module, an image frame corresponding to a video scene of interest (160); analysing, by the video scene analysing module, a captured image frame for odour evaluation (170); selecting, by an odour stimuli module, an odour from a pre-stored set of odours in accordance with the evaluated odour (180); dispersing, by the odour stimuli module, a selected odour from the one or more canisters (190); dispersing, by the odour stimuli module, an odour neutralizer from one or more odour neutralizer canisters (200); capturing, by a facial analysis module, a viewer facial expression image
(210); analysing, by the facial analysis module, the viewer facial expression image by one or more emotion detection techniques to identify the acceptance level of the dispersed odour (220); recalibrating, by the odour stimuli module, intensity of the dispersed odour (230); analysing, by the facial analysis module, the viewer facial expression image by the one or more emotion detection techniques to detect maximum interaction instant with respect to the video scene of interest (240);
creating, by a holographic simulation module, a holographic simulation of analysed video scene of interest based on detected maximum interaction instant of the viewer (250); and presenting, by the holographic simulation module, created holographic simulation for the viewer in real time along with the video scene (260).
9. The method (150) as claimed in claim 8, wherein analysing, by the video scene analysing module, the captured image frame by object detection technique to realize the odour consistent with the captured image frame.
10. The method (150) as claimed in claim 8, wherein selecting, by the odour stimuli module, the odour from the pre-stored set of odours by an odour matching technique.
11. The method (150) as claimed in claim 9, wherein selecting, by the odour stimuli module, the odour by the odour matching technique comprising matching of pre-stored details of the pre-stored set of odours in one or more canisters.

12. The method (150) as claimed in claim 8, wherein dispersing, by the odour stimuli module, the selected odour in real time in accordance with the captured image frame.
13. The method (150) as claimed in claim 8, wherein dispersing, by the odour stimuli module, odour neutralizer after a pre-determined time interval with respect to the dispersion of the selected odour.
14. The method (150) as claimed in claim 8, wherein recalibrating, by the odour stimuli module, intensity of the dispersed odour in accordance with the acceptance level.
15. The method (150) as claimed in claim 8, wherein analysing, by the facial analysis module, the viewer facial expression image continuously for detecting maximum interaction instant.
16. The method (150) as claimed in claim 8, wherein presenting, by the holographic simulation module, the created holographic simulation via one or more simulated holographic devices.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IN202021022556 | 2020-05-29 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021240226A1 (en) | 2021-12-02 |
Family
ID=78744203
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2020/057326 (WO2021240226A1) | System and method to create sensory stimuli events | 2020-05-29 | 2020-08-03 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2021240226A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115576250A (en) * | 2022-10-25 | 2023-01-06 | 杭州气味王国科技有限公司 | Remote odor generation device control system based on intelligent device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016007300A (en) * | 2014-06-24 | 2016-01-18 | 株式会社日立メディコ | Biological light measurement device and biological light measurement method |
CN106502075A (en) * | 2016-11-09 | 2017-03-15 | 微美光速资本投资管理(北京)有限公司 | A kind of holographic projection methods |
KR20190007771A (en) * | 2017-07-13 | 2019-01-23 | 한국전자통신연구원 | Apparatus and method for generation of olfactory information related to multimedia contents |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20937940; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20937940; Country of ref document: EP; Kind code of ref document: A1 |