WO2023018309A1 - Method and apparatus for generating localized vibrations - Google Patents

Method and apparatus for generating localized vibrations

Info

Publication number
WO2023018309A1
Authority
WO
WIPO (PCT)
Prior art keywords
identified
contents
vibration frequency
electronic device
audio
Application number
PCT/KR2022/012144
Other languages
English (en)
Inventor
Gaurav SIKARWAR
Baljeet KUMAR
Abhishek Mishra
Vipul Krishan DHUNNA
Ashutosh Gupta
Original Assignee
Samsung Electronics Co., Ltd.
Application filed by Samsung Electronics Co., Ltd. filed Critical Samsung Electronics Co., Ltd.
Publication of WO2023018309A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1656Details related to functional adaptations of the enclosure, e.g. to provide protection against EMI, shock, water, or to host detachable peripherals like a mouse or removable expansions units like PCMCIA cards, or to provide access to internal components for maintenance or to removable storage supports like CDs or DVDs, or to mechanically mount accessories
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016Input arrangements with force or tactile feedback as computer generated output to the user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus

Definitions

  • the disclosure relates to a method and an apparatus for generating localized vibrations.
  • the disclosure relates to a method and an apparatus for generating a localized haptic feedback effect in an electronic device.
  • a vibration motor may be used to generate vibrations to notify users in response to an event.
  • the vibration generated by these electronic devices is of very high intensity. Vibrations are used for tactile feedback in touch-based electronic devices.
  • when touch-based smartphones were introduced, users were not familiar with tapping on a smooth display instead of pushing physical buttons. For this reason, small vibration feedback was added during tapping. Further technological uses of vibrations began to be applied relatively recently.
  • the disclosure provides the method and the apparatus for generating the localized haptic feedback effect in the electronic device.
  • the method includes identifying, by a processor, one or more contents on a display screen of the electronic device, dividing the display screen into a plurality of grids, determining one or more haptic event locations associated with the identified one or more contents on the plurality of grids, classifying the one or more haptic events with respect to a level of haptic feedback to be generated at the one or more haptic event locations, determining an optimized vibration frequency based on the classified level of haptic feedback to be generated at the one or more haptic event locations for the identified one or more contents based on at least one of a plurality of environmental parameters and a capability of generation of sound source by one or more audio sources of the electronic device, and generating, based on the determined optimized vibration frequency, a calibrated optimized vibration frequency by calibration of the optimized vibration frequency and an amplitude for the generation of the localized haptic feedback effect on the determined one or more haptic event locations by using the one or more audio sources of the electronic device.
  • FIGS. 1A, 1B, 1C, and 1D illustrate an example of a vibration effect
  • FIG. 2 illustrates a block diagram of a device for generating a localized haptic feedback effect, in accordance with an embodiment of the disclosure
  • FIG. 3 illustrates a flowchart for generating a localized haptic feedback effect, in accordance with an embodiment of the disclosure
  • FIG. 4 illustrates a detailed architecture of the device including operational processes, in accordance with an embodiment of the disclosure
  • FIGS. 5A, 5B, 5C, and 5D illustrate examples of a content identification process, in accordance with an embodiment of the disclosure
  • FIGS. 6A, 6B, 6C, and 6D illustrate examples of detecting environmental parameters, in accordance with an embodiment of the disclosure
  • FIGS. 7A and 7B illustrate an example of a triangulation technique for calculating audio source coordinates, in accordance with an embodiment of the disclosure
  • FIGS. 8A and 8B illustrate an example of a feature extraction process, in accordance with an embodiment of the disclosure
  • FIG. 9A illustrates a process of operations performed by the Frequency Mapping Unit, in accordance with an embodiment of the disclosure
  • FIG. 9B illustrates a process of operations performed by the Size based Scaling Unit, in accordance with an embodiment of the disclosure
  • FIG. 10A illustrates an example process of operations performed by the Amplitude Calculation Unit, in accordance with an embodiment of the disclosure
  • FIG. 10B illustrates an example frequency identification by the Frequency Calculation Unit, in accordance with an embodiment of the disclosure
  • FIG. 11 illustrates an example process of operations performed by Coordinate & Frequency Alignment Unit, in accordance with an embodiment of the disclosure
  • FIGS. 12A and 12B illustrate an example process of operations performed by the Coordinate & Frequency Alignment Unit, in accordance with an embodiment of the disclosure
  • FIGS. 13A and 13B illustrate an example of amplitude calibration process performed by the Calibration Engine 406, in accordance with an embodiment of the disclosure
  • FIGS. 14A and 14B illustrate an example process of generating sound wavelets by Production Unit, in accordance with an embodiment of the disclosure
  • FIG. 15 illustrates another example of generating sound wavelets by Production Unit, in accordance with an embodiment of the disclosure
  • FIGS. 16A, 16B, 16C, 16D, 16E, and 16F illustrate an example process of obtaining a unique vibration frequency, in accordance with an embodiment of the disclosure
  • FIGS. 17A, 17B, and 17C illustrate examples of generating a localized haptic feedback effect on the determined one or more haptic event locations, in accordance with an embodiment of the disclosure
  • FIG. 18 illustrates a first use case of providing a real-time localized haptic feedback effect to a user, in accordance with an embodiment of the disclosure
  • FIG. 19 illustrates a second use case of providing a real-time localized haptic feedback effect to the user, in accordance with an embodiment of the disclosure
  • FIG. 20 illustrates a third use case of providing a real-time localized haptic feedback effect to the user, in accordance with an embodiment of the disclosure.
  • FIG. 21 illustrates a block diagram of an electronic device that executes the processes of FIG. 2 and FIG. 4, in accordance with an embodiment of the disclosure.
  • any terms used herein such as but not limited to “includes,” “comprises,” “has,” “consists,” and grammatical variants thereof do NOT specify an exact limitation or restriction and certainly do NOT exclude the possible addition of one or more features or elements, unless otherwise stated, and furthermore must NOT be taken to exclude the possible removal of one or more of the listed features and elements, unless otherwise stated with the limiting language “MUST comprise” or “NEEDS TO include.”
  • the vibrations used for tactile feedback are being produced using various motors including a DC motor that is effective in creating a buzzing and motional sensation in a device.
  • the vibrations produced by the DC motor are usually centralized on a touch-based display of electronic devices and have a damping effect from the place where the DC motor is positioned in the touch-based electronic devices.
  • when the DC motor produces the vibration, the whole electronic device gets a vibration effect, and the vibration effect is not specific to a particular location on the electronic device.
  • DC motors bear some extra cost and utilize the power of the electronic device to produce the desired effect.
  • the entire phone vibrates when the DC motor produces the vibration, which clearly shows an absence of localized vibration.
  • the vibrations produced by DC motors during an incoming call and video playback are centralized.
  • the electronic device vibrates at multiple points due to an effect of the vibrations produced by the DC motors.
  • the disclosure provides the method and the apparatus for generating the localized haptic feedback effect in the electronic device.
  • FIG. 2 illustrates a block diagram of a device for generating a localized haptic feedback effect, in accordance with an embodiment of the disclosure.
  • FIG. 2 illustrates an electronic device 2000 to generate the localized haptic feedback effect.
  • the electronic device 2000 includes an Application Framework 2101, a Hardware Layer 2300, and a processor 2200 which further includes an Identification Engine 2202, an Approximation Engine 2204, a Calibration Engine 2206, and a Generation Engine 2208.
  • the aforementioned components of the electronic device are coupled with each other.
  • Each of the Identification Engine 2202, the Approximation Engine 2204, the Calibration Engine 2206, and the Generation Engine 2208 is communicatively coupled to the Hardware Layer 2300 and the Application Framework 2101.
  • the Identification Engine 2202, the Approximation Engine 2204, the Calibration Engine 2206, and the Generation Engine 2208 are implemented as the processor 2200.
  • the Hardware Layer 2300 of the electronic device 2000 includes a display 2310, an audio unit 2320, and a sensor 2330.
  • the display 2310 includes a touch screen panel 2311 and a graphics engine 2312 coupled with the touch screen panel 2311.
  • the display 2310 displays an image or video content. Examples of the display 2310 may include, but are not limited to, a television screen, a smartphone screen, a smart television screen, and a tablet screen.
  • the display 2310 may be Light Emitting Diode (LED), Liquid Crystal Display (LCD), Organic Light Emitting Diode (OLED), Active-Matrix Organic Light Emitting Diode (AMOLED), or Super Active-Matrix Organic Light Emitting Diode (SAMOLED) screen.
  • the display 2310 may have varied resolutions. It will be understood to a person of ordinary skill in the art that the disclosure is not limited to any type or any resolution of the display 2310.
  • the audio unit 2320 includes a speaker unit 2321 including one or more speakers, and at least one microphone 2322.
  • the sensor 2330 may include one or more sensors.
  • the sensor 2330 includes, but is not limited to, a grip sensor 2331, an accelerometer 2332, a gyroscope sensor 2333, and a humidity sensor 2339.
  • the sensor 2330 may include sensors different from those described above.
  • the Identification Engine 2202 identifies one or more contents displayed on the display 2310. After identifying the one or more contents displayed on the display 2310, the Identification Engine 2202 divides the display screen into a plurality of grids and determines occurrences and corresponding locations of one or more haptic events on the plurality of grids. The locations of one or more haptic events are associated with the identified one or more contents.
  • the one or more haptic event locations can also be referred to as "one or more vibration locations" without deviating from the scope of the disclosure.
  • the Approximation Engine 2204 classifies one or more haptic events associated with the identified content.
  • the Approximation Engine 2204 classifies the one or more haptic events with respect to a level of haptic feedback to be generated at the one or more haptic event locations.
  • the one or more haptic events can also be referred to as "one or more vibration events” without deviating from the scope of the disclosure
  • the level of haptic feedback can also be referred to as "a frequency level of the one or more vibration events” without deviating from the scope of the disclosure.
  • the Calibration Engine 2206 determines an optimized vibration frequency according to the classified level of haptic feedback to be generated at the one or more haptic event locations for the identified one or more contents.
  • the Calibration Engine 2206 determines the optimized vibration frequency based on at least one of a plurality of environmental parameters and a capability of generating sound by at least one of the speaker unit 2321, and the at least one microphone 2322 of the audio unit 2320.
  • the Generation Engine 2208 generates a calibrated optimized vibration frequency based on the determined optimized vibration frequency by calibration of the optimized vibration frequency and amplitude for the generation of the localized haptic feedback effect on the determined one or more haptic event locations.
  • the Generation Engine 2208 generates the calibrated optimized vibration frequency by using at least one of the one or more speakers of the speaker unit 2321 and/or the at least one microphone 2322 included in the audio unit 2320.
  • FIG. 3 illustrates a flowchart for generating a localized haptic feedback effect, in accordance with an embodiment of the disclosure. The operations of FIG. 3 may be performed by the processor 2200 of the electronic device 2000.
  • the processor 2200 may identify one or more contents displayed on a display screen of the display 2310 in operation 302.
  • the processor 2200 may divide the display screen into a plurality of grids.
  • the processor 2200 may determine one or more haptic event locations on the plurality of grids.
  • the one or more haptic event locations are associated with the identified one or more contents.
  • the Identification Engine 2202 of the processor 2200 may perform each of the operations 302, 304, and 306 of the method 300.
  • the method 300 comprises classifying one or more haptic events associated with the identified content with respect to a level of haptic feedback to be generated at the one or more haptic event locations.
  • the Approximation Engine 2204 of the processor 2200 performs the operation 308.
  • the method 300 comprises determining an optimized vibration frequency according to the classified level of haptic feedback to be generated at the one or more haptic event locations for the identified one or more contents, based on at least one of a plurality of environmental parameters and a capability of generating sound by at least one of the speaker unit 2321, and the at least one microphone 2322 of the audio unit 2320 included in the electronic device 2000.
  • the Calibration Engine 2206 of the processor 2200 may perform the operation 310.
  • the method 300 comprises generating a calibrated optimized vibration frequency based on the determined optimized vibration frequency by calibration of the optimized vibration frequency and amplitude for the generation of the localized haptic feedback effect on the determined one or more haptic event locations, using at least one of the speaker unit 2321, and the at least one microphone 2322 of the audio unit 2320 included in the electronic device 2000.
  • FIG. 4 illustrates a detailed architecture of the device including operational processes, in accordance with an embodiment of the disclosure.
  • the electronic device 400 includes an Identification Engine 402, an Approximation Engine 404, a Calibration Engine 406, a Generation Engine 408.
  • Each of the Identification Engine 402, the Approximation Engine 404, the Calibration Engine 406, and the Generation Engine 408 corresponds to the Identification Engine 2202, the Approximation Engine 2204, the Calibration Engine 2206, and the Generation Engine 2208 of the electronic device 2000, respectively.
  • Each of the Identification Engine 402, the Approximation Engine 404, the Calibration Engine 406, and the Generation Engine 408 performs operations similar to operations performed by the processor 2200 or the Identification Engine 2202, the Approximation Engine 2204, the Calibration Engine 2206, and the Generation Engine 2208 as described above, respectively.
  • the Identification Engine 402, the Approximation Engine 404, the Calibration Engine 406, and the Generation Engine 408 may be implemented as the processor 2200.
  • the electronic device 400 corresponds to the electronic device 2000 of FIG. 2.
  • the operations performed by the Identification Engine 402, the Approximation Engine 404, the Calibration Engine 406, and the Generation Engine 408 will be explained in detail with reference to FIGS. 4 through 17C.
  • the Identification Engine 402 of the electronic device 400 includes a Mode Selection Unit 402A, a Fast Region-Based Convolutional Neural Networks (R-CNN) based Object detection unit 402B, and an Environment Identification unit 402C.
  • the Mode Selection Unit 402A of the Identification Engine 402 determines whether a current mode of the electronic device 400 is a multimedia mode and, in a case in which it is determined that the current mode is the multimedia mode, checks whether identification of the one or more contents is required based on application data of the electronic device 400 and the one or more contents displayed on the display screen. In an embodiment, the Mode Selection Unit 402A of the Identification Engine 402 identifies whether the one or more contents displayed on the display screen is at least one of multimedia content, an image, and a user interface (UI) element.
  • the R-CNN based Object detection unit 402B of the Identification Engine 402 identifies one or more objects based on a result of the identification of the one or more contents displayed on the display screen. In order to identify the one or more objects, firstly the R-CNN based Object detection unit 402B divides an input frame of the multimedia mode that is displayed on the display screen into the plurality of grids. Secondly, the R-CNN based Object detection unit 402B determines the one or more haptic event locations on the divided plurality of grids. The one or more haptic event locations are associated with the identified one or more contents.
  • the R-CNN based Object detection unit 402B determines a content position of the identified one or more contents based on determined one or more haptic event locations.
  • the one or more haptic event locations may correspond to a location of objects in a multimedia scene, a location of an object in content displayed on the display screen, a location on the display screen on which a tap operation is performed by a user, or a location of a focused view object identified on the display screen.
  • the one or more haptic event locations are not limited to the aforementioned examples.
  • the one or more haptic event locations can be a location on the display screen other than the above-described examples.
  • the R-CNN based Object detection unit 402B obtains corresponding coordinates of the identified one or more objects based on the content position of the identified one or more contents.
  • the one or more haptic event locations correspond to the obtained coordinates and can be defined as the coordinates for the haptic event locations.
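  • For illustration only, the sketch below shows how a bounding box produced by an object detector could be mapped onto the grid cells used as haptic event locations. The function names, display size, and grid dimensions are assumptions made for this sketch and are not taken from the disclosure.

```python
def grid_cell_size(width, height, rows, cols):
    """Pixel size of one grid cell for a display of width x height divided into rows x cols."""
    return width / cols, height / rows

def bbox_to_grid_cells(bbox, cell_w, cell_h):
    """Map a bounding box (x1, y1, x2, y2) to the set of (row, col) grid cells it covers."""
    x1, y1, x2, y2 = bbox
    cells = set()
    for col in range(int(x1 // cell_w), int(x2 // cell_w) + 1):
        for row in range(int(y1 // cell_h), int(y2 // cell_h) + 1):
            cells.add((row, col))
    return cells

# Hypothetical 1080x2400 display divided into an 8x16 grid, with one detected bounding box.
cell_w, cell_h = grid_cell_size(1080, 2400, rows=16, cols=8)
print(sorted(bbox_to_grid_cells((200, 600, 420, 900), cell_w, cell_h)))
```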
  • FIGS. 5A, 5B, 5C, and 5D illustrate examples of a content identification process, in accordance with an embodiment of the disclosure.
  • FIGS. 5A, 5B, 5C, and 5D illustrate examples of the determination of the current mode by the Mode Selection Unit 402A and the identification of the one or more contents by the R-CNN based Object detection unit 402B of FIG. 4, in accordance with an embodiment of the disclosure.
  • a first terminal device 500, a second terminal device 514, and a third terminal device 516 display a first multimedia scene 502, a second multimedia scene 518, and a chatting window with Keyboard 520, respectively.
  • a focused view display screen 522 is also shown in FIG. 5D.
  • the Mode Selection Unit 402A of the Identification Engine 402 determines that the current mode of the first terminal device 500 is the multimedia mode based on the display of the multimedia scene on the display screen. Then, the R-CNN based Object detection unit 402B identifies a set of objects 506 in the multimedia scene displayed on the display screen. Further, the R-CNN based Object detection unit 402B divides the multimedia scene into grids 504 and determines the one or more haptic event locations on the divided grids 504. Furthermore, the R-CNN based Object detection unit 402B determines a respective position of the identified set of objects 506 on the divided grids 504 as locations of haptic events.
  • the respective position of the identified set of objects 506 determined by the R-CNN based Object detection unit 402B can be represented in the form of object coordinates.
  • Table 1 illustrates an example representation of the object coordinates determined by the R-CNN based Object detection unit 402B.
  • the region of interest (ROI) corresponds to the set of objects 506 (i.e., three vehicles (Jeeps) as shown in FIG. 5A).
  • the right side of table 1 indicates bounding box coordinates corresponding to the coordinates of the set of objects 506.
  • the aforementioned Table 1 is merely an example. It can include different data based on the multimedia mode of the electronic device 400 and can be arranged in any other format based on system requirements.
  • the R-CNN based Object detection unit 402B may identify an object 508 of the second Multimedia scene 518 displayed on the second terminal device 514 in FIG. 5B, a tap input location 510 corresponding to a tap operation in the chatting window with Keyboard 520 displayed on the third terminal device 516 in FIG. 5C, and a focused view location 512 in the focused view display screen 522 in FIG. 5D.
  • the Mode Selection Unit 402A of the Identification Engine 402 may also determine touch coordinates of a user tap input on the display screen and transfers the touch coordinates to the Approximation Engine 404.
  • the Environment Identification unit 402C of the Identification Engine 402 detects a plurality of environmental parameters of the electronic device 400.
  • the Environment Identification unit 402C receives, from the one or more sensors, sensed data - environmental data - detected by the one or more sensors.
  • the Environment Identification unit 402C may receive sensed data from at least one of the grip sensor 2331, the accelerometer 2332, the gyroscope sensor 2333, or the humidity sensor 2339.
  • the Environment Identification unit 402C detects the plurality of environmental parameters that correspond to at least one of a state of the electronic device 400, a surface on which the electronic device 400 is placed, and an orientation of the electronic device 400.
  • the Environment Identification unit 402C receives the sensor data from the sensor 2330 and calculates surface values associated with the surface on which the electronic device 400 is placed based on the received sensor data.
  • An example of such sensor data is shown below in Table 2.
  • Table 2 illustrates the sensor data for detection of the plurality of environmental surfaces on which the electronic device 400 is placed.
  • the aforementioned Table 2 is merely an example and not limited to the above-described example. It can include sensor data different from the sensor data of table 2 and can be arranged in any other format based on the requirement by the electronic device 400. Further, the Environment Identification unit 402C transfers the detected plurality of environmental parameters to the Calibration Engine 406.
  • FIGS. 6A, 6B, 6C, and 6D illustrate examples of detecting environmental parameters, in accordance with an embodiment of the disclosure.
  • FIG. 6A illustrates an example of the environmental parameters detected by Environment Identification unit 402C of FIG. 4, in accordance with an embodiment of the disclosure. Depicted are a location of a phone in water in FIG. 6A, a phone on a hard surface in FIG. 6B, and a phone in a hand of a user in FIG. 6C.
  • the Environment Identification unit 402C receives the sensor data including information about at least one of the locations of the phone in the water, on the hard surface, or in the hand of the user.
  • the Environment Identification unit 402C also identifies a current mode of the electronic device 400.
  • the Environment Identification unit 402C identifies one of a vibration mode, a ringing mode, or a silent mode of the electronic device 400 using the sensor data. Further, subsequent to the reception of the sensor data and the identification of the current mode, the Environment Identification unit 402C calculates the environmental parameters based on the sensor data and the current mode of the electronic device 400.
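  • As a rough illustration of how such environmental parameters might be derived from the sensed data, the sketch below classifies the placement surface of the device; the thresholds and sensor representations are placeholders, not values from the disclosure.

```python
import statistics

def classify_surface(accel_samples, grip_detected, humidity_pct):
    """Roughly classify the surface the device is placed on from sensed data.
    The thresholds below are arbitrary placeholders for illustration."""
    jitter = statistics.pstdev(accel_samples)   # spread of accelerometer readings
    if humidity_pct > 95.0:
        return "in water"
    if grip_detected:
        return "in hand"
    return "hard surface" if jitter < 0.05 else "soft surface"

# Hypothetical readings: low accelerometer jitter, no grip detected, normal humidity.
print(classify_surface([9.80, 9.81, 9.80, 9.79], grip_detected=False, humidity_pct=40.0))
```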
  • the Approximation Engine 404 of the electronic device 400 includes an Audio Processing Engine 404A, a speaker position determination unit 404B, a scene understanding unit 404D, a Frequency Mapping Unit 404E, and a Size based Scaling Unit 404F.
  • the Audio Processing Engine 404A identifies one or more audio sources associated with the identified one or more contents and determines an audio source position of the identified one or more audio sources based on audio sources available in the electronic device 400. In order to determine the audio source position of the identified one or more audio sources, firstly the Audio Processing Engine 404A extracts built-in audio sources information including speaker information and microphone information associated with the speaker unit 2321 and the at least one microphone 2322.
  • the built-in audio sources information may include information associated with application programming interfaces (APIs) and hardware IDs of the electronic device 400.
  • the speaker information includes coordinates of the one or more speakers included in the speaker unit 2321 and features of the one or more speakers. As an example, the features of the one or more speakers may correspond to a surround sound feature.
  • the Audio Processing Engine 404A determines whether a position or coordinates of at least one of the one or more speakers or the at least one microphone 2322 is present in the extracted built-in audio sources information.
  • if a result of the determination at the block 404C is Yes, the Audio Processing Engine 404A transfers the positions or the coordinates of at least one of the one or more speakers or the at least one microphone 2322 to the Calibration Engine 406. If a result of the determination at the block 404C is No, then the speaker position determination unit 404B calculates the corresponding coordinates by a triangulation technique. The corresponding coordinates correspond to the audio source position of the identified one or more audio sources.
  • FIGS. 7A and 7B illustrate an example of a triangulation technique for calculating audio source coordinates, in accordance with an embodiment of the disclosure.
  • FIG. 7A illustrates an example of information associated with the one or more audio sources available in the electronic device 400.
  • the syntax TYPE_BUILTIN_EARPIECE indicates an earpiece source (Sa) and the syntax TYPE_BUILTIN_SPEAKER indicates a speaker source (Sb).
  • respective syntax TYPE_BUILTIN_MIC in FIG. 7A indicates a first microphone source (a) and a second microphone source (b).
  • the speaker position determination unit 404B calculates corresponding coordinates of the speaker source, the earpiece source, the first microphone source, and the second microphone source in a 2-dimensional (2D) coordinate system by a triangulation technique.
  • the speaker position determination unit 404B may also calculate the corresponding coordinates in a 3-dimensional (3D) coordinate system.
  • the speaker position determination unit 404B calculates a specific coordinate of each of the speaker source, the earpiece source, the first microphone source, and the second microphone source on the X-Y axis of the 2D coordinate system axis using the triangulation technique described in FIG. 7B.
  • two speaker coordinates (-S, 0) and (S, 0) are on the X-axis of the 2D coordinate system. Also disclosed is a coordinate (0, m) of a reference microphone on the Y-axis of the 2D coordinate system.
  • the speaker position determination unit 404B may use the below equations (1) and (2) to calculate a specific coordinate (x, y) of a target microphone in the 2D coordinate system.
  • R1 and R2 correspond to the radial distances from each of the speaker and the earpiece to the target microphone.
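  • Equations (1) and (2) are not reproduced in this text. As a hedged reconstruction from the surrounding description, two circle equations centred on the two sound sources at (-S, 0) and (S, 0), with measured radial distances R1 and R2, would determine the unknown microphone coordinate (x, y) as sketched below; this follows the labelling of FIG. 7B as described above and is not the disclosure's own formulation.

```latex
% Hedged reconstruction, not the disclosure's equations (1) and (2):
% sound sources at (-S, 0) and (S, 0), unknown microphone at (x, y).
(x + S)^2 + y^2 = R_1^2 \qquad \text{(1)}
(x - S)^2 + y^2 = R_2^2 \qquad \text{(2)}
% Subtracting (2) from (1) gives a closed-form solution:
x = \frac{R_1^2 - R_2^2}{4S}, \qquad y = \sqrt{R_1^2 - (x + S)^2}
```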
  • the speaker position determination unit 404B may also store the calculated coordinates in a database.
  • An example of the calculated coordinates is described below in Table 3.
  • Table 3 illustrates a sample table including coordinate information of the audio sources. In Table 3, Speaker 1 may correspond to the earpiece and Speaker 2 to the speaker, respectively.
  • the speaker position determination unit 404B transfers the calculated coordinates of the one or more audio sources to the Calibration Engine 406.
  • the Scene Understanding Unit 404D may acquire audio information and video information associated with the multimedia scene and extract a plurality of features of the identified one or more contents to classify the one or more haptic events associated with the identified one or more contents into a plurality of action classes.
  • the plurality of features of the identified one or more contents is extracted from the acquired audio information and video information associated with the multimedia scene.
  • the plurality of features of the identified one or more contents includes at least one of a size of the identified one or more objects and an audio amplitude of the identified one or more objects.
  • the Scene Understanding Unit 404D extracts the plurality of features of the identified one or more contents by use of a Convolutional Neural Network (CNN). An example of such extraction will be described with reference to FIGS. 8A and 8B.
  • FIGS. 8A and 8B illustrate an example of a feature extraction process, in accordance with an embodiment of the disclosure.
  • FIG. 8A discloses the multimedia scene including three objects (i.e., Jeep 1, Jeep 2, Jeep 3).
  • the Scene Understanding Unit 404D analyzes each of the video frames included in the acquired video information and also analyzes each of the audio frames included in the acquired audio information. Thereafter, the Scene Understanding Unit 404D determines feature vectors for actions in the video frames using the CNN network.
  • each of a plurality of video content (video 1, video 2, ..., video m) is sampled into a plurality of frames. Each of the sampled frames is analyzed to identify a presence of objects using the CNN network and thereafter feature vectors for actions are recognized and averaged as per action categories.
  • K in FIG. 8B corresponds to a number of the action categories.
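  • As a toy sketch of the averaging step described above (per-frame feature vectors averaged per action category), the following code operates on placeholder features; the array shapes and values are assumptions, not outputs of the disclosed CNN.

```python
import numpy as np

def average_per_action_category(frame_features, frame_labels, k):
    """Average per-frame feature vectors over each of the K action categories.
    Inputs are placeholders standing in for the CNN output described with FIG. 8B."""
    frame_features = np.asarray(frame_features)          # shape: (num_frames, feature_dim)
    averaged = np.zeros((k, frame_features.shape[1]))
    for category in range(k):
        mask = np.asarray(frame_labels) == category
        if mask.any():
            averaged[category] = frame_features[mask].mean(axis=0)
    return averaged                                       # shape: (K, feature_dim)

# Toy example: 4 sampled frames, 3-dimensional features, K = 2 action categories.
feats = [[1.0, 0.0, 0.0], [0.8, 0.2, 0.0], [0.0, 1.0, 0.5], [0.0, 0.9, 0.7]]
print(average_per_action_category(feats, [0, 0, 1, 1], k=2))
```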
  • the Frequency Mapping Unit 404E estimates a scene intensity with respect to each of the identified one or more contents based on at least one of the audio amplitude of the identified one or more objects, the size of the identified one or more objects, and a mapping of the identified one or more contents over the plurality of action classes of the one or more contents.
  • the Frequency Mapping Unit 404E estimates intensity of vibrations associated with the objects in multimedia scenes in the video information based on at least one of an intensity of audio associated with the identified one or more objects and mapping of frequency of the objects in the multimedia scenes with their action classes.
  • the scene intensity corresponds to the level of haptic feedback to be generated at the one or more haptic event locations. Accordingly, the Frequency Mapping Unit 404E classifies the one or more haptic events with respect to the level of haptic feedback based on the estimated scene intensity.
  • the one or more haptic events is associated with the identified set of objects 506, and the plurality of action classes is associated with a class of action corresponding to each of the identified set of objects 506.
  • the operations performed by the Frequency Mapping Unit 404E will now be described with the help of an example with reference to FIG. 8A.
  • as shown in FIG. 8A, the multimedia scene includes three objects (i.e., Jeep 1, Jeep 2, and Jeep 3).
  • the Scene Understanding Unit 404D determines feature vectors for actions in the video frames using the CNN network. For example, jeeps in FIG. 8A associated with a blast will have a higher intensity than jeeps moving normally in the multimedia scene. Accordingly, the Frequency Mapping Unit 404E maps the frequencies of the objects in the multimedia scenes with their action classes based on an amplitude of the audio related to the objects in the multimedia scenes and further estimates the intensity of the vibrations of the objects in the multimedia scenes.
  • the Frequency Mapping Unit 404E sets default vibration intensity values.
  • the default vibration intensity values are intensity values that are included in the settings of the electronic device 400. For example, any button or checkbox in a display content that needs to be highlighted should have device default vibration intensity values.
  • the Frequency Mapping Unit 404E categorizes the estimated scene intensity with respect to each of the identified one or more contents as one of a high, a medium, a low, or a default scene intensity value.
  • Table 4 illustrates a categorization example of the estimated scene intensity.
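  • Since the contents of Table 4 are not reproduced here, the sketch below only illustrates the idea of bucketing an estimated scene intensity into high, medium, low, or default; the numeric thresholds are placeholders, not values from Table 4.

```python
def categorize_scene_intensity(estimated_intensity, has_audio_and_action_class=True):
    """Bucket an estimated scene intensity into high / medium / low / default.
    The numeric thresholds are illustrative placeholders, not values from Table 4."""
    if not has_audio_and_action_class:
        return "default"            # fall back to the device's default vibration intensity
    if estimated_intensity >= 0.7:
        return "high"
    if estimated_intensity >= 0.4:
        return "medium"
    return "low"

# Hypothetical normalized intensities for the three jeeps of FIG. 8A.
print([categorize_scene_intensity(v) for v in (0.9, 0.55, 0.2)])   # ['high', 'medium', 'low']
```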
  • FIG. 9A illustrates a process of operations performed by the Frequency Mapping Unit, in accordance with an embodiment of the disclosure.
  • the process starts with operation 900, in which the Frequency Mapping Unit 404E checks for the presence of at least one object in the multimedia scene, the location of the object based on the coordinates corresponding to the at least one object, and any audio information associated with the at least one object using data from the Mode Selection Unit 402A and the R-CNN based Object detection unit 402B.
  • the Frequency Mapping Unit 404E checks for a surface on which the electronic device 400 is placed and locations of the one or more audio sources available in the electronic device 400 using the sensor data and the coordinates of the audio sources stored in a database 908.
  • the Frequency Mapping Unit 404E determines whether the vibration is needed for the at least one object recognized in operation 900. If the result of the determination in operation 904 is No, then the operation of the Frequency Mapping Unit 404E is stopped. If the result of the determination in operation 904 is Yes (vibration needed), the Frequency Mapping Unit 404E maps the frequency of the at least one object recognized in operation 900 with a variable frequency associated with an action class of the at least one object.
  • the Frequency Mapping Unit 404E stores each of the mapped frequency values, the estimated scene intensity values, and their coordinates in the database.
  • An example of the estimated scene intensity values is shown in Table 5 with reference to FIG. 8A.
  • the Size based Scaling Unit 404F scales the intensity estimated by the Frequency Mapping Unit 404E.
  • the Size based Scaling Unit 404F scales the estimated intensity based on the size of the identified one or more objects.
  • the Size based Scaling Unit 404F classifies a size of each of the one or more objects identified by the Identification Engine 402. For example, if an object identified by the Identification Engine 402 has a large size, then the Size based Scaling Unit 404F classifies the object as having a greater intensity and scales an audio frequency of the object according to the size of the object identified by the Identification Engine 402.
  • FIG. 9B illustrates a process of operations performed by the Size based Scaling Unit 404F of FIG. 4, in accordance with an embodiment of the disclosure.
  • the process starts with operation 910, in which the Size based Scaling Unit 404F checks for the presence of at least one object in the multimedia scene, the location of the at least one object based on the coordinates corresponding to the at least one object, and any audio information associated with the at least one object using data from the Mode Selection Unit 402A and the R-CNN based Object detection unit 402B.
  • the Size based Scaling Unit 404F matches the audio frequency of the at least one object recognized in operation 910 with a size of the at least one object recognized in operation 910.
  • the Size based Scaling Unit 404F determines whether frequency scaling of the audio associated with the at least one object recognized in operation 910 is required based on a result of the match process in operation 912. If the result of the determination in operation 914 is No, then the operation of the Size based Scaling Unit 404F is stopped. If the result of the determination in operation 914 is Yes, then the Size based Scaling Unit 404F scales the audio frequency of the at least one object recognized in operation 910 based on the size of the at least one object on the display screen.
  • the Size based Scaling Unit 404F generates a list of the scaled audio frequency values of the one or more objects by performing the operations 910 through 916 and stores the scaled audio frequency values in the database in operation 908.
  • An example of the scaled audio frequency values with reference to FIG. 8A is shown below in Table 6. As shown in Table 6, the frequency column includes scaled audio frequency values of the one or more objects (jeeps).
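  • The sketch below illustrates one plausible size-based scaling rule in the spirit of the Size based Scaling Unit 404F; the linear scaling factor and its bounds are assumptions for this sketch and are not specified by the disclosure.

```python
def scale_frequency_by_size(base_frequency_hz, object_area_px, screen_area_px):
    """Scale an object's audio frequency by its relative on-screen size.
    The linear scaling rule and the 0.5 floor are assumptions for illustration."""
    size_ratio = object_area_px / screen_area_px          # fraction of the screen the object covers
    factor = 0.5 + min(size_ratio, 0.5)                   # small objects ~0.5x, half-screen objects ~1.0x
    return base_frequency_hz * factor

# Hypothetical example: a large jeep covering 30% of a 1080x2400 screen.
screen = 1080 * 2400
print(scale_frequency_by_size(440.0, object_area_px=0.3 * screen, screen_area_px=screen))
```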
  • the Calibration Engine 406 of the electronic device 400 includes an Amplitude Calculation Unit 406A, a Frequency Calculation Unit 406B, a Coordinate & Frequency Alignment Unit 406C, a Source Frequency Selection Unit 406E, a Noise Estimation Unit 406F, an Amplitude Rectification unit 406G, and a Sound Source Amplitude Selection Unit 406H.
  • the Amplitude Calculation Unit 406A calculates an audio amplitude of each of the identified one or more objects by using a Fast Fourier Transform (FFT).
  • the audio amplitude of the identified one or more objects is calculated based on the audio information corresponding to the identified one or more contents.
  • the Amplitude Calculation Unit 406A calculates the amplitude of a sound wave related to the identified one or more objects.
  • the amplitude for a smartphone can vary between 0 and 90 dB (decibels). This is only an example; the amplitude can vary over other ranges as well.
  • the audio amplitudes calculated by the Amplitude Calculation Unit 406A can be stored in the database in operation 908. An example of the stored data is shown below in Table 7 with reference to FIG. 8A.
  • the fourth column indicates the calculated audio amplitude values of the one or more objects (Jeep 1, Jeep 2, and Jeep 3 of FIG. 8A).
  • the audio amplitude values of the one or more objects may vary based on a type of object and the action class associated with the object in the multimedia scene.
  • the Amplitude Calculation Unit 406A sets default amplitude values for the objects for which no action class, scenes, and audio information is present in the displayed multimedia scene or any other display content to be displayed.
  • the default amplitude values correspond to the amplitude values included in the settings of the electronic device 400.
  • the Amplitude Calculation Unit 406A determines a sound intensity range of each of the identified one or more objects based on the calculated audio amplitude and maps the sound intensity range of each of the identified one or more objects with a predefined intensity frequency range. Now functions and operations of the Amplitude Calculation Unit 406A will be explained with reference to FIG. 10A of the drawings.
  • FIG. 10A illustrates an example process of operations performed by the Amplitude Calculation Unit, in accordance with an embodiment of the disclosure.
  • the process starts with operation 1000, in which the Amplitude Calculation Unit 406A checks a level of the estimated intensity of the vibration of the respective one or more objects.
  • the Amplitude Calculation Unit 406A identifies the coordinates of the one or more objects using the output of the Identification Engine 402.
  • the Amplitude Calculation Unit 406A determines the sound intensity range of each of the identified one or more objects based on the level of the estimated intensity of the vibration of the respective one or more objects. In operation 1006, after the determination of the sound intensity range, the Amplitude Calculation Unit 406A maps the sound intensity range of each of the identified one or more objects with a predefined intensity frequency range stored in an intensity frequency database (DB) in operation 1008.
  • the intensity frequency DB includes information including the intensity frequency range corresponding to a plurality of levels of the estimated intensity of the vibration. As an example, the information included in the intensity frequency DB is shown in Table 8 below.
  • Table 8 (Intensity Range / Frequency Range): Low: 0-200 kHz; Low to Medium: 200-300 kHz; Medium to High: 300-500 kHz
  • the left column of the Table 8 indicates the sound intensity range of each of the identified one or more objects and the right column of the Table 8 indicates the predefined intensity frequency range corresponding to the respective sound intensity range.
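  • As an illustrative sketch of the FFT-based amplitude estimate and the Table 8 lookup, the code below computes a rough amplitude in dB for an audio snippet and maps an intensity level to the frequency ranges given in Table 8; the dB reference offset and helper names are assumptions made for this sketch.

```python
import numpy as np

def audio_amplitude_db(samples):
    """Estimate a rough audio amplitude (in dB) from the peak of the FFT magnitude
    spectrum; the +90 dB reference offset is an arbitrary choice for illustration."""
    spectrum = np.abs(np.fft.rfft(np.asarray(samples, dtype=float)))
    peak = spectrum.max() / len(samples)
    return 20.0 * np.log10(max(peak, 1e-12)) + 90.0

def map_intensity_to_frequency_range(intensity_level):
    """Look up the intensity-to-frequency ranges of Table 8 (values in Hz, per the kHz figures given there)."""
    table = {"low": (0, 200_000), "medium": (200_000, 300_000), "high": (300_000, 500_000)}
    return table[intensity_level]

# Toy example: a 1 kHz tone sampled at 8 kHz, and the range for a "high" intensity object.
t = np.arange(800) / 8000.0
print(round(audio_amplitude_db(np.sin(2 * np.pi * 1000.0 * t)), 1))
print(map_intensity_to_frequency_range("high"))
```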
  • the Frequency Calculation Unit 406B determines a vibration frequency of each of the one or more objects based on the audio amplitude of each of the one or more objects and the estimated scene intensity associated with the one or more objects. As an example, the Frequency Calculation Unit 406B determines the vibration frequency for each of the one or more objects in the multimedia scene based on the mapping of the sound intensity range of each of the one or more objects over the corresponding predefined intensity frequency range stored in the intensity frequency DB.
  • the Frequency Calculation Unit 406B generates a list of vibration frequencies of the one or more objects based on the determined vibration frequency for each of the one or more objects.
  • An example list of the vibration frequencies of the one or more objects is shown below as an example Table 9 with reference to objects shown in FIG. 10B.
  • FIG. 10B illustrates an example frequency identification by the Frequency Calculation Unit, in accordance with an embodiment of the disclosure.
  • the amplitudes of the respective objects (jeep 1, jeep 2, and jeep 3) are 78 dB, 70 dB, and 28 dB, respectively.
  • the Frequency Calculation Unit 406B determines the vibration frequencies of the jeep 1, the jeep 2, and the jeep 3 based on the corresponding audio amplitudes of the jeep 1, the jeep 2, and the jeep 3 in the multimedia scene and the mapping of the sound intensity range of each of the amplitudes of the jeep 1, the jeep 2, and the jeep 3 over the corresponding predefined intensity frequency range stored in the intensity frequency DB 1008. As a result, desired vibration frequencies fa, fb, and fc are determined for the jeep 1, the jeep 2, and the jeep 3, respectively.
  • the desired vibration frequency fa for the jeep 1 is determined as 420 kHz
  • the desired vibration frequency fb for the jeep 2 is determined as 366 kHz
  • the desired vibration frequency fc for the jeep 3 is determined as 200 kHz.
  • the determination of the vibration frequency of each of the one or more objects is not limited to the aforementioned example.
  • the vibration frequency will be determined according to a change in the multimedia scenes and a display of the one or more contents on the display screen.
  • the Coordinate & Frequency Alignment Unit 406C determines a capability of generating the determined vibration frequency of each of the identified objects from the one or more audio sources available in the electronic device 400 based on at least one of the determined audio source position, identified object position, and the detected plurality of environmental parameters, thereby determining the capability of generation of sound source by the one or more audio sources available in the electronic device 400.
  • the process of determining the capability of generating the determined vibration frequency will be described with reference to FIG. 11.
  • FIG. 11 illustrates an example process of operations performed by Coordinate & Frequency Alignment Unit, in accordance with an embodiment of the disclosure.
  • the process of determining the capability of generating the determined vibration frequency of each of the identified objects starts with operation 1100, in which the Coordinate & Frequency Alignment Unit 406C calculates constructive and destructive interference parameters using the coordinates of the one or more objects and their frequencies in the multimedia scene.
  • the Coordinate & Frequency Alignment Unit 406C calculates a number of audio sources (i.e., speakers or mic) and their coordinates needed for generating the determined vibration frequency using coordinates of the one or more objects identified by the Identification Engine 402.
  • the Coordinate & Frequency Alignment Unit 406C determines whether the audio sources available in the electronic device 400 can generate the determined vibration frequency. In a case, if a result of the determination in operation 1104 is Yes, then the process flow proceeds now to the operation 1106. In operation 1106, the Coordinate & Frequency Alignment Unit 406C associates the determined vibration frequency to desired audio sources for the generation of the determined vibration frequency.
  • the Coordinate & Frequency alignment unit 406C determines an optimized vibration frequency for each of the identified one or more objects based on the audio amplitude for each of the identified one or more objects and the estimated scene intensity. Now, an example of the process of determining the optimized vibration frequency will be explained with reference to FIGS. 12A and 12B.
  • FIGS. 12A and 12B illustrate an example process of operations performed by the Coordinate & Frequency Alignment Unit, in accordance with an embodiment of the disclosure.
  • FIG. 12A discloses two vibration frequencies f1 and f2 and a resultant frequency fa.
  • the vibration frequency f1 corresponds to a frequency desired to be generated by an audio source s1 and the vibration frequency f2 corresponds to a frequency desired to be generated by an audio source s2.
  • the resultant frequency fa corresponds to a vibration frequency desired to be produced by the audio sources s1 and s2.
  • the Coordinate & Frequency Alignment Unit 406C may determine that there are two audio sources available to generate the resultant frequency fa.
  • the Coordinate & Frequency Alignment Unit 406C calculates the vibration frequencies (f1 and f2) of the available two audio sources (s1 and s2). In operation 1204, the Coordinate & Frequency Alignment Unit 406C calibrates the vibration frequencies of the one or more objects such that the audio sources available in the electronic device 400 such that the average of the calibrated frequencies lies around the desired frequency.
  • the vibration frequencies of the one or more objects are calibrated using coordinates of the audio sources available in the electronic device 400 and a regressive learning mechanism generated by a learning and feedback engine to be described later. The calibration of the vibration frequencies of the one or more objects will now be explained with an example.
  • the Coordinate & Frequency Alignment Unit 406C determines whether the audio sources available in the system can generate the calibrated frequency. If the result of the determination in operation 1206 is Yes, then the process of determining the optimized vibration frequency will come to an end. However in a case, if a result of the determination in operation 1206 is No, then the Coordinate & Frequency Alignment Unit 406C recalibrates the determined vibration frequency such that the audio sources available in the electronic device 400 can generate the desired frequency for vibration at the coordinates of each of the identified one or more objects identified by the Identification Engine 402.
  • the Coordinate & Frequency Alignment Unit 406C determines a list of the optimized vibration frequency for the identified one or more objects.
  • An example of the list of the optimized vibration frequency with reference to FIGS. 10B and 12A is shown in Table 10 below.
  • the Source Frequency Selection Unit 406E selects one optimized vibration frequency from the list of the optimized vibration frequency using the regressive learning mechanism. After the selection of the optimized vibration frequency, the Source Frequency Selection Unit 406E determines a required vibration frequency required by at least one audio source of the one or more audio sources so as to generate the selected optimized vibration frequency at the coordinates of the one or more objects identified by the Identification Engine 402.
  • the Source Frequency Selection Unit 406E uses a wave mechanism so as to generate the selected optimized vibration frequency at the coordinates of the one or more objects.
  • a resultant sound wave x = 2X cos(πfBt) cos(2πfavgt) has a frequency of favg, where favg is equal to an average frequency of the one or more audio sources available in the electronic device 400 and fB is the difference (beat) frequency between the source frequencies.
  • the vibration frequency f1 of the audio source s1 will be 400 kHz and the vibration frequency f2 of the audio source s2 will be 348 kHz.
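  • A purely illustrative sketch of this selection step, following the superposition relation above: the helper name and the 26 kHz offset are assumptions, chosen only so that the 400 kHz / 348 kHz pair mentioned above falls out.

```python
def split_for_beat(desired_avg_hz, offset_hz):
    """Pick two source frequencies whose superposition has the desired average frequency:
    for equal amplitudes X, x(t) = 2X cos(pi*fB*t) cos(2*pi*favg*t), where
    favg = (f1 + f2) / 2 and fB = |f1 - f2| is the beat frequency."""
    f1 = desired_avg_hz + offset_hz
    f2 = desired_avg_hz - offset_hz
    return f1, f2, (f1 + f2) / 2.0, abs(f1 - f2)

# A 26 kHz offset around a 374 kHz average gives the 400 kHz / 348 kHz pair for s1 and s2.
print(split_for_beat(374_000, 26_000))   # (400000, 348000, 374000.0, 52000)
```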
  • the Noise estimation Unit 406F calculates the amplitude value for the determined required vibration frequency based on a comparison of amplitude values of audio content with vibration and amplitude values of actual audio content.
  • An example of amplitude values calculated for the determined required vibration is shown below in Table 11.
  • the Noise estimation Unit 406F further determines extra noise in the audio content with the determined required vibration frequency and compares the determined extra noise with a predefined noise threshold value.
  • the Amplitude Rectification unit 406G calibrates an amplitude of the required vibration frequency in a case if the determined extra noise is greater than the predefined noise threshold value as a result of the comparison.
  • the amplitude of the required vibration frequency is calibrated with respect to a required frequency to minimize the extra noise due to the determined required vibration frequency.
  • the Amplitude Rectification unit 406G generates a list of calibrated amplitude values respective to the one or more objects identified by the Identification Engine 402. An example of the calibrated amplitude values is shown below in Table 12 with reference to FIGS. 10B and 12A.
  • FIGS. 13A and 13B illustrate an example of amplitude calibration process performed by the Calibration Engine, in accordance with an embodiment of the disclosure.
  • FIG. 13B discloses a plurality of amplitude values A1, A2, A3, ... , An corresponding to which some extra noise is present with reference to the determined required vibration frequency.
  • the Amplitude Rectification unit 406G calibrates each of the amplitude values A1, A2, A3, ... , An such that the extra noises at A1, A2, A3, ... , An are removed.
  • the amplitude values A1, A2, A3, ... , An are calibrated without any change in the determined required vibration frequency.
  • the Amplitude Rectification unit 406G sets a final optimal amplitude value for the determined required vibration frequency having the minimized extra noise.
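  • The following sketch only illustrates the idea of iteratively lowering the drive amplitude until the estimated extra noise falls below the threshold, with the required vibration frequency left unchanged; the step size and the assumed noise-versus-amplitude relation are placeholders, not part of the disclosure.

```python
def rectify_amplitude(amplitude_db, extra_noise_db, noise_threshold_db, step_db=1.0):
    """Iteratively lower the drive amplitude until the estimated extra noise falls
    below the threshold, leaving the required vibration frequency unchanged.
    The linear noise-versus-amplitude assumption is purely illustrative."""
    while extra_noise_db > noise_threshold_db and amplitude_db > 0.0:
        amplitude_db -= step_db
        extra_noise_db -= step_db          # assume noise drops roughly with amplitude
    return amplitude_db

# Hypothetical values: 20 dB drive amplitude, 8 dB of extra noise, 5 dB threshold.
print(rectify_amplitude(20.0, 8.0, 5.0))   # 17.0
```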
  • the Sound Source Amplitude Selection Unit 406H selects calibrated amplitude values corresponding to the available audio sources from the list of the calibrated amplitude values generated by the Amplitude Rectification unit 406G. As an example with reference to the Table 12 and FIG. 10B, the Sound Source Amplitude Selection Unit 406H selects a calibrated amplitude value (15 dB) corresponding to the determined required vibration frequency for the jeep 1 and selects a calibrated amplitude value (10 dB) corresponding to the determined required vibration frequency for the jeep 3.
  • the Generation Engine 408 of the electronic device 400 includes a Source Selection & initialization unit 408A, a Frequency and Amplitude Assignment Unit 408B, a Production Unit 408C, and a Feedback & Learning Engine 408D.
  • the Source Selection & initialization unit 408A selects at least one audio source associated with the at least one content from the identified one or more contents.
  • the Source Selection & initialization unit 408A selects the at least one audio source from the audio sources available in the electronic device 400 for generation of the calibrated optimized vibration frequency.
  • the Frequency and Amplitude Assignment Unit 408B assigns the required frequency and the calibrated amplitude to the selected at least one audio source associated with the at least one content.
  • the required frequency can also be referred to as "the optimized vibration frequency” without deviating from the scope of the disclosure.
  • the Production Unit 408C generates first sound wavelets from the selected at least one audio source associated with the at least one content to generate the calibrated optimized vibration frequency at the content position of the identified one or more contents for the generation of the localized haptic feedback effect by using the one or more audio sources.
  • the first sound wavelets from the selected at least one audio source are generated by the Production Unit 408C using a constructive interference technique.
  • An example of the constructive interference technique will be explained with reference to FIGS. 14A and 14B.
  • FIGS. 14A and 14B illustrate an example process of generating sound wavelets by the Production Unit, in accordance with an embodiment of the disclosure.
  • a plurality of sound wavelets is produced from the audio source s1 and another plurality of sound wavelets is produced from the audio source s2 in order to form a point of intersection to generate the localized haptic feedback effect at the requisite vibration point 1400.
  • the similar process of generating the sound wavelets can be performed for the generation of the localized haptic feedback effect at the coordinates of the one or more objects. For example, as shown in FIG. 14B, a plurality of sound wavelets having wavelengths (λ1, λ2, and λ3) is generated respectively using three audio sources s1, s2, and s3 for the generation of the localized haptic feedback effect at the coordinates (x1,y1), (x2,y2), and (x3,y3) of the balloons displayed on the display screen corresponding to the locations 1, 2, and 3.
  • a sound wave produced from the audio source s1 should interfere constructively with the sound wavelets produced by the audio source s2. Therefore, for constructive interference at the requisite vibration point 1400, a single wavelet whose wavelength corresponds to the distance to the requisite vibration point 1400 is required from the audio source s1, together with multiple wavelets of smaller wavelength from the audio source s2, as sketched below.
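  • The sketch below shows one way such phase alignment could be computed, assuming free-field propagation and screen coordinates expressed in the same length unit as the wavelength; the function name and position values are illustrative assumptions, not the disclosed implementation.

```python
import math

def source_phase_for_target(source_xy, target_xy, wavelength):
    """Phase offset (radians) to apply to a source emitting cos(2*pi*f*t + phase)
    so that its wave arrives at the target point with zero phase; if every
    contributing source is phased this way, their waves add constructively
    at that point."""
    distance = math.dist(source_xy, target_xy)
    return (2.0 * math.pi * distance / wavelength) % (2.0 * math.pi)

# Assumed positions (same length unit as the wavelength) for sources s1, s2
# and a requisite vibration point:
s1, s2, target = (0.0, 0.0), (0.12, 0.0), (0.06, 0.05)
phi1 = source_phase_for_target(s1, target, wavelength=0.02)   # single longer wavelet from s1
phi2 = source_phase_for_target(s2, target, wavelength=0.005)  # shorter wavelets from s2
print(phi1, phi2)   # drive s1 and s2 with these phases so their waves reinforce at the target
```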
  • the Production Unit 408C may also generate second sound wavelets from the selected at least one audio source associated with the at least one content to generate the optimized vibration frequency at the content position of the identified one or more contents to cancel multiple coherence points of vibration due to the generation of the first sound wavelets for the generation of the localized haptic feedback effect.
  • the second sound wavelets from the selected at least one audio source are generated by the Production Unit 408C using a destructive interference technique.
  • FIG. 15 illustrates another example of generating sound wavelets by the Production Unit, in accordance with an embodiment of the disclosure.
  • In FIG. 15, there are two sound wavelets 1500 and 1502.
  • there are only two coherence points required (required surface intersection points 1506) for the generation of the localized haptic feedback effect.
  • other coherence points (extra vibration points 1504) are also present; these would generate the localized haptic feedback effect where it is not necessary or required.
  • To eliminate or cancel the effect of the extra vibration points 1504, the Production Unit 408C generates sound wavelets, different from the first sound wavelets, that cancel the unnecessary extra vibration points 1504, such that the localized haptic feedback effect is generated only at the required surface intersection points 1506; a simplified sketch of this cancellation is given below.
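  • In the simplified, single-point view sketched below, an additional wavelet is phased (equivalently, given a sign-inverted amplitude, i.e. a π phase shift) so that it arrives at an extra vibration point in anti-phase with the existing field and nulls it there. The positions, amplitudes, and wavelengths are assumed for illustration, and the sketch ignores the additional wavelet's effect at other points, which the disclosed destructive interference technique must also account for.

```python
import math

def superposed_amplitude(point, wavelets):
    """Instantaneous (t = 0) superposition at `point` of wavelets described by
    dicts with 'pos' (x, y), 'amp', 'wavelength' and 'phase' (radians)."""
    total = 0.0
    for w in wavelets:
        d = math.dist(point, w["pos"])
        total += w["amp"] * math.cos(2.0 * math.pi * d / w["wavelength"] - w["phase"])
    return total

def anti_wavelet(extra_point, source_pos, wavelength, existing_amplitude):
    """Second wavelet emitted from `source_pos`, phased to arrive at the unwanted
    coherence point with zero phase and carrying a sign-inverted amplitude
    (equivalent to a pi phase shift), cancelling the existing field at that point."""
    d = math.dist(source_pos, extra_point)
    phase = (2.0 * math.pi * d / wavelength) % (2.0 * math.pi)
    return {"pos": source_pos, "amp": -existing_amplitude, "wavelength": wavelength, "phase": phase}

# Hypothetical first sound wavelets and an extra (unwanted) vibration point:
first = [{"pos": (0.00, 0.00), "amp": 1.0, "wavelength": 0.02, "phase": 0.0},
         {"pos": (0.12, 0.00), "amp": 1.0, "wavelength": 0.02, "phase": 0.0}]
extra_point = (0.06, 0.03)
existing = superposed_amplitude(extra_point, first)
second = anti_wavelet(extra_point, (0.06, 0.10), 0.02, existing)
print(existing, superposed_amplitude(extra_point, first + [second]))  # second value is ~0
```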
  • the Approximation Engine 404 extracts audio content from the one or more contents identified by the Identification Engine 402. After the extraction of the audio content, the Generation Engine 408 generates a vibration pattern based on the required frequency and the calibrated amplitude. After the generation of the vibration pattern, the Generation Engine 408 calculates a point of coherence from the generated vibration pattern and further removes the calculated point of coherence from the generated vibration pattern.
  • the Generation Engine 408 obtains a unique vibration frequency wave based on the removal of the calculated point of coherence and merges the obtained vibration frequency with a current sound wave. Finally, the Generation Engine 408 generates the determined optimized vibration frequency at the content position of the identified one or more contents by merging of the obtained unique vibration frequency wave with the audio content from the identified one or more contents.
  • FIGS. 16A, 16B, 16C, 16D, 16E, and 16F illustrate an example process of obtaining a unique vibration frequency, in accordance with an embodiment of the disclosure.
  • FIG. 16A discloses an example of a normal sound wave for the audio content and
  • FIG. 16B discloses an example of a low-frequency sound wave for the generation of the vibration pattern.
  • FIG. 16C discloses a digital representation of the normal sound wave for the audio content and
  • FIG. 16D discloses a digital representation of the low-frequency sound wave.
  • the Approximation Engine 404 extracts the normal sound wave for the audio content. Further using the low-frequency sound wave, the Generation Engine 408 generates the vibration pattern based on the required frequency, the calibrated amplitude, and scene analysis of the one or more contents displayed on the display screen.
  • FIG. 16E discloses points of coherence calculated by the Generation Engine 408 using merged digital waveform of the normal sound wave and the low-frequency sound wave. After calculating the points of coherence, the Generation Engine 408 updates the merged digital waveform to remove the calculated point of coherence from the merged digital waveform as shown in FIG. 16F. Accordingly, as an outcome of the updated digital waveform, the Generation Engine 408 obtains the unique vibration frequency wave.
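  • The following sketch mirrors that flow in simplified form: it digitally merges a normal audio wave with a low-frequency vibration pattern, marks samples where the two reinforce each other as points of coherence, and removes the vibration contribution at those samples to obtain a cleaned merged waveform. The thresholding rule, signal parameters, and function name are illustrative assumptions rather than the disclosed algorithm.

```python
import numpy as np

def merge_and_remove_coherence(audio, vibration, reinforcement_factor=1.5):
    """Merge the digital representations of a normal audio wave and a
    low-frequency vibration pattern, mark samples where the merged magnitude
    spikes because both waves reinforce each other (points of coherence), and
    drop the vibration contribution at those samples."""
    audio = np.asarray(audio, dtype=float)
    vibration = np.asarray(vibration, dtype=float)
    merged = audio + vibration
    coherent = np.abs(merged) > reinforcement_factor * np.maximum(np.abs(audio), np.abs(vibration))
    cleaned = merged.copy()
    cleaned[coherent] = audio[coherent]        # remove the calculated points of coherence
    return cleaned, np.flatnonzero(coherent)

# Illustrative signals: a 200 Hz "audio" tone and a 20 Hz vibration pattern
t = np.linspace(0.0, 0.1, 4_000, endpoint=False)
audio = 0.8 * np.sin(2 * np.pi * 200 * t)
vibration = 0.8 * np.sin(2 * np.pi * 20 * t)
unique_wave, coherence_idx = merge_and_remove_coherence(audio, vibration)
print(coherence_idx.size, "points of coherence removed")
```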
  • in the absence of the audio content, the localized haptic feedback effect on the determined one or more haptic event locations will be generated by the Generation Engine 408 using outputs of the Identification Engine 402, the Approximation Engine 404, and the Calibration Engine 406, by the process explained above.
  • the Feedback & Learning Engine 408D monitors, over a predetermined period of time, the generation of the localized haptic feedback effect on the determined one or more haptic event locations.
  • the Feedback & Learning Engine 408D may also perform an intelligent learning and feedback process to identify any optimization of the amplitude values that reduces the extra noise, or any frequency correction that more accurately locates the one or more haptic event locations.
  • the Feedback & Learning Engine 408D may also send feedback to the Calibration Engine 406 based on the intelligent learning and feedback process to calibrate frequency and amplitude values.
  • FIGS. 17A, 17B, and 17C illustrate examples of generating a localized haptic feedback effect on the determined one or more haptic event locations, in accordance with an embodiment of the disclosure.
  • FIG. 17A discloses a first example of the generation of the localized haptic feedback effect.
  • three localized haptic feedback effects (Localized vibration 1700, Localized vibration 1702, and Localized vibration 1704) are generated with varied intensity in accordance with the embodiments of the disclosure.
  • Each of the Localized Vibrations 1700, 1702, and 1704 has an intensity in accordance with the action classes of the corresponding objects in the displayed multimedia scene.
  • Further, as shown in FIG. 17B, a second example of the generation of the localized haptic feedback effect is disclosed.
  • the localized vibrations are produced using the audio content associated with the one or more contents using the process explained above with reference to FIGS. 16A through 16F.
  • FIG. 17C discloses a third example of the generation of the localized haptic feedback effect.
  • the localized vibration 1706 is generated using audio source 1708.
  • the localized vibrations can be produced at multiple points of the one or more contents displayed on the display screen without any DC motor.
  • the electronic device 400 of the disclosure results in the generation of the localized haptic feedback effect due to which vibration occurs only at a specific event location.
  • the localized haptic feedback effect generated by the electronic device 400 can also help a clumsy or less adept user grasp the event locations regarding an event.
  • the localized haptic feedback effect generated by the electronic device 400 may also enhance the user experience by providing a real-time localized haptic feedback effect to the user.
  • the method 200 and the electronic device 2000 or 400 can enhance the user experience.
  • the localized vibration effect can provide a vibration effect comparable to Dolby sound and hence can be a well-liked user experience.
  • Another exemplary advantage of the electronic device 2000 or 400 of the disclosure is cost-effectiveness due to the removal of the DC motor. Since the DC motor is no longer required, the power consumption of the system can also be optimized.
  • the localized haptic feedback effect generated by the method 200 and the electronic device 2000 or 400 provides intelligent accessibility of the one or more contents displayed on the display screen.
  • FIG. 18 illustrates a first use case of providing a real-time localized haptic feedback effect to a user, in accordance with an embodiment of the disclosure.
  • the device in FIG. 18 has two sound sources S1 and S2 as the available audio sources, with frequencies (f1 and f2) and wavelengths (λ1 and λ2), respectively.
  • Sound source S1 emits a wave of wavelength λ1 and sound source S2 emits a wave of wavelength λ2, with respective phases such that they interfere at the desired location in the respective regions of interest.
  • the resultant sound waves x1 and x2 are recalculated for the audio sources S1 and S2 each time there is a change in the content displayed on the display screen.
  • the respective calculated resultant sound waves x1 and x2 are used for generating vibrations at specific coordinates of the display screen with reference to the objects (man 1800, trolley 1802, and Swipe arrows 1804, depicted as black-colored arrows).
  • the calibrated frequency value required for the vibration is 470kHz
  • such real-time vibrations may notify the user regarding an input operation by generating the vibrations at the specific coordinates of the display screen while playing the game and may enhance the experience of the user while playing the game.
  • FIG. 19 illustrates a second use case of providing a real-time localized haptic feedback effect to the user, in accordance with an embodiment of the disclosure.
  • FIG. 19 discloses an example use case for generating real-time localized haptic feedback effect while watching video content.
  • when a car blast scene is displayed on the display screen, the Generation Engine 408 generates a real-time localized haptic feedback effect (Localized Vibration 1900) at the location of the car in the car blast scene with a high vibration intensity, such that the user can experience the blast along with the reproduction of the video content. Accordingly, such a real-time localized haptic feedback effect may enhance the user experience while watching the video content.
  • FIG. 20 illustrates a third use case of providing a real-time localized haptic feedback effect to the user, in accordance with an embodiment of the disclosure.
  • FIG. 20 discloses an example use case for generating real-time localized haptic feedback effect during online payment.
  • a localized vibration 2000 is generated at the location of the objects (copy icon 2002 and delete icon 2004) using sound wavelets of sound sources S1 and S2. Accordingly, at the time of an online payment, the user can be notified with a real-time vibration effect indicating which icon to use to perform the further operations for the online payment, and hence the user experience is enhanced by such a vibration effect.
  • the electronic device 400 of the disclosure can also generate multidimensional vibrations during online book reading or online document reading to specify or highlight a specific location of the content. Accordingly, the user experience during online reading can be enhanced.
  • the disclosure can also be applied to display devices used in vehicles.
  • the disclosure is not limited to the use case examples described above with regard to display-based electronic devices. It can also be applied to other technological fields, for example, health care systems, audio devices, and any other electronic devices including audio sources.
  • FIG. 21 illustrates a block diagram of an electronic device that executes the processes of FIG. 2 and FIG. 4, in accordance with an embodiment of the disclosure.
  • a Central Processing Unit (CPU) 2102, a ROM (Read Only Memory) 2104, and a RAM (Random Access Memory) 2106 are connected by a Bus 2122.
  • the CPU 2102 may be implemented as a processor.
  • the input unit 2110 includes a keyboard, a mouse, a microphone, and the like.
  • the output unit 2112 includes a display, a speaker, and the like.
  • the storage unit 2114 includes a nonvolatile memory and the like.
  • the communication unit 2116 includes a network interface or the like.
  • the drive 2118 drives a removable medium 2120 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like.
  • the CPU 2102, the ROM 2104, and the RAM 2106 are communicatively coupled with the input unit 2110, the output unit 2112, the storage unit 2114, the communication unit 2116, and the drive 2118 via the input/output interface 2108.
  • the electronic device 2100 may also include one or more processors to execute the series of processes described hereinabove with reference to the electronic device 200 and the electronic device 400.
  • the disclosure refers to a method for generating a localized haptic feedback effect in an electronic device.
  • the method comprises identifying, by an identification engine, one or more contents on a display screen of the electronic device and dividing, by the identification engine, the display screen into a plurality of grids.
  • the method further comprises determining, by the identification engine, one or more haptic event locations, associated with the identified one or more contents, on the plurality of grids, and classifying, by an approximation engine, one or more haptic events associated with the identified content with respect to a level of haptic feedback to be generated at the one or more haptic event locations.
  • the method comprises determining, by a calibration engine, an optimized vibration frequency according to the classified level of haptic feedback to be generated at the one or more haptic event locations for the identified one or more contents based on at least one of a plurality of environmental parameters and a capability of generation of sound by one or more audio sources available in the electronic device. Additionally, after the determination of the optimized vibration frequency, the method comprises generating, by a generation engine, a calibrated optimized vibration frequency based on the determined optimized vibration frequency by calibration of the optimized vibration frequency and amplitude for the generation of the localized haptic feedback effect on the determined one or more haptic event locations by using the one or more audio sources available in the electronic device.
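  • For orientation only, the following sketch strings the claimed steps together in code form. Every function body is a placeholder standing in for the corresponding engine described above, and all names, grid sizes, frequency values, and amplitude values are assumptions chosen to echo the examples in this disclosure, not its actual implementation.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class HapticEvent:
    content: str
    grid_cell: Tuple[int, int]     # haptic event location on the grid
    level: str = "low"             # classified level of haptic feedback
    frequency_hz: float = 0.0      # optimized vibration frequency
    amplitude_db: float = 0.0      # calibrated amplitude

def identify_contents(frame: Dict) -> List[Dict]:
    # stand-in for the identification engine: each content has a name, position and action
    return frame["objects"]

def to_grid_cell(pos, screen, grid) -> Tuple[int, int]:
    # divide the display screen into grid[0] x grid[1] cells and map a position into one
    return (int(pos[0] / screen[0] * grid[0]), int(pos[1] / screen[1] * grid[1]))

def classify_level(obj: Dict) -> str:
    # stand-in for the approximation engine's action-class based classification
    return {"blast": "high", "moving": "medium"}.get(obj.get("action", ""), "low")

def optimize_frequency(level: str, env_params: Dict, source_capability_hz: float) -> float:
    # stand-in for the calibration engine: never exceed what the audio sources can generate
    base = {"high": 470e3, "medium": 400e3, "low": 348e3}[level]
    return min(base * env_params.get("surface_factor", 1.0), source_capability_hz)

def generate_localized_haptics(frame, env_params, source_capability_hz, grid=(8, 16)) -> List[HapticEvent]:
    events = []
    for obj in identify_contents(frame):
        cell = to_grid_cell(obj["pos"], frame["screen"], grid)
        level = classify_level(obj)
        freq = optimize_frequency(level, env_params, source_capability_hz)
        amp = {"high": 15.0, "medium": 12.0, "low": 10.0}[level]   # placeholder calibrated amplitude (dB)
        events.append(HapticEvent(obj["name"], cell, level, freq, amp))
    return events   # handed to the generation engine, which drives the audio sources

frame = {"screen": (1080, 2400),
         "objects": [{"name": "jeep 1", "pos": (300, 1200), "action": "moving"},
                     {"name": "car", "pos": (700, 600), "action": "blast"}]}
print(generate_localized_haptics(frame, {"surface_factor": 1.0}, source_capability_hz=500e3))
```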
  • the disclosure refers to an electronic device for generating a localized haptic feedback effect in an electronic device.
  • the electronic device includes an identification engine that identifies one or more contents on a display screen of the electronic device, divides the display screen into a plurality of grids, and determines one or more haptic event locations associated with the identified one or more contents on the plurality of grids.
  • the electronic device further includes an approximation engine that classifies one or more haptic events associated with the identified content with respect to a level of haptic feedback to be generated at the one or more haptic event locations.
  • the electronic device further includes a calibration engine that determines an optimized vibration frequency according to the classified level of haptic feedback to be generated at the one or more haptic event locations for the identified one or more contents based on at least one of a plurality of environmental parameters and a capability of generation of sound by one or more audio sources available in the electronic device.
  • the electronic device further includes a generation engine that generates a calibrated optimized vibration frequency based on the determined optimized vibration frequency by calibration of the optimized vibration frequency and amplitude for the generation of the localized haptic feedback effect on the determined at least one haptic event location by using the one or more audio sources available in the electronic device.
  • Some example embodiments disclosed herein may be implemented using processing circuitry.
  • some example embodiments disclosed herein may be implemented using at least one software program running on at least one hardware device and performing network management functions to control the elements.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure relates to a method and an electronic device for generating a localized haptic feedback effect in an electronic device. The electronic device comprises a display for displaying one or more contents and a processor that identifies one or more contents on the display of the electronic device, divides the display screen into a plurality of grids, and determines one or more haptic event locations associated with the identified one or more contents on the plurality of grids. The electronic device further classifies one or more haptic events associated with the identified content with respect to a level of haptic feedback to be generated at the one or more haptic event locations, determines an optimized vibration frequency according to the classified level of haptic feedback to be generated at the one or more haptic event locations for the identified one or more contents based on at least one of a plurality of environmental parameters and a capability of generation of sound by one or more audio sources available in the electronic device, and generates a calibrated optimized vibration frequency based on the determined optimized vibration frequency, by calibration of the optimized vibration frequency and amplitude, for the generation of the localized haptic feedback effect on the determined one or more haptic event locations by using the one or more audio sources available in the electronic device.
PCT/KR2022/012144 2021-08-13 2022-08-12 Procédé et appareil de génération de vibrations localisées WO2023018309A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202111036812 2021-08-13
IN202111036812 2021-08-13

Publications (1)

Publication Number Publication Date
WO2023018309A1 true WO2023018309A1 (fr) 2023-02-16

Family

ID=85200148

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/012144 WO2023018309A1 (fr) 2021-08-13 2022-08-12 Procédé et appareil de génération de vibrations localisées

Country Status (1)

Country Link
WO (1) WO2023018309A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110049416A (ko) * 2009-11-05 2011-05-12 주식회사 팬택 진동 피드백 제공 단말 및 그 방법
US20130038603A1 (en) * 2011-08-09 2013-02-14 Sungho Bae Apparatus and method for generating sensory vibration
US20150348378A1 (en) * 2014-05-30 2015-12-03 Obana Kazutoshi Information processing system, information processing apparatus, storage medium having stored therein information processing program, and information processing method
US20200128236A1 (en) * 2013-03-15 2020-04-23 Immersion Corporation Method and apparatus for encoding and decoding haptic information in multi-media files
US20210186219A1 (en) * 2018-09-11 2021-06-24 Sony Corporation Information processing device, information processing method, and recording medium


Similar Documents

Publication Publication Date Title
WO2011096694A2 (fr) Procédé et appareil de fourniture d'interface utilisateur utilisant un signal acoustique, et dispositif comprenant une interface utilisateur
WO2018088806A1 (fr) Appareil de traitement d'image et procédé de traitement d'image
WO2019059505A1 (fr) Procédé et appareil de reconnaissance d'objet
WO2010050693A2 (fr) Appareil d'interface pour générer une commande de régulation par toucher et déplacement, système d'interface comprenant l'appareil d'interface et procédé d'interface consistant à utiliser cet appareil
WO2017034116A1 (fr) Terminal mobile et procédé de commande de celui-ci
WO2013191484A1 (fr) Appareil de commande à distance et procédé de commande associé
WO2015199280A1 (fr) Terminal mobile et son procédé de commande
EP2946562A1 (fr) Appareil d'affichage et procédé de reconnaissance de mouvement associé
WO2018070624A2 (fr) Terminal mobile et son procédé de commande
WO2019124963A1 (fr) Dispositif et procédé de reconnaissance vocale
WO2014038824A1 (fr) Procédé de modification de la position d'un objet et dispositif électronique à cet effet
WO2016114432A1 (fr) Procédé de traitement de sons sur la base d'informations d'image, et dispositif correspondant
WO2016182361A1 (fr) Procédé de reconnaissance de gestes, dispositif informatique et dispositif de commande
WO2014157757A1 (fr) Dispositif de saisie mobile et procédé de saisie utilisant ce dernier
WO2021054589A1 (fr) Appareil électronique et procédé de commande associé
WO2018030567A1 (fr) Hmd et son procédé de commande
WO2015125993A1 (fr) Terminal mobile et son procédé de commande
WO2021118225A1 (fr) Dispositif d'affichage et son procédé de fonctionnement
WO2019135553A1 (fr) Dispositif électronique, son procédé de commande et support d'enregistrement lisible par ordinateur
WO2020149600A1 (fr) Dispositif électronique et procédé de fonctionnement associé
WO2016111588A1 (fr) Dispositif électronique et son procédé de représentation de contenu web
WO2016195197A1 (fr) Terminal à stylet et procédé de commande associé
WO2016080662A1 (fr) Procédé et dispositif de saisie de caractères coréens sur la base du mouvement des doigts d'un utilisateur
WO2023018309A1 (fr) Procédé et appareil de génération de vibrations localisées
WO2021040201A1 (fr) Dispositif électronique et procédé de commande de celui-ci

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22856302

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE