WO2023090831A1 - Electronic device for outputting sound based on user input and operating method thereof
- Publication number
- WO2023090831A1 (PCT/KR2022/018030)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- electronic device
- various embodiments
- sound
- audio
- input
- Prior art date
Classifications
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
- G06F3/04847—Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
- G06F3/16—Sound input; Sound output
- G10H1/00—Details of electrophonic musical instruments
- G11B20/10—Digital recording or reproducing
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/34—Indicating arrangements
Definitions
- the present disclosure relates to an electronic device that provides sound based on a user input and an operating method thereof.
- an object is to provide an immersive audio source reproduction method that gives the user a natural sense of musical liveliness.
- another object is to provide an immersive audio source reproduction method that imparts liveliness to music based on user interaction.
- another object is to provide an immersive audio source reproduction method that offers users a variety of musical experiences.
- another object is to provide an immersive audio source reproduction method that offers the user a proactive and diverse musical experience.
- another object is to provide an audio source playback method for non-linear playback of a set of audio sources.
- a method of operating an electronic device includes: obtaining a music file corresponding to a plurality of audio sources; outputting a first sound having a first frequency spectrum corresponding to a first playback time associated with the music file while displaying a first screen including at least one object; and identifying a user's input to the at least one object while outputting the first sound.
- when a first condition is satisfied by the identified input: if the identified input is a first input, outputting a second sound having a second frequency spectrum while displaying a second screen to which a first visual effect is applied; and, if the identified input is a second input, outputting a third sound having a third frequency spectrum while displaying a third screen to which a second visual effect is applied.
- when a second condition is satisfied by the identified input: outputting a second sound having a second frequency spectrum corresponding to a second playback time associated with the music file. An operating method including the above operations may be provided.
- a method of operating an electronic device includes: obtaining a music file corresponding to a plurality of audio sources; outputting a first sound having a first frequency spectrum based on the music file while displaying a first screen including at least one first object; and identifying a specific type of event while outputting the first sound and while no user input for the at least one first object is received.
- when the type of the identified event is a first type for controlling sound: changing a first auditory property of the first sound, and changing a second auditory property of the first object based on the change of the first auditory property. An operating method including the above operations may be provided.
- the solutions to the problems are not limited to those described above, and solutions not mentioned will be clearly understood by those skilled in the art from this specification and the accompanying drawings.
- a realistic audio source reproduction method may be provided that induces a natural musical change for the user within a limited time and space based on interaction with the user, and that gives a sense of liveliness through such free musical change.
- a realistic audio source reproduction method may be provided in which a selective change to an audio source is provided based on user interaction to create a user-led musical experience, while a non-selective change to the audio source is also provided to offer the user various musical experiences.
- a realistic audio source reproduction method may be provided in which the user is given the possibility of continuous change to the audio source to create a user-led musical experience, while the possibility of arbitrary change to the audio source is also given to create various musical experiences for the user.
- an audio source reproducing method for non-linearly reproducing at least one audio source set based on a user interaction may be provided.
- FIG. 1 is a diagram for explaining an example of an interactive music listening system according to various embodiments.
- 2A is a diagram for explaining an example of a configuration of an electronic device according to various embodiments.
- 2B is a diagram for explaining an example of an operation of an electronic device according to execution of an application according to various embodiments.
- FIG. 3 is a diagram for explaining an example of an audio source set (or audio source) and content for each stage according to various embodiments.
- FIG. 4 is a diagram for explaining an example of distribution of an interactive music listening file according to various embodiments.
- FIG. 5 is a diagram for explaining an example of an operation of an electronic device according to various embodiments.
- 6A is a flowchart illustrating an example of an operation of an electronic device according to various embodiments.
- 6B is a diagram for explaining an example of an operation of controlling contents when a user input of an electronic device is received, according to various embodiments.
- FIG. 7A is a flowchart illustrating an example of an operation of an electronic device according to various embodiments.
- 7B is a diagram for explaining an example of an operation of controlling contents when information other than a user input of an electronic device is received, according to various embodiments.
- 8A is a flowchart illustrating an example of an operation of an electronic device according to various embodiments.
- 8B is a diagram for explaining an example of a mode switching operation of an electronic device according to various embodiments.
- FIG. 9 is a diagram for explaining an example of an operation of an electronic device according to various embodiments.
- FIG. 10 is a flowchart illustrating an example of an operation of an electronic device according to various embodiments.
- 11A is a diagram for explaining an example of visual content provided by an electronic device according to various embodiments.
- 11B is a diagram for explaining another example of visual content provided by an electronic device according to various embodiments.
- 11C is a diagram for explaining another example of visual content provided by an electronic device according to various embodiments.
- 12A is a flowchart illustrating an example of an operation of an electronic device according to various embodiments.
- 12B is a diagram for explaining an example of an operation of setting the number of main objects of an electronic device according to various embodiments.
- 12C is a diagram for explaining an example of an operation of controlling a property of an audio source according to movement of a main object of an electronic device, according to various embodiments.
- 13A is a flowchart illustrating an example of an operation of an electronic device according to various embodiments.
- 13B is a diagram for explaining an example of an operation of generating a new object of an electronic device, according to various embodiments.
- 13C is a diagram for explaining an example of an operation of creating a new object of an electronic device, according to various embodiments.
- 14A is a flowchart illustrating an example of an operation of an electronic device according to various embodiments.
- 14B is a diagram for explaining an example of an operation of controlling a location of an object when a user's input of an electronic device is received on a background screen, according to various embodiments.
- 15 is a flowchart illustrating an example of an operation of an electronic device according to various embodiments.
- 16 is a flowchart illustrating an example of an operation of an electronic device according to various embodiments.
- 17 is a flowchart illustrating an example of an operation of an electronic device according to various embodiments.
- 18 is a diagram for explaining an example of an operation of assigning an audio property to each main graphic object of an electronic device according to various embodiments.
- 19 is a diagram for explaining an example of an operation of an electronic device according to various embodiments.
- 20 is a flowchart illustrating an example of an operation of an electronic device according to various embodiments.
- 21A is a diagram for explaining an example of an operation of providing visual content of an electronic device according to various embodiments.
- 21B is a diagram for explaining an example of a chain interaction of electronic devices, according to various embodiments.
- 22 is a flowchart illustrating an example of an operation of an electronic device according to various embodiments.
- 23A is a diagram for explaining an example of an operation of providing an avatar for control based on a distance between users of electronic devices, according to various embodiments.
- 23B is a diagram for explaining another example of an operation of providing an avatar for control based on a distance between users of electronic devices, according to various embodiments.
- 24 is a flowchart illustrating an example of an operation of an electronic device according to various embodiments.
- 25 is a diagram for explaining an example of a screen switching operation of an electronic device according to various embodiments.
- 26 is a flowchart illustrating an example of an operation of an electronic device according to various embodiments.
- 27 is a diagram for explaining examples of types of user interaction, according to various embodiments.
- 28 is a flowchart illustrating an example of an operation of an electronic device according to various embodiments.
- 29 is a diagram for explaining an example of an operation of controlling visual content and auditory content based on the type of an identified object of an electronic device, according to various embodiments.
- FIG. 30 is a flowchart for describing an example of an operation of an electronic device according to various embodiments.
- 31 is a diagram for explaining an example of an operation of controlling visual content and auditory content based on the type of an identified object of an electronic device, according to various embodiments.
- 32 is a flowchart for describing an example of an operation of an electronic device according to various embodiments.
- 33 is a diagram for explaining an example of a switching operation between a user interaction mode and a user non-interaction mode of an electronic device, according to various embodiments.
- 34 is a diagram for explaining an example of an operation of an electronic device, according to various embodiments.
- 35 is a diagram for explaining an example of content provided by an electronic device according to various embodiments.
- 36 is a flowchart for describing an example of an operation of an electronic device according to various embodiments.
- FIG. 37 is a diagram for explaining an example of an event occurrence point and a time interval identified based on a musical unit of an electronic device, according to various embodiments.
- 38A is a diagram for explaining an example of an operation of providing a visual effect to guide a time section of an electronic device, according to various embodiments.
- 38B is a diagram for explaining an example of an operation of providing a visual effect to guide a time section of an electronic device, according to various embodiments.
- 38C is a diagram for explaining an example of an operation of providing an effect based on a user's input for each time interval for a main character of an electronic device, according to various embodiments.
- 39 is a flowchart for describing an example of an operation of an electronic device according to various embodiments.
- 40 is a diagram for explaining an example of a creation operation of a main graphic object of an electronic device, according to various embodiments.
- 41A is a diagram for explaining an example of a motion generation operation of an electronic device, according to various embodiments.
- 41B is a diagram for explaining an example of a motion control operation of an electronic device, according to various embodiments.
- FIG. 42 is a flowchart illustrating an example of an operation of an electronic device according to various embodiments.
- 43 is a diagram for explaining an example of an operation of providing a motion based on a musical unit of auditory content of an electronic device, according to various embodiments.
- 44 is a flowchart illustrating an example of an operation of an electronic device according to various embodiments.
- 45 is a diagram for explaining an example of activating a sub-object based on a location of a main character in an electronic device, according to various embodiments.
- 46 is a flowchart for describing an example of an operation of an electronic device according to various embodiments.
- 47 is a diagram for explaining an example of activating audio source playback according to sub-object acquisition of an electronic device, according to various embodiments.
- 48 is a flowchart for explaining an example of an operation of an electronic device, according to various embodiments.
- FIG. 49 is a diagram for explaining examples of reproducing auditory content according to movement of the main character, and of moving the main character according to reverse reproduction of the auditory content, in an electronic device according to various embodiments.
- 50 is a flowchart for describing an example of an operation of an electronic device according to various embodiments.
- 51 is a flowchart for describing an example of an operation of an electronic device according to various embodiments.
- 52 is a flowchart illustrating an example of an operation of an electronic device and a server, according to various embodiments.
- 53 is a diagram for explaining an example of a multi-play scenario of an electronic device, according to various embodiments.
- 54 is a diagram for explaining an example of a player (or application) including a source pool, an input pool, and a plurality of transports of an electronic device, according to various embodiments.
- 55A is a diagram for explaining an example of an operation of one transport, according to various embodiments.
- 55B is a diagram for explaining an example of an operation of one transport, according to various embodiments.
- 56 is a flowchart illustrating an example of an operation of an electronic device and a server, according to various embodiments.
- 57A is a diagram for explaining an example of a conversion operation between a basic transport and an event transport of an electronic device, according to various embodiments.
- 57B is a diagram for explaining an example of an operation according to movement of a main character on visual content provided based on event transport of an electronic device, according to various embodiments.
- 57C is a diagram for explaining another example of a conversion operation between a basic transport and an event transport of an electronic device, according to various embodiments.
- 58 is a flowchart for explaining an example of an operation of an electronic device and a server, according to various embodiments.
- 59A is a diagram for explaining an example of a conversion operation between a basic transport and an event transport of an electronic device, according to various embodiments.
- 59B is a diagram for explaining an example of an operation according to movement of a main character on visual content provided based on event transport of an electronic device, according to various embodiments.
- 60 is a flowchart illustrating an example of an operation of an electronic device and a server, according to various embodiments.
- 61 is a flowchart for describing an example of an operation of an electronic device and a server, according to various embodiments.
- 62A is a diagram for explaining an example of a time point at which graphic data is provided and a time point at which an audio source is reproduced, according to various embodiments.
- 62B is a diagram for explaining an example of an activation period according to various embodiments.
- 63 is a flowchart for describing an example of an operation of an electronic device and a server, according to various embodiments.
- 64 is a diagram for explaining an example of a platform file exchanged by a server, according to various embodiments.
- 65A is a diagram for explaining an example of an interface provided by a server, according to various embodiments.
- 65B is a diagram for explaining an example of information for each array acquired by a server, according to various embodiments.
- 66 is a flowchart for describing an example of an operation of an electronic device according to various embodiments.
- 67 is a flowchart illustrating an example of operations of a plurality of electronic devices (eg, an electronic device and an external electronic device) and a server, according to various embodiments.
- 68 is a flowchart for explaining an example of an operation of an electronic device, according to various embodiments.
- 69 is a diagram for explaining another example of a platform file according to various embodiments.
- 70A is a diagram for explaining an example of visual content provided by a server, according to various embodiments.
- 70B is a diagram for explaining an example of visual content provided by a server, according to various embodiments.
- 71 is a diagram for explaining an example of a messenger function provided by a server, according to various embodiments.
- 72 is a diagram for explaining an example of visual content that can be provided by an interactive music listening system according to various embodiments.
- Electronic devices may be devices of various types.
- the electronic device may include, for example, a portable communication device (eg, a smart phone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance.
- terms such as "first" and "second" may simply be used to distinguish a component from other corresponding components, and do not limit that component in other respects (eg, importance or order).
- when a (eg, first) component is referred to as being "coupled" or "connected" to another (eg, second) component, with or without the terms "functionally" or "communicatively", it means that the component may be connected to the other component directly (eg, by wire), wirelessly, or through a third component.
- the term "module" used in various embodiments of this document may include a unit implemented in hardware, software, or firmware, and may be used interchangeably with terms such as, for example, logic, logical block, part, or circuit.
- a module may be an integrally constructed component or a minimal unit of components or a portion thereof that performs one or more functions.
- the module may be implemented in the form of an application-specific integrated circuit (ASIC).
- a processor of a device (eg, an electronic device) may invoke at least one of one or more instructions stored in a storage medium and execute it. This enables the device to be operated to perform at least one function according to the at least one instruction invoked.
- the one or more instructions may include code generated by a compiler or code executable by an interpreter.
- the device-readable storage medium may be provided in the form of a non-transitory storage medium.
- the term "non-transitory" merely means that the storage medium is a tangible device and does not contain a signal (eg, an electromagnetic wave); it does not distinguish between data being stored semi-permanently in the storage medium and data being stored temporarily.
- the method according to various embodiments disclosed in this document may be provided by being included in a computer program product.
- Computer program products may be traded between sellers and buyers as commodities.
- a computer program product may be distributed in the form of a device-readable storage medium (eg, compact disc read only memory (CD-ROM)), or distributed (eg, downloaded or uploaded) online through an application store (eg, Play Store™) or directly between two user devices (eg, smart phones).
- at least part of the computer program product may be temporarily stored or temporarily created in a device-readable storage medium such as a manufacturer's server, an application store server, or a relay server's memory.
- each of the above-described components (eg, a module or a program) may include a single entity or a plurality of entities, and some of the plurality of entities may be separately disposed in other components.
- one or more components or operations among the aforementioned corresponding components may be omitted, or one or more other components or operations may be added.
- according to various embodiments, a plurality of components (eg, modules or programs) may be integrated into a single component. In this case, the integrated component may perform one or more functions of each of the plurality of components identically or similarly to the way they were performed by the corresponding component prior to the integration.
- operations performed by a module, program, or other component may be executed sequentially, in parallel, iteratively, or heuristically; one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added.
- according to various embodiments, a method of operating an electronic device includes: obtaining a music file corresponding to a plurality of audio sources; outputting a first sound having a first frequency spectrum corresponding to a first playback time associated with the music file while displaying a first screen including at least one object; and identifying a user's input to the at least one object while outputting the first sound.
- when a first condition is satisfied by the identified input: if the identified input is a first input, outputting a second sound having a second frequency spectrum while displaying a second screen to which a first visual effect is applied; and, if the identified input is a second input, outputting a third sound having a third frequency spectrum while displaying a third screen to which a second visual effect is applied.
- when a second condition is satisfied by the identified input: outputting a second sound having a second frequency spectrum corresponding to a second playback time associated with the music file. An operating method including the above operations may be provided.
- the method may include obtaining the plurality of audio sources based on the music file, wherein the plurality of audio sources includes a first audio source set corresponding to the first playback time associated with the music file and a second audio source set corresponding to the second playback time associated with the music file; outputting the first sound having the first frequency spectrum based on the first audio source set; and outputting the second sound having the second frequency spectrum based on the second audio source set.
- when the first condition is satisfied by the identified input: outputting the sound having the first frequency spectrum based on outputting a specific audio source and/or applying a specific audio effect; and when the second condition is satisfied by the identified input: outputting the second sound having the second frequency spectrum based on changing the playing audio source set from the first audio source set to the second audio source set. An operating method including the above operations may be provided.
- determining whether the first condition or the second condition is satisfied by the identified input may include: identifying the number of times the input for the at least one object has been identified; comparing the identified number of inputs with a preset threshold number; and determining whether the first condition or the second condition is satisfied based on the comparison result.
- when the number of identified inputs is less than the preset threshold number, it may be determined that the first condition is satisfied; and when the number of identified inputs is greater than the preset threshold number, it may be determined that the second condition is satisfied.
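- as an illustration of this count-and-compare logic, the following is a minimal sketch; all names and the threshold value are assumptions chosen for illustration and are not taken from the patent:

```python
# Hypothetical sketch of the input-count condition described above.
TAP_THRESHOLD = 5  # preset threshold number (assumed value)

class InteractivePlayer:
    def __init__(self, first_audio_set, second_audio_set):
        self.current_set = first_audio_set   # audio source set for the first playback time
        self.next_set = second_audio_set     # audio source set for the second playback time
        self.tap_count = 0                   # number of identified inputs on the object

    def on_object_input(self):
        """Called whenever a user's input to the object is identified."""
        self.tap_count += 1
        if self.tap_count < TAP_THRESHOLD:
            # First condition: keep the current set and layer a specific
            # audio source and/or audio effect on top of it.
            self.apply_audio_effect("one_shot_fx")
        else:
            # Second condition: change the playing audio source set, i.e.
            # move from the first playback time to the second playback time.
            self.current_set = self.next_set
            self.tap_count = 0

    def apply_audio_effect(self, name):
        print(f"playing {self.current_set} with effect {name}")
```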
- according to various embodiments, a method of operating an electronic device includes: obtaining a music file corresponding to a plurality of audio sources; outputting a first sound having a first frequency spectrum based on the music file while displaying a first screen including at least one first object; and identifying a specific type of event while outputting the first sound and while no user input for the at least one first object is received.
- when the type of the identified event is a first type for controlling sound: changing a first auditory property of the first sound, and changing a second auditory property of the first object based on the change of the first auditory property. An operating method including the above operations may be provided.
- identifying the specific type of event may include: identifying information related to the time of outputting the first sound while outputting the first sound; and identifying the specific type of event when the information related to the time satisfies a specific condition.
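- a minimal sketch of such a time-based event check follows; the function name, the 30-second window, and the event label are assumptions for illustration only:

```python
# Hypothetical example: identify a specific type of event when information
# related to the output time of the first sound satisfies a condition.
def identify_time_event(elapsed_seconds: float,
                        no_input_window: float = 30.0) -> str | None:
    # Assumed condition: a first-type (sound-controlling) event is identified
    # after the first sound has been output for `no_input_window` seconds
    # without any user input on the object.
    if elapsed_seconds >= no_input_window:
        return "first_type_sound_control_event"
    return None

print(identify_time_event(12.0))   # None: condition not yet satisfied
print(identify_time_event(45.0))   # 'first_type_sound_control_event'
```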
- according to various embodiments, a method of operating an electronic device includes obtaining a music file corresponding to a plurality of audio sources, and outputting a first sound having a first frequency spectrum corresponding to a first playback time associated with the music file while displaying a first screen including at least one first object.
- the interactive music listening system 1 may be defined as a system implemented to provide (or perform) functions (or services) for enjoying music in a user-led manner.
- a user of the interactive music listening system 1 can enjoy music dynamically by actively reconstructing the music provided by the interactive music listening system 1. Accordingly, through the interactive music listening system 1, the user can enjoy music more actively, moving beyond the conventional listening behavior of simply hearing music passively, and the sense of immersion in music appreciation can be further increased.
- Examples of functions provided (or performed) by the interactive music enjoyment system 1 will be described with reference to various embodiments below.
- FIG. 1 is a diagram for explaining an example of an interactive music listening system 1 according to various embodiments.
- the interactive music listening system 1 may include an electronic device 110 and/or a server 120 .
- the electronic device 110 may be an electronic device implemented to provide predetermined content 100 .
- the electronic device 110 may be a device implemented to include at least one electronic component (or hardware) (eg, a display) for providing visual content 100a (eg, a graphical user interface (GUI)) and at least one electronic component (or hardware) (eg, a speaker) for providing auditory content 100b (eg, music).
- the electronic device 110 may include a smart phone, a television (TV), a wearable device, a head mounted display (HMD) device, etc., but is not limited to the described examples and may include various types of electronic devices capable of providing the visual content 100a and the auditory content 100b to the user.
- the visual content 100a and the auditory content 100b provided by the electronic device 110 may be provided in a form associated with predetermined time information 100c (eg, the time after entering a specific stage).
- the content 100 provided by the electronic device 110 may be dynamically controlled by a user.
- the electronic device 110 may control and/or change the properties of the remaining part of the content 100 based on at least some properties of the content 100 (eg, the visual content 100a, the auditory content 100b, the time information 100c).
- for example, the electronic device 110 may control at least some properties (or characteristics) of the auditory content 100b based on an input received from the user for controlling at least some properties of the visual content 100a while the auditory content 100b is being provided.
- in other words, the electronic device 110 provides the user with visual content 100a that functions as an object the user can recognize and control while enjoying the auditory content 100b, so that the user can dynamically control and enjoy the auditory content 100b.
- the server 120 may be various types of external electronic devices implemented outside the electronic device 110 .
- the server 120 may include at least one of a distribution server for providing the electronic device 110 with an application (or program) implemented to provide an interactive music listening function, or a platform server for sharing information (or data, or files) between the electronic device 110 and other external electronic devices.
- the interactive music listening system 1 may be implemented to include more devices and/or fewer devices.
- the interactive music listening system 1 may further include an external electronic device operatively connected (or communicatively connected) to the electronic device 110 to provide information (eg, images, various types of sensor information) about the external environment.
- the interactive music listening system 1 may include only the electronic device 110 .
- at least some operations of the electronic device 110 according to the various embodiments described below may be performed by the server 120, and at least some operations of the server 120 according to various embodiments may be performed by the electronic device 110, as will be understood by those skilled in the art. In other words, the interactive music listening system 1 may be implemented as an on-device type in which all operations are performed by the electronic device 110, as a server type in which all operations are performed by the server 120, or as a hybrid type in which operations are performed by each of the electronic device 110 and the server 120.
- FIG. 2A is a diagram for explaining an example of a configuration of an electronic device 110 according to various embodiments. Hereinafter, FIG. 2A will be further described with reference to FIG. 2B.
- FIG. 2B is a diagram for explaining an example of an operation of the electronic device 110 according to execution of an application 251 according to various embodiments. An example of content for each stage 200 provided according to the execution of the application 251 of FIG. 2B will be described with reference to FIG. 3 .
- FIG. 3 is a diagram for explaining an example of an audio source set (or audio source) and content for each stage 200 according to various embodiments.
- the electronic device 110 may include a processor 210, a communication circuit 220, an output device 230, an input device 240, and a memory 250. Without being limited to what is shown and/or described in FIG. 2A, the electronic device 110 may be implemented to include more components or fewer components. Meanwhile, as described above, at least some of the components of the electronic device 110 may be implemented in another external electronic device (eg, the server 120), so it is obvious to those skilled in the art that at least some of the operations according to the various embodiments described below may be performed by the external electronic device.
- the processor 210 may include at least one processor, at least some of which are implemented to provide different functions.
- the processor 210 may, for example, control at least one other component (eg, a hardware or software component) by executing software (eg, a program), and may perform various data processing or computations.
- as at least part of the data processing or computation, the processor 210 may store instructions or data received from other components (eg, the communication circuit 220, the output device 230, the input device 240) in the memory 250 (eg, volatile memory), process the instructions or data stored in the volatile memory, and store the resulting data in non-volatile memory.
- the processor 210 may include a main processor (eg, a central processing unit or an application processor) or an auxiliary processor (eg, a graphics processing unit, a neural processing unit (NPU), an image signal processor, a sensor hub processor, or a communication processor) that can operate independently of or together with the main processor.
- the auxiliary processor may use less power than the main processor or may be set to be specialized for a designated function.
- a secondary processor may be implemented separately from, or as part of, the main processor.
- the auxiliary processor may control at least some of the functions or states related to at least one of the components of the electronic device 110, for example, on behalf of the main processor while the main processor is in an inactive (eg, sleep) state, or together with the main processor while the main processor is in an active (eg, application execution) state.
- according to an embodiment, the auxiliary processor (eg, an image signal processor or a communication processor) may be implemented as part of another functionally related component (eg, the communication circuit 220, the output device 230, the input device 240).
- according to an embodiment, the auxiliary processor (eg, a neural network processing unit) may include a hardware structure specialized for processing an artificial intelligence model.
- AI models can be created through machine learning.
- such learning may be performed, for example, in the electronic device 110 itself in which the artificial intelligence model is executed, or may be performed through a separate server (eg, the server 120).
- the learning algorithm may include, for example, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning, but is not limited to the above examples.
- the artificial intelligence model may include a plurality of artificial neural network layers. The artificial neural network may be one of a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), a deep Q-network, or a combination of two or more of the foregoing, but is not limited to the foregoing examples.
- additionally or alternatively, the artificial intelligence model may include a software structure in addition to the hardware structure.
- the communication circuit 220 establishes a direct (eg, wired) communication channel or a wireless communication channel between the electronic device 110 and an external electronic device (eg, the server 120 of FIG. 1), and Communication through an established communication channel may be supported.
- the communication circuit 220 may include one or more communication processors that operate independently of the processor 210 (eg, an application processor) and support direct (eg, wired) communication or wireless communication.
- the communication circuit 220 may include a wireless communication module (eg, a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module (eg, a local area network (LAN) communication module or a power line communication module).
- a corresponding communication module may communicate with an external electronic device (eg, the server 120) through a first network (eg, a short-range communication network such as Bluetooth, wireless fidelity (WiFi) Direct, or infrared data association (IrDA)) or a second network (eg, a long-distance communication network such as a legacy cellular network, a 5G network, a next-generation communication network, the Internet, or a computer network (eg, a LAN or WAN)).
- These various types of communication modules may be integrated as one component (eg, a single chip) or implemented as a plurality of separate components (eg, multiple chips).
- the wireless communication module may identify or authenticate the electronic device 110 within a communication network such as the first network or the second network using subscriber information (eg, an International Mobile Subscriber Identifier (IMSI)) stored in a subscriber identification module.
- the wireless communication module may support a 5G network after a 4G network and a next-generation communication technology, for example, NR access technology (new radio access technology).
- the NR access technology may support high-speed transmission of high-capacity data (enhanced mobile broadband (eMBB)), minimization of terminal power and access by multiple terminals (massive machine type communications (mMTC)), or ultra-reliable and low-latency communications (URLLC).
- the wireless communication module may support a high frequency band (eg, mmWave band) to achieve a high data rate, for example.
- the wireless communication module may support various technologies for securing performance in a high frequency band, for example, beamforming, massive multiple-input and multiple-output (massive MIMO), full-dimensional MIMO (FD-MIMO), array antenna, analog beamforming, or large scale antenna.
- the wireless communication module may support various requirements regulated by the electronic device 110, an external electronic device (eg, the server 120), or a network system.
- the wireless communication module may support a peak data rate (eg, 20 Gbps or more) for realizing eMBB, loss coverage (eg, 164 dB or less) for realizing mMTC, or U-plane latency (eg, 0.5 ms or less for each of downlink (DL) and uplink (UL), or a round trip of 1 ms or less) for realizing URLLC.
- the output device 230 may include electronic components (or hardware) implemented to provide the visual content 100a and/or the auditory content 100b to the outside of the electronic device 110.
- the output device 230 may include a display 231, a speaker 233, and a haptic device 235 implemented to provide a tactile sensation, but is not limited to the described examples and may further include various types of electronic components. Since the electronic components described as examples of the output device 230 are obvious to those skilled in the art, a detailed description thereof will be omitted.
- the input device 240 may include an electronic component (or hardware) implemented to acquire information from the outside of the electronic device 110 .
- the input device 240 may include a camera 241, a microphone 243, and a sensor 245, but is not limited to the described example and may further include various types of electronic components. Since electronic components described as examples of the input device 240 are obvious to those skilled in the art, detailed descriptions thereof are omitted.
- the memory 250 may be implemented to store the application 251 and the database 253 .
- the application 251 (or program) may be implemented to provide functions for interactive music enjoyment.
- the database 253 may include various types of files for providing the visual content 100a and/or the auditory content 100b upon execution of the application 251, such as a graphic element set 211a (or graphic elements), an audio source set 231a (or audio sources), and rule information 260.
- although the database 253 (or the rule information 260) is shown as being implemented separately from the application 251, at least a part of it may be implemented within the application 251, without being limited to the illustrated example.
- the database 253 may be defined as a source pool.
- the application 251 (or program) and/or the database 253 may be received (or downloaded) from the server 120 (eg, a distribution server) to the electronic device 110, but is not limited to the described example and may be stored in advance in the electronic device 110.
- the application 251 may be understood not only as an application but also as a file (eg, a music file), a program, and/or computer code that can be executed by an electronic device and stored in a storable form.
- the processor 210 may perform an operation of controlling electronic components (or hardware) (eg, the output device 230) to provide the visual content 100a and/or the auditory content 100b according to predetermined time information 100c (eg, the stages 200a, ..., 200n, and/or the timeline dt).
- the following operations of the electronic device 110 (eg, the processor 210) may be understood as operations performed based on the execution of the application 251.
- the electronic device 110 may obtain graphic information and/or audio information from the database 253 based on the execution of the application 251, output (or display) the visual content 100a through the output device 230 (eg, the display 231), and/or output the auditory content 100b through the output device 230 (eg, the speaker 233).
- the visual content 100a may include a graphic element set 211a.
- the graphic element set 211a may include a plurality of graphic elements.
- the graphic elements may mean graphic objects, images, and text that can be displayed on the display 231; they are not limited to the described examples and may include various kinds of electronic information (eg, visual effects) that can be displayed on the display 231.
- the visual content 100a may be a concept corresponding to a visual set of the source pool 5420 to be described later.
- the auditory content 100b may include an audio source set 231a.
- the audio source set 231a may include a plurality of audio sources.
- the auditory content 100b may be a concept corresponding to an audio set of a source pool 5420 to be described later.
- the application 251 may include an audio source set 231a, a graphic element set 211a, and rule information 260 for each of the plurality of stages 301a and 301n. At least one of the audio source set 231a, the graphic element set 211a, and the rule information 260 included in each of the plurality of stages 301a and 301n may differ from stage to stage. Accordingly, for example, when the application 251 is executed, auditory content 100b corresponding to different parts of a piece of music (eg, the verses of the musical composition) may be provided for each stage, as described above. Also, for example, when the application 251 is executed, visual content 100a including different graphic element sets 211a may be displayed.
- different control operations may be performed for each of the plurality of stages 301a and 301n, or, when different user interactions are input in each of the plurality of stages 301a and 301n, corresponding control operations may be performed.
- the plurality of stages 300a and 300b may correspond to different parts of the sound source file 300 .
- each of the plurality of audio source sets 300a, 300b, and 300c may be obtained from different parts of the sound source file 300.
- each of the plurality of stages 301a and 301n may be implemented to provide (or include) different audio source sets 301a and 301n.
- the electronic device 110 can provide different parts of the sound source file 300 for each of the plurality of stages 301a and 301n.
- the plurality of stages 300a and 300b may be understood as a timeline dt in a sense corresponding to a specific time of the sound source file 300 . Switching between the plurality of stages 300a and 300b may be performed when a condition associated with a user's input is satisfied, which will be described later with reference to each embodiment.
- the times dt1 and dt2 maintained after entering a specific stage may also be understood as a timeline.
- the electronic device 110 may store rule information 260 corresponding to the maintained times dt1 and dt2, and may provide the visual content 100a and the auditory content 100b modified based on the stored rule information 260.
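- to make the stage and timeline structure concrete, here is a minimal sketch under assumed names (Stage, StagePlayer, and the rule format are illustrative and not taken from the patent); each stage bundles its own audio source set, graphic element set, and rule information keyed on the time maintained within the stage:

```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    audio_source_set: list[str]    # e.g. stem files for this part of the song
    graphic_element_set: list[str] # graphic objects shown in this stage
    rules: dict[float, str] = field(default_factory=dict)  # maintained time (s) -> action

class StagePlayer:
    def __init__(self, stages: list[Stage]):
        self.stages = stages
        self.index = 0             # current stage
        self.time_in_stage = 0.0   # timeline dt within the stage

    def tick(self, dt: float):
        """Advance the in-stage timeline and apply any time-based rule."""
        previous = self.time_in_stage
        self.time_in_stage += dt
        for t, action in self.stages[self.index].rules.items():
            if previous < t <= self.time_in_stage:
                print(f"stage {self.index}: rule at {t:.1f}s -> {action}")

    def on_switch_condition(self):
        """Called when a condition associated with the user's input is satisfied."""
        self.index = (self.index + 1) % len(self.stages)
        self.time_in_stage = 0.0
```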
- FIG. 3 is a diagram for explaining an example of an audio source according to various embodiments.
- an audio source may mean electronic data (or a file, or information) that generates sound. That is, a specific type of sound may be output according to the reproduction of the audio source.
- for example, the audio source may correspond to a vinyl record stored so that sound can be output using a turntable, a CD stored so that sound can be output using a computing device and a speaker connected thereto, recorded audio data in which sound waves are stored or generated, audio data stored or generated in an analog-signal format, audio data stored or generated in a digital-signal format, and the like, but is not limited thereto.
- the audio source may correspond to audio data stored in a data format to which audio compression techniques are applied, such as MP3 or FLAC, but is not limited thereto.
- an audio source may include at least one piece of sound data.
- the audio source may include at least one piece of first sound data that is reproduced when the application 251 is executed.
- a plurality of audio sources may be referred to as an "audio source set".
- the term audio source can be understood as covering both a single audio source and an audio source set including a plurality of audio sources.
- the at least one piece of first sound data may be sound data that is reproduced regardless of a user's interaction.
- the audio source may include at least one or more specific stem files (or stems) having corresponding (or similar) time lengths (or reproduction lengths).
- the stem may include the time-sequential sound sources constituting a song, such as vocals, musical instruments (eg, guitars, drums, pianos, cymbals, plucks, turntables, kicks), synths, and the like. Accordingly, when the application 251 is executed, a mixed sound may be provided as the sounds based on each stem file are output during the same playing time.
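- As an illustration of the stem-based mixing described above, the following Python sketch sums stem waveforms over the same playing time to produce a mixed sound. It assumes the stems are equal-length WAV files at one sample rate; the file names are hypothetical and not part of the disclosure.

```python
import numpy as np
import soundfile as sf  # pip install soundfile

stem_paths = ["vocals.wav", "drums.wav", "bass.wav", "synth.wav"]  # hypothetical stems

stems, rate = [], None
for path in stem_paths:
    data, sr = sf.read(path, dtype="float32")
    rate = rate or sr
    assert sr == rate, "stems must share one sample rate"
    stems.append(data)
assert len({s.shape for s in stems}) == 1, "stems must have the same playing time"

# Outputting the stems during the same playing time means summing them;
# dividing by the stem count keeps the mix from clipping.
mix = np.sum(stems, axis=0) / len(stems)
sf.write("mix.wav", mix, rate)
```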
- the audio source may include at least one second sound data set to be output based on user interaction.
- a reproduction length of the at least one piece of second sound data may be shorter than that of the aforementioned stem file.
- the at least one piece of second sound data may be data implemented to output a specific type of sound (eg, a sound effect, the voice of a specific word (eg, "what"), or a sound corresponding to a specific instrument (eg, piano, drum, cymbal)). The second sound data may be sound data implemented separately from an audio source.
- a musical meter may refer to a unit for classifying and describing music, and may include a beat, a bar, a motive, a small phrase, a period, and the like.
- the beat is a basic unit of time and may mean one beat of a reference beat, and may be understood as a beat in the sense understood by a person skilled in the art.
- the bar may mean a musical unit including the reference number of beats of an audio source, may mean the minimum unit of a piece of music divided by vertical lines in sheet music, and may be understood as a bar in the sense understood by a person skilled in the art. That is, different audio sources may have different musical units.
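- To make the beat and bar units concrete, the following small sketch derives the length of one beat and one bar in seconds from a tempo and a time signature; the 120 BPM and 4/4 values are illustrative assumptions only.

```python
def beat_and_bar_seconds(bpm: float, beats_per_bar: int) -> tuple[float, float]:
    beat = 60.0 / bpm           # duration of one beat of the reference beat
    bar = beat * beats_per_bar  # a bar holds the reference number of beats
    return beat, bar

beat, bar = beat_and_bar_seconds(bpm=120.0, beats_per_bar=4)  # 4/4 at 120 BPM
print(beat, bar)  # 0.5 s per beat, 2.0 s per bar
```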
- the at least one musical instrument may include a piano, guitar, drum, bass, etc., but is not limited thereto, and may include both machines and instruments for generating sound.
- the recording data may be recorded data in which piano, guitar, drum, and bass are recorded in concert, or recorded data in which the performance of piano, guitar, drum, and bass is recorded and then re-output through an output device such as a speaker, but is not limited thereto.
- a source separation technique may be used to obtain an audio source.
- an audio source may be obtained by separating an audio source recorded through an ensemble described above into audio sources corresponding to each musical instrument using a source separation model, but is not limited thereto.
- the separated audio sources may physically correspond to each musical instrument, but are not limited thereto and may semantically include cases in which they are regarded as corresponding to each musical instrument.
- the source separation model may be implemented using a machine learning method.
- the source separation model may be a model implemented through supervised learning, but is not limited thereto, and may be a model implemented through unsupervised learning, semi-supervised learning, reinforcement learning, and the like.
- the source separation model may be implemented as an artificial neural network (ANN).
- the source separation model may be implemented as a feedforward neural network, a radial basis function network, or a Kohonen self-organizing network, but is not limited thereto.
- the source separation model may be implemented as a deep neural network (DNN).
- the source separation model may be implemented as a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), a Long Short-Term Memory network (LSTM), or Gated Recurrent Units (GRUs), but is not limited thereto.
- data input to the source separation model may be an audio source itself obtained by recording, or may be preprocessed audio source data.
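- As a hedged sketch of how such a source separation model might be wired up (not the model of this disclosure), the following PyTorch code predicts one spectrogram mask per instrument from the magnitude of a mixture and inverts each masked spectrogram back to a waveform. The layer sizes, FFT parameters, and source count are assumptions, and the network is untrained, so this only demonstrates the data flow.

```python
import torch
import torch.nn as nn

N_FFT, HOP, N_SOURCES = 1024, 256, 4          # assumed STFT settings and source count
FREQ_BINS = N_FFT // 2 + 1

class MaskNet(nn.Module):
    """Predicts one soft spectrogram mask per source (illustrative sizes)."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(FREQ_BINS, 256, batch_first=True)
        self.out = nn.Linear(256, FREQ_BINS * N_SOURCES)

    def forward(self, mag):                    # mag: (batch, time, freq)
        h, _ = self.rnn(mag)
        masks = torch.sigmoid(self.out(h))
        return masks.view(mag.shape[0], mag.shape[1], FREQ_BINS, N_SOURCES)

mixture = torch.randn(1, 16000)                # 1 s of noise standing in for a recording
window = torch.hann_window(N_FFT)
spec = torch.stft(mixture, N_FFT, HOP, window=window, return_complex=True)
mag = spec.abs().transpose(1, 2)               # (batch, time, freq)

masks = MaskNet()(mag)                         # untrained: demonstrates data flow only
for s in range(N_SOURCES):
    masked = spec * masks[..., s].transpose(1, 2).to(spec.dtype)
    source = torch.istft(masked, N_FFT, HOP, window=window, length=mixture.shape[-1])
    # after training, each `source` would approximate one instrument
```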
- structure separation techniques may be used to obtain an audio source according to an embodiment.
- the audio source may be obtained by separating the audio source recorded through the above-described ensemble into at least two or more music sections using a structural separation model, but is not limited thereto.
- the structural separation model may be implemented by a machine learning method.
- the structural separation model may be a model implemented through supervised learning, but is not limited thereto, and may be a model implemented through unsupervised learning, semi-supervised learning, reinforcement learning, and the like.
- the structural separation model may be implemented as an artificial neural network (ANN).
- the structural separation model may be implemented as a feedforward neural network, a radial basis function network, or a Kohonen self-organizing network, but is not limited thereto.
- the structural separation model may be implemented as a deep neural network (DNN).
- the structural separation model may be implemented as a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), a Long Short-Term Memory network (LSTM), or Gated Recurrent Units (GRUs), but is not limited thereto.
- data input to the structural separation model may be an audio source itself obtained by recording, or may be preprocessed audio source data.
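- As a rough illustration of structural separation into music sections, the sketch below uses librosa's agglomerative segmentation over chroma features in place of a neural structural separation model; the file name and the section count k are assumptions.

```python
import librosa
import numpy as np

y, sr = librosa.load("ensemble.wav")                 # hypothetical recording
chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
bounds = librosa.segment.agglomerative(chroma, k=4)  # 4 sections, an assumed count
times = librosa.frames_to_time(bounds, sr=sr)
edges = np.append(times, len(y) / sr)                # close the last section
for start, end in zip(edges[:-1], edges[1:]):
    print(f"section: {start:.1f}s - {end:.1f}s")     # eg intro / verse boundaries
```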
- recording data in which performances of at least one musical instrument and vocal are recorded may be used, but is not limited thereto.
- the at least one musical instrument may include a piano, guitar, drum, bass, etc., but is not limited thereto, and may include both machines and instruments for generating sound.
- the recording data may be recorded data in which the piano, guitar, drums, and bass are each played and recorded, or recorded data in which the sound of each performance of the piano, guitar, drums, and bass is recorded and then re-output through an output device such as a speaker, but is not limited thereto.
- an audio source may also be obtained by generating at least one audio source.
- according to an embodiment, at least one audio source may be generated based on a sound type, melody, genre, vocal, etc. to obtain an audio source, but is not limited thereto.
- an audio source generation model may be used to obtain an audio source.
- the audio source generation model may be implemented using a machine learning method.
- the audio source generation model may be a model implemented through supervised learning, but is not limited thereto, and may be a model implemented through unsupervised learning, semi-supervised learning, reinforcement learning, and the like.
- the audio source generation model may be implemented as an artificial neural network (ANN).
- the audio source generation model may be implemented as a feedforward neural network, a radial basis function network, or a Kohonen self-organizing network, but is not limited thereto.
- the audio source generation model may be implemented as a deep neural network (DNN).
- the audio source generation model may be implemented as a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), a Long Short-Term Memory network (LSTM), or Gated Recurrent Units (GRUs), but is not limited thereto.
- an audio source generation model may be applied to the obtained audio source.
- an audio source translated into English may be obtained by applying an audio source generation model to an audio source recorded in Korean, but is not limited thereto.
- an audio source that can be determined as a female voice may be obtained by applying an audio source generation model to an audio source recorded as a male voice, but is not limited thereto.
- an audio source may have (or include) various types of properties that can be controlled (or adjusted, or changed). The properties may include auditory attributes such as whether to output, pitch, type, loudness, and playback speed, and may further include various types of attributes related to hearing without being limited to the described examples.
- whether to output may indicate whether sound data included in the audio source (eg, first sound data and/or second sound data) is output.
- the pitch represents the pitch of the sound data (eg, the first sound data and/or the second sound data) included in the audio source, the loudness represents the loudness of the sound data, and the reproduction speed represents the reproduction speed of the sound data.
- the type may indicate a type (eg, voice, mechanical sound) of sound data (eg, first sound data and/or second sound data) included in the audio source.
- the properties of the audio source may be changed by a user's interaction, which will be described in detail later.
- a frequency spectrum of sound output based on the audio source may be determined according to the property of the audio source. That is, when the property of the audio source is changed, the frequency spectrum of the output sound may be changed.
- the frequency spectrum may mean intensity (or energy) of each frequency of sound.
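- The following minimal sketch shows the frequency spectrum in this sense (intensity per frequency via an FFT) and how changing one property of an audio source, here the playback speed, changes the output spectrum; the 440 Hz test tone is a stand-in for a real audio source.

```python
import numpy as np

sr = 16000
t = np.arange(sr) / sr
sound = np.sin(2 * np.pi * 440.0 * t)            # stand-in audio source (440 Hz tone)

def spectrum(x, sr):
    mag = np.abs(np.fft.rfft(x))                 # intensity (energy) per frequency
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    return freqs, mag

freqs, mag = spectrum(sound, sr)
print(freqs[np.argmax(mag)])                     # ~440 Hz peak

faster = sound[::2]                              # 2x playback speed (property change)
freqs2, mag2 = spectrum(faster, sr)
print(freqs2[np.argmax(mag2)])                   # ~880 Hz: the spectrum changed
```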
- the graphic element set 201a may mean electronic data that can be displayed on a display of an electronic device.
- the graphic element set 201a may refer to graphic objects displayed on the execution screen.
- the properties of the graphic element set 201a may be changed by a user's input (eg, touch or drag). Based on the property of the graphic element set 201a being changed, the property of the corresponding audio source may be changed, which will be described in the section on the rule information 260.
- the graphic element set 201a may include graphic objects that can be output on the display 231. Since the graphic objects may have various shapes, they will be described with reference to the respective embodiments. Also, for example, the graphic element set 201a may include text (not shown). Also, for example, the graphic element set 201a may include a type of graphic object (not shown) (or visual effect) set to be displayed based on a user's interaction.
- the graphic element set 201a may have (or include) various types of properties that can be controlled (or adjusted or changed).
- the graphic element set 201a may include visual properties such as color, shape, size, and number, and may further include various types of visual properties without being limited to the described examples.
- the properties of the graphic element set 201a may be changed by a user's interaction, which will be described in detail later.
- the rule information 260 (or interaction rule) may refer to information set to perform, in response to a user's interaction (eg, a touch input or a drag input), a control operation for changing properties of the audio source set 203a, properties of graphic objects, and/or timelines.
- the rule information 260 may be information for changing the properties of an audio source to be reproduced by performing a specific control operation (eg, applying a first audio effect) when a user's touch input to a specific graphic object is obtained, but is not limited thereto.
- control operation may include operations for controlling a property of an audio source.
- the control operation may include applying an audio effect to an audio source, replacing or changing a reproduced audio source, adding an audio source different from the reproduced audio source, and stopping reproduction of at least some of the reproduced audio sources, but is not limited thereto.
- the audio effects may include, for an audio source, tempo control, a low-pass filter, a high-pass filter, pitch control, vocal audio output, playing-instrument change, sound source stop, instrument sound output, sound source reconstruction, beat repeater, output of a specific voice only, voice output level control, echo, sound section movement, psychedelic mode, panning control, flanger, chorus, reverberation (reverb), delay, and the like, but are not limited thereto.
- control operation may include operations for controlling properties of the graphic element set 201a.
- control operation may include an operation of applying a visual effect to a displayed screen.
- the control operation may include, but is not limited to, changing a background color, changing a shape, size, or color of at least one graphic object, displaying lyrics for a vocal among the audio sources, and the like.
- the rule information 260 may be implemented to perform at least one control operation corresponding to at least one of a type (or property) of a controlled graphic object (hereinafter, an interaction target) or a type of user interaction (hereinafter, an input type). For example, at least one first control operation may be performed when a first graphic object is controlled by a first user interaction, and at least one second control operation may be performed when a second graphic object is controlled by a second user interaction.
- the user may vary the interaction with the new digital single file (eg, by varying the type of interaction and/or the type (or property) of the graphic object being interacted with), and as a result may control and experience the output auditory contents and/or visual contents in various forms.
- the control operation may also be expressed as a scheme (or method), and the description of performing different control operations can be understood as controlling the properties of the audio source and/or the graphic element set 201a with different schemes.
- the rule information 260 may be information for controlling (or setting, or changing), based on user interaction, at least a part of the properties of the aforementioned visual content 100a (eg, the graphic element set 201a), the properties of the auditory content 100b (eg, the audio source set 203a), and/or the time information 100c (eg, the stage 200 and/or the timelines dt, dt1, dt2).
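- One hedged way to picture the rule information 260 is a mapping from (interaction target, input type) pairs to control operations on audio, visual, and timeline properties, as in the sketch below; every name and operation in it is illustrative rather than taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class ControlOperation:
    audio_effect: str | None = None     # eg "low_pass_filter"
    visual_effect: str | None = None    # eg "background_color_change"
    timeline_action: str | None = None  # eg "next_stage"

# (interaction target, input type) -> control operation; all entries illustrative
RULES: dict[tuple[str, str], ControlOperation] = {
    ("main_object", "touch"): ControlOperation(audio_effect="pitch_up",
                                               visual_effect="flash"),
    ("main_object", "drag"): ControlOperation(audio_effect="low_pass_filter"),
    ("background", "touch"): ControlOperation(audio_effect="echo",
                                              visual_effect="background_color_change"),
    ("head_object", "touch"): ControlOperation(timeline_action="next_stage"),
}

def apply_rule(target: str, input_type: str) -> ControlOperation | None:
    return RULES.get((target, input_type))     # None: no rule for this pair

print(apply_rule("main_object", "drag"))       # low-pass filter, no visual change
```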
- the electronic device 110 may be implemented to identify the rule information corresponding to specific time information (eg, the stage 200a and/or the timelines dt, dt1, and dt2) among a plurality of pieces of rule information and to the type of user interaction, and to provide the visual content 100a and the auditory content 100b by acquiring, based on the identified rule information, specific information among the graphic elements stored in the database 253 and specific information among the audio sources.
- since the information is based on the user's interaction, it may be defined as an interaction rule.
- the object of the interaction may mean an object of an input obtained from a user.
- the object of the interaction may be an object that physically obtains an input from a user, may be an object displayed through a user's terminal to obtain an input from a user, may mean a specific area for obtaining an input from a user, or may be an object displayed through the user's terminal to guide a specific area for obtaining an input from the user; it is not limited thereto and may mean anything that directly and/or indirectly obtains an input from the user or specifies the obtained input.
- the object of the interaction may include at least one object displayed through the user's terminal.
- the object of the interaction may include a first object and a second object, but is not limited thereto.
- the object of the interaction may include a background object displayed through the user's terminal.
- when the interaction target is a background object, it may be designed to perform at least one operation when obtaining an interaction with respect to the background object from the user, but is not limited thereto.
- the object of the interaction may include at least one hardware device.
- the target of the interaction may include a smart phone, and may be designed to perform at least one operation when a location change interaction of a user with respect to the smart phone is acquired, but is not limited thereto.
- the at least one hardware device may include all hardware devices and peripheral devices such as a smartphone, an earphone, a headphone, a Joy-Con, etc., but is not limited thereto.
- the input type may refer to a type of input obtained from a user.
- the input type may include a touch input using a touch panel, a drag input, a tap input, a swipe input, a specific pattern input, and the like; it may include various inputs using at least one input device, such as a shake input or a swing input for at least one hardware device; and it may include a motion input using a sensor such as a camera.
- the input type may also include various inputs such as an in-air touch input in which the user's position and the position of an object are matched in a virtual space by tracking the user's position, but is not limited thereto, and may include various inputs defined as an algorithm to perform a specific operation.
- rule information 260 in which a music theme is changed based on external information such as weather information or location information rather than a user's input may be implemented.
- rule information may be provided for each stage 200.
- based on the rule information, the electronic device 110 may control graphic elements and audio sources.
- for example, the rule information 260 for each stage may be organized as follows (stage type, followed by its audio source, graphic objects, and rule information):
- stage corresponding to the intro: an audio source corresponding to the intro (eg, audio sources of a first instrument (eg, a cymbal), a second instrument (eg, a flute), and a third instrument); a fluid main object in the shape of a circle and multiple sub-objects in the shape of a cloud; when interactions are input, the music theme is randomly changed, the visual theme is randomly changed, and the vocal audio source corresponding to the current playback point is played at a random pitch while visual effects are applied.
- stage corresponding to verse-pre: an audio source corresponding to verse-pre; a fluid main object in a circular shape and sub-objects of different sizes or colors in a curved long shape; the vocal audio source "what" is played, and random sound-effect audio sources are played and visual effects are applied when different sub-objects are touched.
- stage corresponding to the verse: an audio source corresponding to the verse (eg, an audio source of a third instrument (eg, a cymbal)); a circle-shaped fluid main object and a curved long-shaped sub-object.
- rule information 260 for each stage corresponding to each part of various pieces of music may be implemented.
- rule information 260 corresponding to various types of user inputs may also be implemented.
- for example, the interaction rules (203) may include the following:
- zoom-in: slow down the playback speed of an audio source.
- zoom-out: speed up the playback speed of an audio source.
- shake the electronic device: play the kick audio source, and repeat playing the audio source for a certain section (eg, one bar).
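- The sketch below implements the three interaction rules above under stated assumptions: playback-speed control is approximated by naive linear resampling, and the kick and bar-repeat behavior operates on raw sample arrays; a real player would route these through its audio engine.

```python
import numpy as np

def on_zoom(audio: np.ndarray, scale: float) -> np.ndarray:
    """zoom-in (scale > 1) slows playback; zoom-out (scale < 1) speeds it up."""
    n = max(1, int(len(audio) * scale))        # stretch or shrink the sample count
    idx = np.linspace(0, len(audio) - 1, n)
    return np.interp(idx, np.arange(len(audio)), audio)

def on_shake(mix: np.ndarray, kick: np.ndarray, bar_len: int) -> np.ndarray:
    """shake: play the kick audio source and repeat one bar of the mix."""
    out = np.tile(mix[:bar_len], 2)            # repeat a one-bar section
    k = min(len(kick), len(out))
    out[:k] = out[:k] + kick[:k]               # overlay the kick on top
    return out

sr = 16000
mix = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)   # stand-in one-second mix
kick = np.sin(2 * np.pi * 60 * np.arange(sr // 8) / sr)
slower = on_zoom(mix, 2.0)                     # zoom-in: twice as many samples
shaken = on_shake(mix, kick, bar_len=sr // 2)  # repeat half a second as "one bar"
```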
- the interactive music listening files (hereinafter, files) (or platform files 6400) may be stored and/or distributed.
- the music listening file may be stored and distributed together with interaction information input by a user.
- the distribution may be performed through the server 120 or may be performed directly between the electronic devices 110 .
- FIG. 4 is a diagram for explaining an example of distribution of an interactive music listening file according to various embodiments.
- the files (A', A'', A''') may include the same sound source (A) and the same rule information (R), but may additionally include interaction information (I1, I2, I3, I4) generated by the interactions of the users of the n distributions.
- the file A' may be distributed to another electronic device (eg, a second electronic device), and the second electronic device reproduces the file A' based on the first information I1 for the user interaction.
- second information I2 for the second user interaction may be additionally stored separately.
- a file further including the first information (I1) and the second information (I2) for the user interaction may be defined as A''.
- since the audio source A can be stored without modification, it can be easy to track the original sound source, and each interaction can be freely added, deleted, or changed.
- each interaction may be stored time-sequentially for one sound source, may be stored time-sequentially for each of a plurality of sound sources, or may be stored time-sequentially together with time-sequential storage of a plurality of sound sources, but is not limited thereto.
- each user may share only interactions to share modified music, but is not limited thereto.
- each user may share a sound source and an interaction corresponding to the sound source in order to share the modified music, but is not limited thereto.
- each user may share predetermined information about a sound source and an interaction corresponding to the sound source in order to share modified music, but is not limited thereto.
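- A hedged sketch of such a distribution format follows: the sound source A and rule information R are stored untouched, and each distribution appends its own time-sequential interaction record (I1, I2, ...). The JSON layout and field names are assumptions, not the actual platform file 6400 format.

```python
import json
import time

def new_file(sound_source_path: str, rule_info: dict) -> dict:
    return {
        "sound_source": sound_source_path,     # A: stored without modification
        "rules": rule_info,                    # R: shared rule information
        "interactions": [],                    # I1, I2, ... appended per distribution
    }

def append_interactions(f: dict, events: list) -> dict:
    # each record is time-sequential: (timestamp within the song, target, input type)
    f["interactions"].append({"recorded_at": time.time(), "events": events})
    return f

a_prime = new_file("A.wav", {"intro": {"touch": "random_theme"}})
a_prime = append_interactions(a_prime, [{"t": 0.5, "target": "main", "input": "touch"}])
a_double = append_interactions(a_prime, [{"t": 1.2, "target": "sub", "input": "drag"}])
print(json.dumps(a_double, indent=2))          # A'' = A + R + I1 + I2
```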
- the following storage and reproduction technology may be applied, but is not limited thereto.
- FIG. 5 is a diagram for explaining an example of an operation of the electronic device 110 according to various embodiments.
- based on receiving the control event 500, the electronic device 110 may control at least one of the visual content 100a (eg, the graphic element set 500a), the auditory content 100b (eg, the audio source set 500b), or the time information 100c (eg, the timeline and/or the stage 500c), and may control another object (eg, the audio source set 500b) corresponding to the controlled object (eg, the graphic element set 500a). Since the control of the electronic device 110 may be based on the rule information 260, duplicate descriptions are omitted.
- the control event 500 may include an event caused by a user's input and an event generated when there is no user's input, which will be described with reference to the following drawings.
- by controlling the visual content 100a and/or the time information 100c (eg, by controlling the timeline), the electronic device 110 can transform and provide the corresponding auditory content 100b.
- FIG. 6A is a flowchart illustrating an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations illustrated in FIG. 6A may be performed in various orders without being limited to the illustrated order. Further, according to various embodiments, more operations than the operations illustrated in FIG. 6A may be performed, or at least one operation less than that shown in FIG. 6A may be performed. Hereinafter, FIG. 6A will be further described with reference to FIG. 6B.
- FIG. 6B is a diagram for explaining an example of an operation of controlling contents when a user input of the electronic device 110 is received, according to various embodiments.
- the electronic device 110 may obtain a music file corresponding to a plurality of audio sources in operation 601 and, in operation 603, may output a first sound having a first frequency spectrum corresponding to a first time interval associated with the music file while displaying a first screen including at least one object. For example, as shown in FIG. 6B, the electronic device 110 may provide the auditory content 100b by playing the audio source set 601b while displaying the visual content 100a including the first graphic object set 601a on the display. In this case, the frequency-specific intensity (frequency spectrum) of the provided auditory content 100b may be the first frequency spectrum.
- the electronic device 110 may identify the user's input for the first object in operation 605; when the user's input is identified (605-Yes), it may determine in operation 607 whether a first condition is satisfied, and when the first condition is not satisfied (607-No), it may determine in operation 613 whether a second condition is satisfied. For example, as shown in FIG. 6B, when the electronic device 110 determines that a user's input (eg, interaction A) for a specific object (eg, 601a) is received, it may determine whether the conditions associated with the user's input are satisfied.
- the first condition associated with the user's input may be satisfied when a specific type of user's input is received less than a threshold number of times during a specified time period.
- the first condition may be satisfied when a single touch input or a drag input is received for a specified period of time.
- the second condition related to the user's input may be satisfied when a specific type of user's input is received more than a threshold number of times during a specified time period. For example, when a touch input is received more than a threshold number of times during a specified time period, the second condition may be satisfied.
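- The first and second conditions above amount to counting inputs of one type inside a specified time window and comparing against a threshold; the sketch below shows one way to do this, with an assumed 1-second window and a threshold of 3 touches.

```python
from collections import deque

WINDOW_S, THRESHOLD = 1.0, 3                   # assumed window and touch threshold
touch_times: deque[float] = deque()

def classify_touch(now: float) -> str:
    touch_times.append(now)
    while touch_times and now - touch_times[0] > WINDOW_S:
        touch_times.popleft()                  # drop inputs outside the window
    # fewer than THRESHOLD touches in the window -> first condition
    return "first_condition" if len(touch_times) < THRESHOLD else "second_condition"

for t in (0.0, 0.2, 0.4, 0.5):
    print(t, classify_touch(t))   # the third rapid touch reaches the threshold
```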
- rule information may be pre-designated for each condition associated with a user's input, and the electronic device 110 may provide content (eg, the visual content 100a, the auditory content 100b, and the time information 100c) based on the specific rule information (or specific scheme) corresponding to the satisfied condition. Based on this, operations 609, 611, and 615 of the electronic device 110 below may be performed.
- when the first condition is satisfied (607-Yes), the electronic device 110 may, in operation 609, in case of a first input, output a second sound having a second frequency spectrum while displaying a second screen to which a first visual effect is applied, and may, in operation 611, in case of a second input, output a third sound having a third frequency spectrum while displaying a third screen to which the first visual effect is applied.
- the electronic device 110 may identify the type of the received input and control the properties of the visual content 100a and/or the properties of the auditory content 100b based on a method (or scheme, or rule) corresponding to the identified type.
- the electronic device 110 may transform and display the shape of at least a part of the graphic element 601a, and may transform and reproduce at least some properties (eg, whether to output, pitch, type, loudness, playback speed) of the first audio source set 601b to output auditory content 100b (or sound) of a transformed frequency spectrum (eg, intensity (or energy) per frequency). Accordingly, the user can enjoy sound in a different atmosphere (eg, with a different frequency spectrum) by controlling the graphic element 601a while enjoying the sound.
- the electronic device 110 may move and display the position of at least a portion of the graphic element 601a, and may transform and reproduce the properties (eg, whether to output, pitch, type, loudness, playback speed) of at least a portion of the first audio source set 601b to output the auditory content 100b (or sound) of a transformed frequency spectrum (eg, intensity (or energy) per frequency).
- the frequency-specific intensity (frequency spectrum) of the provided auditory content 100b may be the second frequency spectrum.
- the intensity of sound in a high-frequency band may increase according to a change in the properties of a sound source, so that more heightened auditory content 100b can be provided.
- when the second condition is satisfied (613-Yes), the electronic device 110 may, in operation 615, output a second sound having a second frequency spectrum corresponding to a second time interval (eg, a different stage) associated with the music file.
- the electronic device 110 may change the current stage (eg, the first stage 601c) to another stage (eg, a second stage (not shown)), display visual content 100a including a graphic element set (eg, a second graphic element set (not shown)) corresponding to the other stage, and reproduce an audio source set (eg, a second audio source set (not shown)) corresponding to the other stage.
- the electronic device 110 can control at least a part of the visual content 100a and/or the time information 100c (eg, the timeline), and can modify and provide the auditory content 100b based on the control.
- FIG. 7A is a flowchart illustrating an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations illustrated in FIG. 7A may be performed in various orders without being limited to the illustrated order. Also, according to various embodiments, more operations than the operations shown in FIG. 7A may be performed, or at least one operation less than that shown in FIG. 7A may be performed. Hereinafter, FIG. 7A will be further described with reference to FIG. 7B.
- FIG. 7B is a diagram for explaining an example of an operation of controlling contents when information other than a user's input of the electronic device 110 is received according to various embodiments.
- the electronic device 110 may obtain a music file corresponding to a plurality of audio sources in operation 701 and, in operation 703, may output a first sound having a first frequency spectrum corresponding to a first time associated with the music file while displaying a first screen including at least one object. For example, as shown in FIG. 7B, the electronic device 110 may provide the auditory content 100b by playing the audio source set 701b while displaying the visual content 100a including the first graphic object set 701a on the display. In this case, the frequency-specific intensity (frequency spectrum) of the provided auditory content 100b may be the first frequency spectrum.
- the electronic device 110 may determine whether an event (eg, the control event 500) has occurred; when an event (eg, the control event 500) occurs (705-Yes), it may determine in operation 707 whether the type of the event is the first type for screen control, and if it is not the first type (707-No), it may determine in operation 711 whether the type of the generated event (eg, the control event 500) is the second type for sound control.
- the electronic device 110 may perform an operation of identifying various types of control events 500 while a user's input is not received (eg, while the mode of the electronic device 110 is set to a user non-interaction mode). As at least part of the operation of identifying the control event 500, the electronic device 110 may identify a change in a property (eg, a position of a character) of the visual content 100a (eg, the graphic element set 701a) that is automatically changed according to a change in the timeline dt (eg, a change from the first timeline 701c to the second timeline 703c). The properties of the visual content 100a displayed on the display of the electronic device 110 may be pre-set to randomly change over time.
- a character displayed on a display may be pre-set to automatically move, and various types of properties of various types of graphic elements may be pre-set to be changed without being limited to the described examples.
- the electronic device 110 may identify the stage 200 that automatically changes over time as at least part of an operation of identifying the control event 500.
- the electronic device 110 may determine whether an elapsed time exceeds a threshold value after the aforementioned change of the timeline dt and/or change of the stage 200 is identified, and when the threshold value is exceeded, it may determine that the control event 500 has occurred, but is not limited to the described example.
- the electronic device 110 may identify the type of the identified control event 500.
- the type may include a type for controlling the property of the visual content 100a and the property of the auditory content 100b.
- when the electronic device 110 identifies a change in the timeline dt (eg, a change from the first timeline 701c to the second timeline 703c), it may identify that the first type of event for changing the property of the visual content 100a has occurred.
- when the generated event is the first type for screen control (707-Yes), the electronic device 110 may, in operation 709, change a first visual property of the at least one first object and change a first auditory property of the first sound based on the change of the visual property; when the generated event is the second type for sound control (711-Yes), a second auditory property of the first sound may be changed, and a second visual property of the first object may be changed based on the change of the auditory property.
- in other words, when a specific type of event occurs to control a specific type of content (eg, the visual content 100a or the auditory content 100b), the electronic device 110 may change the properties of the specific type of content in response to the specific type of event, and may sequentially change the properties of other types of content based on the change in the properties of the specific type of content and the corresponding rule information 260.
- the rule information 260 may be implemented to control the auditory content 100b in response to the control of the specific visual content 100a (eg, a graphic element set).
- the electronic device 110 may control the property of the specific visual content 100a based on the occurrence of an event (eg, a change in the timeline dt) regardless of the user's interaction (eg, the first graphic element set 701a is changed to the second graphic element set 703a), and may serially control the property of the auditory content 100b based on the rule information (eg, the first audio source set 701b is changed to the second audio source set 703b).
- the rule information 260 may be implemented to control the visual content 100a in response to the specific auditory content 100b (eg, audio source set) being controlled.
- the electronic device 110 may switch a mode for identifying the control event 500.
- the mode may include the aforementioned user interaction mode and user non-interaction mode.
- FIG. 8A is a flowchart illustrating an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations illustrated in FIG. 8A may be performed in various orders without being limited to the illustrated order. Also, according to various embodiments, more operations than the operations shown in FIG. 8A may be performed, or at least one operation less than that shown in FIG. 8A may be performed. Hereinafter, FIG. 8A will be further described with reference to FIG. 8B.
- FIG. 8B is a diagram for explaining an example of a mode switching operation of the electronic device 110 according to various embodiments.
- the electronic device 110 may obtain a music file corresponding to a plurality of audio sources in operation 801 and, in operation 803, may output a first sound having a first frequency spectrum corresponding to a first time associated with the music file while displaying a first screen including at least one object.
- the electronic device 110 may, in operation 805, identify an event for determining a mode. In operation 807, it may determine whether the identified event is a first event, and if it is not the first event (807-No), it may determine in operation 809 whether the identified event is a second event.
- when the identified event is the first event (807-Yes), the electronic device 110 may set the mode of the electronic device 110 to the user interaction mode and perform at least one operation described above with reference to FIGS. 6A to 6B. When the identified event is the second event (809-Yes), the electronic device 110 may set the mode of the electronic device 110 to the user non-interaction mode and perform at least one operation described above with reference to FIGS. 7A to 7B.
- as at least part of the operation of identifying an event for determining a mode, the electronic device 110 may perform an operation of identifying time-related information (eg, (a) of FIG. 8B) and/or an operation of identifying a condition associated with distance (eg, (b) of FIG. 8B).
- when the time elapsed after entering a specific stage and/or running the application is less than a preset value, the electronic device 110 may determine this as the first event and set the mode of the electronic device 110 to the user interaction mode; when the elapsed time exceeds the preset value, it may determine this as the second event and set the mode of the electronic device 110 to the user non-interaction mode.
- when a value identified in association with distance is less than a pre-set value, the electronic device 110 may determine this as the first event and set the mode of the electronic device 110 to the user interaction mode; if it is equal to or greater than the pre-set value, it may determine this as the second event and set the mode of the electronic device 110 to the user non-interaction mode.
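- A minimal sketch of the time-based mode decision described above follows; the 30-second cutoff is an assumed value, and a distance-based variant would compare a measured distance against a pre-set value in the same way.

```python
import time

PRESET_SECONDS = 30.0                          # assumed cutoff for the first event

def determine_mode(stage_entered_at: float, now: float | None = None) -> str:
    now = time.time() if now is None else now
    elapsed = now - stage_entered_at
    # below the preset value -> first event -> user interaction mode
    return "user_interaction" if elapsed < PRESET_SECONDS else "user_non_interaction"

print(determine_mode(stage_entered_at=time.time()))     # user_interaction
print(determine_mode(stage_entered_at=0.0, now=100.0))  # user_non_interaction
```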
- FIG. 9 is a diagram for explaining an example of an operation of the electronic device 110 according to various embodiments.
- the electronic device 110 (eg, the processor 210) may control, based on the occurrence of a control event 500, at least one of the visual content 100a (eg, the graphic element set 900a), at least a portion of the auditory content 100b (eg, the audio source set 901b), or the time information 100c (eg, the timeline and/or the stage 900c).
- the graphic element set 900a may be implemented to include a main object 900a, a sub object 903a, and an additional object 905a, and the main object 900a may be implemented to control at least some properties of the audio sources 901b and 903b of the plurality of audio source sets 901b.
- FIG. 10 is a flowchart 1000 for explaining an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations shown in FIG. 10 are not limited to the order shown and may be performed in various orders. Also, according to various embodiments, more operations than the operations illustrated in FIG. 10 may be performed, or at least one operation less than that shown in FIG. 10 may be performed. Hereinafter, FIG. 10 will be further described with reference to FIG. 11 .
- FIG. 11A is a diagram for explaining an example of visual content provided by the electronic device 110 according to various embodiments.
- FIG. 11B is a diagram for explaining another example of visual content provided by the electronic device 110 according to various embodiments.
- FIG. 11C is a diagram for explaining another example of visual content provided by the electronic device 110 according to various embodiments.
- the electronic device 110 may obtain a plurality of audio sources in operation 1001 and, in operation 1003, may display a plurality of main graphic objects (eg, the graphic object 901a of FIG. 11A) implemented to control at least some properties of the plurality of audio sources, and a plurality of sub graphic objects (eg, the graphic object 903a of FIG. 11A) connecting the plurality of main graphic objects.
- each of the main graphic objects 901a may be implemented to control a different property, such as the pitch, playback speed, or playback state of an audio source. For examples of the properties, refer to "Table of Contents 2.2.1" above; duplicate descriptions are omitted. The controllable attributes for each main graphic object 901a may be randomly assigned, and/or pre-set attributes may be assigned for each location.
- the shape of the main object 901a may be implemented as a circle in (a) of FIG. 11A and as a triangle in (b), but is not limited to the examples described and/or illustrated and may be implemented in various types of shapes.
- the main graphic object 901a may include a first type main graphic object 1101a and a second type main graphic object 1103a.
- the first type of main graphic object 1101a may be referred to as a joint object, and the second type of main graphic object 1103a may be referred to as a head object.
- the position of the joint object 1101a and the position of the head object 1103a may be set to be displayed at different points.
- the joint object 1101a may be located at a point where sub-objects 903a are connected to each other and/or at one end of a sub-object 903a, while the head object 1103a may be located at a different connection point of the sub-object 903a (eg, an outer end).
- the size of the joint object 1101a and the size of the head object 1103a may be set to be different from each other.
- the size of the joint object 1101a may be implemented to be smaller than the size of the head object 1103a, but it may also be implemented contrary to the described example.
- the joint object 1101a and the head object 1103a may be implemented to control properties of audio sources in different ways.
- properties (eg, positions and shapes) of the main graphic object 901a and the sub graphic object 903a may be controlled.
- the position of a specific main graphic object 901a may be moved (eg, left/right).
- the shape (eg, length) of the sub graphic object 903a connecting the specific main graphic object 901a and another graphic object 901a may be changed.
- the movement of the main graphic object 901a may include a movement by a user's input and an automatically performed random movement.
- when the main graphic object 901a is automatically and randomly moved, it may be moved in a cycle corresponding to a musical unit (eg, beat) identified based on the plurality of audio sources. Accordingly, the shape of the overall object (eg, the main graphic object 901a and the sub graphic object 903a) is recognized by the user as a creature that moves (or dances) in accordance with the music, so that the user may enjoy the music more vividly.
- the positions of the main graphic objects 1301c, 1303c, and 1305c and the position of the sub-object 903a connecting them may be set to form a predetermined shape (eg, a polygon such as a triangle or a quadrangle).
- the electronic device 110 may further display at least one additional object 905a.
- the additional object 905a may be implemented in a cross shape, but it is apparent to those skilled in the art that it may be implemented in various shapes.
- the additional object 905a may be created in a background area other than the positions of the main object 901a and the sub-object 903a, and may be moved in one arbitrary direction (eg, a downward direction).
- the generation time of the additional object 905a may be determined based on a musical unit (eg, beat, measure) of the plurality of audio sources. When the additional object 905a is selected, it may provide pre-set and/or randomly set audio effects and/or visual effects.
- a rhythm note 1100b may be provided to recognize a musical unit (eg, beat, measure) based on a plurality of audio sources.
- the rhythm note 1100b may be generated at the main object 901a farthest from the head object 1103a (hereinafter referred to as a tail object), and may be moved along the sub-object 903a in the direction of the head object 1103a, which is different from (or opposite to) the movement direction of the additional object 905a. When the rhythm note 1100b reaches the head object 1103a, it disappears, and a visual effect may be provided and/or properties of the audio source may be controlled.
- the rhythm note 1100b may be moved and then disappear.
- the timing at which the rhythm note 1100b overlaps the main objects 901a while moving may be determined based on a musical unit (eg, beat, measure) of the plurality of audio sources; accordingly, the user can recognize the musical unit of the music through the position of the rhythm note 1100b.
- FIG. 12A is a flowchart 1200 for describing an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations illustrated in FIG. 12A may be performed in various orders without being limited to the illustrated order. Also, according to various embodiments, more operations than the operations illustrated in FIG. 12A may be performed, or at least one operation less than that shown in FIG. 12A may be performed. 12A will be further described below with reference to FIGS. 12B and 12C.
- FIG. 12B is a diagram for explaining an example of an operation of setting the number of main objects 901a of the electronic device 110 according to various embodiments.
- FIG. 12C is a diagram for explaining an example of an operation of controlling an audio source property according to the movement of the main object 901a of the electronic device 110 according to various embodiments.
- the electronic device 110 may, in operation 1201, identify a musical unit of a first sound (eg, music) corresponding to a plurality of audio sources (eg, the first to fourth audio sources 1200a, 1200b, 1200c, and 1200d of FIG. 12B), and may, in operation 1203, display an execution screen including a plurality of main graphic objects corresponding to the number of identified musical units while the plurality of audio sources are reproduced.
- for example, the electronic device 110 may display four main graphic objects 901a and three sub-objects 903a connecting the main graphic objects 901a.
- the electronic device 110 may provide visual effects to the main object 901a based on musical units identified over time. For example, as described above, the electronic device 110 may place the rhythm note on the main object 901a according to the beat. Also, for example, the electronic device 110 may control properties (eg, size, color) of the main object 901a for each beat. Accordingly, the user's recognition rate for musical units may be improved.
- the electronic device 110 may identify a user input (eg, a touch input or a drag input) for controlling a specific main graphic object among the plurality of main graphic objects.
- the electronic device 110 may determine whether the movement direction is a first direction and, if it is the first direction (1207-Yes), may, in operation 1209, output a second sound by controlling properties of at least some of the plurality of audio sources based on a first method (eg, applying an audio effect with a first size); when it is not the first direction (1207-No), it may determine in operation 1211 whether the moving direction is a second direction and, if it is the second direction (1211-Yes), may, in operation 1213, output a third sound by controlling properties of at least some of the plurality of audio sources based on a second method (eg, applying an audio effect with a second size).
- the electronic device 110 may apply different audio effects 1210a and 1210b according to the location of the main object 901a determined by the moving direction and the moving distance.
- the operation of applying different audio effects may include an operation of applying different degrees of property control, but is not limited to the described example.
- for each main object 901a, a first direction and a second direction opposite to the first direction may be set; for example, the directions may be left and right as shown in FIG. 12C, but may also be implemented as up and down, or diagonal right and diagonal left, without being limited to the described example. Accordingly, the user can intuitively control the main object 901a to control the music he or she enjoys.
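- As an illustration of this direction- and distance-dependent control, the sketch below classifies a drag by its dominant axis and scales an assumed effect amount by the drag distance; the effect names and the 200-pixel normalization are illustrative.

```python
def control_for_drag(dx: float, dy: float) -> tuple[str, float] | None:
    """Map a drag on a main object to an audio effect and an amount in [0, 1]."""
    distance = (dx * dx + dy * dy) ** 0.5
    if abs(dx) >= abs(dy):                     # horizontal movement dominates
        effect = "first_audio_effect" if dx < 0 else "second_audio_effect"
        return effect, min(distance / 200.0, 1.0)   # clamp the effect amount
    return None                                # neither direction: handled elsewhere

print(control_for_drag(-120.0, 10.0))          # first effect, moderate amount
print(control_for_drag(80.0, -5.0))            # second effect, smaller amount
```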
- when the direction is neither the first nor the second direction (1211-No), the electronic device 110 (eg, the processor 210) may, in operation 1215, provide an audio effect while providing a visual effect for the background (eg, a color change, or providing content of a specific color). For example, when a touch input to the main object 901a is received, the electronic device 110 may apply a different audio effect (eg, a third audio effect) than when a drag input is received.
- the electronic device 110 (eg, the processor 210) may provide different visual effects (eg, different colors) and/or audio effects according to touch properties (eg, strength).
- the electronic device 110 (eg, the processor 210) may provide different visual effects (eg, different colors) and/or audio effects for the same touch input according to the type of the sound source file corresponding to the plurality of audio sources.
- FIG. 13A is a flowchart 1300 for describing an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations illustrated in FIG. 13A may be performed in various orders without being limited to the illustrated order. Also, according to various embodiments, more operations than the operations illustrated in FIG. 13A may be performed, or at least one operation less than that shown in FIG. 13A may be performed. Hereinafter, FIG. 13A will be further described with reference to FIG. 13B.
- FIG. 13B is a diagram for explaining an example of an operation of generating a new object of the electronic device 110 according to various embodiments.
- FIG. 13C is a diagram for explaining an example of an operation of generating a new object of the electronic device 110 according to various embodiments.
- the electronic device 110 may identify a user's input in operation 1301 after performing operations 1201 to 1203 described above.
- the electronic device 110 may determine whether the identified user's input is an input for object creation and, if it is an input for object creation (1303-Yes), may, in operation 1305, further display an object for controlling a specific property of the plurality of audio sources. For example, as shown in FIG. 13B, when the electronic device 110 receives a user input for some of the plurality of sub-objects 903a, it may display a newly created main object 1300 at the location where the user input is received. In this case, the electronic device 110 may display new sub-objects 1310 and 1320 connecting the newly created main object 1300 and the existing main object 901a. Also, for example, although not shown, the electronic device 110 may display a new main object 901a when a specific type of user input is received with respect to the main object 901a.
- the electronic device 110 may change a stage and/or an audio source set when the number of main objects 901a is changed.
- when the number of main objects 901a is changed, the electronic device 110 (eg, the processor 210) may perform at least one action based on the type of user input for the newly created main object.
- when an enlargement or reduction gesture is identified after the creation of the new main object, the electronic device 110 (eg, the processor 210) may apply a specific audio effect and/or adjust the output level of the audio source to be played (eg, increase the output level when zooming in and decrease the output level when zooming out).
- the electronic device 110 (eg, the processor 210) may apply an audio effect when a user input (eg, a touch) for the newly created main object is identified.
- the electronic device 110 (eg, the processor 210) may move to another stage when the inside of the newly created main object is touched.
- the electronic device 110 (eg, the processor 210) may provide audio and/or visual effects.
- when the number of main objects 901a is changed, the electronic device 110 may change a musical unit (eg, beat) of the plurality of audio sources.
- the electronic device 110 may change the time signature from 4/4 time to 8/8 time signature.
- the electronic device 110 may determine whether the number of main objects 901a reaches a threshold number as at least part of the operation of changing the beat.
- the electronic device 110 may refrain from the time signature change operation until the number of main objects 901a increases from 4 to a critical number (eg, 8), and may perform the time signature change operation when the number of main objects 901a reaches the critical number.
- the threshold number may be determined based on the first musical unit (eg, beat).
- FIG. 14A is a flowchart 1400 for describing an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations illustrated in FIG. 14A may be performed in various orders without being limited to the illustrated order. Also, according to various embodiments, more operations than the operations illustrated in FIG. 14A may be performed, or at least one operation less than that shown in FIG. 14A may be performed. Hereinafter, FIG. 14A will be further described with reference to FIG. 14B.
- FIG. 14B is a diagram for explaining an example of an operation of controlling a position of an object when a user's input of the electronic device 110 is received on a background screen, according to various embodiments.
- the electronic device 110 (eg, the processor 210), in operation 1401, may identify a user input.
- the electronic device 110 may determine whether the user input is an input for an object (eg, the main graphic object 901a) and, if it is an input for the object (1403-Yes), may, in operation 1405, output a second sound by controlling properties of at least some of the plurality of audio sources based on a first method corresponding to the moving direction of the first graphic object.
- operations 1403 to 1405 of the electronic device 110 may be performed in the same manner as operations 1205 to 1215 of the electronic device 110 described above, and thus duplicate descriptions are omitted.
- when the input is not an input for the object (1403-No), the electronic device 110 may, in operation 1407, determine whether the user input is an input for the background; in the case of a background input (1407-Yes), it may, in operation 1409, control at least one second graphic object to be moved based on the location of the user's input, and may, in operation 1411, output a third sound by controlling properties of at least some of the plurality of audio sources based on a second method corresponding to the moving direction of the at least one second graphic object. For example, as shown in FIG. 14B, the electronic device 110 may receive a user input (eg, a touch) for an area other than the locations of the graphic objects (eg, the main graphic object 901a and the sub graphic object 903a).
- the electronic device 110 may compare the position (eg, (x1, y1)) of the user input (eg, touch) with the positions (eg, (x2, y2)) of the plurality of main graphic objects 901a, and may move the position of the main graphic object 901a identified as having the shortest distance.
- the electronic device 110 may move the identified main graphic object 901a in the direction from the position (eg, (x1, y1)) of the user input (eg, touch) toward the position (eg, (x2, y2)) of the identified main graphic object 901a. Accordingly, the identified main graphic object 901a may move away from the location of the user input.
- the electronic device 110 may adjust the length of the sub graphic object 903a connected to the main graphic object 901a according to the movement of the main graphic object 901a.
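- A minimal sketch, with assumed data structures and step size, of the background-touch behavior above: the main object nearest the touch position (x1, y1) is identified and pushed away along the touch-to-object direction:
```python
import math

# Find the main object nearest the touch and move it away from the touch point.
def repel_nearest(touch, objects, step=20.0):
    # touch: (x1, y1); objects: list of [x2, y2] positions (mutated in place)
    nearest = min(objects, key=lambda p: math.dist(touch, p))
    dx, dy = nearest[0] - touch[0], nearest[1] - touch[1]
    norm = math.hypot(dx, dy) or 1.0
    nearest[0] += step * dx / norm   # move away from the user input location
    nearest[1] += step * dy / norm
    return nearest

objs = [[100.0, 100.0], [300.0, 50.0]]
print(repel_nearest((90.0, 90.0), objs))  # nearest object pushed away from (90, 90)
```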
- FIG. 15 is a flowchart 1500 for describing an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations shown in FIG. 15 are not limited to the order shown and may be performed in various orders. Also, according to various embodiments, more operations than the operations illustrated in FIG. 15 or at least one operation less than those shown in FIG. 15 may be performed.
- the electronic device 110 identifies a musical unit based on a plurality of audio sources in operation 1501, and in operation 1503, may display a first type of main object (eg, a plurality of main objects 901a including the joint object 1101a of FIG. 11) and a second type of main object (eg, the head object 1103a of FIG. 11).
- the electronic device 110 may, in operation 1505, identify a user input for controlling a specific main graphic object among the plurality of main graphic objects 901a.
- the electronic device 110 determines, in operation 1507, whether the type of the controlled main graphic object is the first type (eg, the joint object 1101a of FIG. 11), and if it is the first type (1507-Yes), in operation 1509, may output the second sound by controlling properties of at least some of the plurality of audio sources based on the first method corresponding to the moving direction of the graphic object (eg, the joint object 1101a) and the audio effect assigned to it.
- Operations 1507 to 1509 of the electronic device 110 may be performed in the same manner as operations 1205 to 1215 of the electronic device 110 described above, and thus duplicate descriptions are omitted.
- the electronic device 110 determines, in operation 1511, whether the type of the controlled main graphic object is the second type (eg, the head object 1103a of FIG. 11), and if it is the second type (1511-Yes), in operation 1513, may output the third sound by controlling attributes of at least some of the plurality of audio sources based on the second method. For example, when the head object 1103a is selected, the electronic device 110 may move from the current stage to another stage and reproduce an audio source set corresponding to the moved stage to provide a sound different from the previous one. Also, for example, when the head object 1103a is selected, the electronic device 110 may perform an operation of changing at least a part of the current audio source set to an audio source of a different type and/or property while maintaining the stage.
- FIG. 16 is a flowchart 1600 for describing an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations illustrated in FIG. 16 may be performed in various orders without being limited to the illustrated order. Also, according to various embodiments, more operations than the operations shown in FIG. 16 or at least one operation less may be performed.
- the electronic device 110 identifies a musical unit based on a plurality of audio sources in operation 1601 and, in operation 1603, may output a first sound based on the plurality of audio sources while displaying an execution screen including a plurality of main graphic objects 901a corresponding to the identified musical unit.
- the electronic device 110, in operation 1605, when a specific time elapses, displays at least one additional object (eg, the additional object 905a of FIG. 11), and in operation 1607, when the at least one additional object is selected, may output the second sound based on providing a specific audio effect while displaying a second execution screen based on providing a specific visual effect.
- the electronic device 110 may create an additional object 905a in the background area of the upper part of the execution screen and move it downward at a period set based on a musical unit (eg, beat, measure).
- FIG. 17 is a flowchart 1700 for explaining an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations shown in FIG. 17 are not limited to the order shown and may be performed in various orders. Also, according to various embodiments, more operations than the operations shown in FIG. 17 may be performed, or at least one less operation may be performed. Hereinafter, FIG. 17 will be further described with reference to FIG. 18.
- FIG. 18 is a diagram for explaining an example of an operation of assigning an audio property to each main graphic object 901a of the electronic device 110 according to various embodiments.
- the electronic device 110 identifies a musical unit based on a plurality of audio sources in operation 1701 and, in operation 1703, may output a first sound based on the plurality of audio sources while displaying an execution screen including a plurality of main graphic objects 901a corresponding to the identified musical unit.
- the electronic device 110, as at least part of an operation of displaying an execution screen including a plurality of main graphic objects 901a, may determine the order and/or positions of the main graphic objects 901a based on the degree of association between the audio effects given to each of the plurality of main graphic objects 901a.
- the degree of relevance may be defined as the extent to which one audio effect audibly affects another audio effect. The electronic device 110 may pre-store information on the degree of relevance for each audio effect and identify the degree of association between effects based on the pre-stored information, but is not limited to the described implementation example.
- when the correlation between the first audio effect applied to the first main object 1800a and the second audio effect applied to the second main object 1800b is high (eg, higher than a threshold value), the electronic device 110 may position the first main object 1800a and the second main object 1800b adjacent to each other.
- the electronic device 110 may set the first main object 1800a and the second main object 1800b to be connected as one sub-object 1800c.
- the electronic device 110 may, in operation 1705, identify a user input for controlling a specific main graphic object among the plurality of main graphic objects 901a.
- the electronic device 110 controls the specific main graphic object 901a to be moved within a movement range associated with the main graphic object 901a, and may output a second sound by controlling properties of at least some of the plurality of audio sources based on a method corresponding to the movement direction of the graphic object. For example, the movement range (MD) of the position of one main object (eg, the first main object 1800a) may be set (or determined, or limited) based on the position of another main object adjacent to it (eg, the second main object 1800b). Accordingly, the length of the sub-object 1800c may also be limited. Thus, when there are musical effects with a high degree of correlation, excessive control of a specific audio effect can be prevented, thereby improving the quality of music enjoyment.
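- A minimal sketch, under an assumed correlation threshold, of limiting the movement range (MD) for objects whose audio effects are highly correlated:
```python
# Shrink the allowed movement range when two objects' audio effects are
# strongly related, so one effect cannot be controlled excessively.
CORRELATION_THRESHOLD = 0.7   # assumed threshold value

def movement_range(base_range: float, correlation: float) -> float:
    if correlation > CORRELATION_THRESHOLD:
        return base_range * (1.0 - correlation)   # tighter range, shorter sub-object
    return base_range

print(movement_range(200.0, 0.9))  # highly related effects -> small range (20.0)
print(movement_range(200.0, 0.3))  # weakly related effects -> full range (200.0)
```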
- FIG. 19 is a diagram for explaining an example of an operation of the electronic device 110 according to various embodiments.
- the electronic device 110 (eg, the processor 210) may identify the occurrence of the control event 500 through the input device 240 (eg, the camera 241, the microphone 243, and the sensor 245), and control at least one of at least a portion of the graphic element set 1900a, at least a portion of the audio source set 1900b, or the timeline and/or stage 1900c.
- the electronic device 110 may identify the occurrence of the control event 500 based on recognizing a user's gesture and/or motion using the input device 240 .
- FIG. 20 is a flowchart 2000 for describing an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations shown in FIG. 20 may be performed in various orders without being limited to the order shown. Also, according to various embodiments, more operations than the operations shown in FIG. 20 may be performed, or at least one operation less than that shown in FIG. 20 may be performed. FIG. 20 will be further described below with reference to FIGS. 21A and 21B.
- FIG. 21A is a diagram for explaining an example of an operation of providing the visual content 101a of the electronic device 110 according to various embodiments.
- FIG. 21B is a diagram for explaining an example of a chain interaction of the electronic device 110 according to various embodiments.
- the electronic device 110, in operation 2001, based on executing a program, may output a first sound based on a plurality of audio sources while displaying an execution screen including a plurality of objects (eg, the plurality of objects 2101, 2103, and 2105 of FIG. 21A).
- In other words, the electronic device 110 may output the first sound by reproducing a plurality of audio sources while displaying the plurality of objects 2101, 2103, and 2105 on the execution screen of the application.
- At least one audio effect and/or at least one controllable audio property may be set for each of the plurality of objects 2101, 2103, and 2105.
- the objects 2101, 2103, and 2105 may be set to move in a first axis direction (eg, a horizontal direction (X)) and/or a second axis direction (eg, a vertical direction (Y)).
- the types of controllable audio sources and/or properties of the audio sources, and the types of applied audio effects may vary according to the objects 2101, 2103, and 2105 and the direction of the movement axis.
- the type of audio source controlled for each of the objects 2101, 2103, and 2105 may be set, and the applied effect and criterion (eg, whether the degree is changed according to the movement distance or applied according to a location criterion) may differ according to the moving direction (eg, the horizontal direction (X) or the vertical direction (Y)).
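- A minimal sketch of how each object and movement axis could select the controlled audio source, the applied effect, and its criterion; the concrete mapping below is an assumption, not the disclosed configuration:
```python
# (object_id, axis) -> (audio source, effect, criterion). "distance" scales the
# effect by how far the object moved; "position" applies it at a location.
AXIS_MAP = {
    (2101, "X"): ("drums",  "low_pass_filter", "distance"),
    (2101, "Y"): ("drums",  "level",           "position"),
    (2103, "X"): ("vocals", "reverb",          "distance"),
    (2105, "Y"): ("bass",   "pitch",           "position"),
}

def on_object_moved(object_id: int, axis: str, amount: float):
    source, effect, criterion = AXIS_MAP[(object_id, axis)]
    degree = amount if criterion == "distance" else None
    return source, effect, degree

print(on_object_moved(2101, "X", 35.0))  # ('drums', 'low_pass_filter', 35.0)
```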
- the electronic device 110 in operation 2003, may identify a specific object selected by a user's motion from among a plurality of objects.
- the electronic device 110 may use the input device 250 to identify motion of a user positioned in front of the electronic device 110 .
- the user's motion may include various types of motions that can be performed by the user's hands, such as a grab motion of clenching a fist, a swing motion of waving a hand, a punch motion, and the like, and is not limited to the described examples; it may include various types of motions using other body parts (eg, arms, feet, legs, and face).
- the electronic device 110 may detect a change in the shape of a user's body part (eg, hand) identified by using a camera, and identify a motion corresponding to the detected change.
- the electronic device 110 may detect a change in the shape of a part of the user's body (eg, a hand) using a sensing device (eg, a magnetic field sensor) other than a camera, and identify a motion corresponding to the detected change. Since various implementation examples other than the described examples are obvious to those skilled in the art, detailed descriptions are omitted.
- the input device 250 may be provided separately from the electronic device 110 and transmit sensed information (eg, a change in a body part) to the electronic device 110; however, without being limited to the described and/or illustrated example, the electronic device 110 may be implemented to include the input device 250.
- the electronic device 110 (eg, the processor 210), in operation 2005, may output the second sound based on controlling the property of a specific audio source among the plurality of audio sources based on the movement of the specific object. For example, as shown in FIG. 21A, the electronic device 110 may select the graphic object 2105 based on the identified user's motion (GM) (eg, grab). The electronic device 110 may compare the position (eg, coordinates) of the motion (GM) with the position (eg, coordinates) of the graphic object 2105, and select the graphic object 2105 at the position corresponding to the motion (GM).
- the electronic device 110 may move the selected graphic object 2105 in a direction corresponding to the moving direction of the motion GM, and adjust the property of the audio source corresponding to the moving direction of the graphic object 2105 to a degree corresponding to the moving distance.
- the electronic device 110 may provide a chain effect based on movement of the graphic objects 2101, 2103, and 2105.
- the electronic device 110 may serially control the properties of an audio source based on chained properties (eg, position, number) of the second graphic objects 2103 being changed according to the movement of the first graphic object 2105.
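- A minimal sketch, with assumed damping behavior, of the chain interaction: moving the first graphic object displaces the linked second graphic objects, and each chained displacement serially adjusts an audio property:
```python
# Propagate a displacement down a chain of linked objects; each chained
# position change is applied serially as an audio-property adjustment.
def chain_move(first_delta, linked_positions, damping=0.5):
    audio_adjustments = []
    delta = first_delta
    for pos in linked_positions:          # the second graphic objects 2103
        pos[0] += delta                   # chained position change
        audio_adjustments.append(delta)   # serial audio-property control
        delta *= damping                  # each link moves a bit less
    return linked_positions, audio_adjustments

positions = [[10.0, 0.0], [20.0, 0.0], [30.0, 0.0]]
print(chain_move(8.0, positions))  # displacements 8, 4, 2 applied down the chain
```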
- FIG. 22 is a flowchart 2200 for explaining an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations illustrated in FIG. 22 may be performed in various orders without being limited to the illustrated order. Also, according to various embodiments, more operations than the operations shown in FIG. 22 may be performed, or at least one operation less than that shown in FIG. 22 may be performed. FIG. 22 will be further described below with reference to FIGS. 23A and 23B.
- FIG. 23A is a diagram for explaining an example of an operation of providing an avatar for control based on a distance between users of the electronic device 110 according to various embodiments.
- FIG. 23B is a diagram for explaining another example of an operation of providing an avatar for control based on a distance between users of the electronic device 110 according to various embodiments.
- the electronic device 110, in operation 2201, may output the first sound based on the plurality of audio sources while displaying an execution screen including a plurality of objects 1900a based on the execution of the program.
- the electronic device 110 obtains information on a plurality of body parts of the user U in operation 2203, and in operation 2205, based on first information about a first part (eg, the torso B and the palm P) among the plurality of body parts, may determine whether a specific condition is satisfied. For example, as shown in FIG. 23A, the electronic device 110 may identify the distances D1 and D2 between the user U and the electronic device 110 based on information related to a specific part of the user (eg, size information, shape information, etc.) obtained through the input device 250.
- the specific part of the user may include the torso B, which is the largest part among the user's body parts, as shown in FIG. 23A, or the palm P, as shown in FIG. 23B, but is not limited to these.
- the electronic device 110 may pre-store information about the distances corresponding to the sizes of body parts (eg, torso B and palm P), and identify, from the pre-stored information, the distances D1 and D2 corresponding to the sizes of the body parts detected from images currently obtained using the camera.
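- A minimal sketch, using assumed calibration data, of the pre-stored size-to-distance lookup described above: the user's distance is estimated from the detected pixel size of a body part (torso or palm):
```python
# Assumed calibration table: (detected height in pixels, distance in meters).
SIZE_TO_DISTANCE = {
    "torso": [(400, 1.0), (200, 2.5), (100, 4.5)],
    "palm":  [(120, 1.0), (60, 2.5), (30, 4.5)],
}

def estimate_distance(body_part: str, detected_px: int) -> float:
    # Pick the calibration entry whose size is closest to the detection.
    table = SIZE_TO_DISTANCE[body_part]
    return min(table, key=lambda entry: abs(entry[0] - detected_px))[1]

print(estimate_distance("torso", 210))  # ~2.5 m
print(estimate_distance("palm", 115))   # ~1.0 m
```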
- when the condition is not satisfied (eg, the identified distance is outside the threshold distance), the operation of displaying the avatar may be refrained from.
- the electronic device 110 may identify the distance further based on the user's personal information as at least part of the operation of identifying the distance. For example, body parts (eg, torso B and palm P) may be photographed with different sizes even at the same distance from the electronic device 110. Accordingly, the electronic device 110 may adjust the distance corresponding to the size of the body part (eg, torso B, palm P) based on personal information (eg, gender, age, total height) that affects the size of the body part.
- for example, even when body parts (eg, the torso B and the palm P) of the same photographed size are detected, the electronic device 110 may recognize the identified distance differently (eg, as greater) depending on the personal information.
- the personal information may be input to the electronic device 110, or the electronic device 110 may identify it by itself (eg, by identifying personal information corresponding to a feature identified from an image photographed through a camera).
- when it is determined in operation 2207 that the condition is satisfied (2207-Yes), the electronic device 110 may display an avatar based on the first information in operation 2209.
- when the electronic device 110 determines that the identified distances D1 and D2 are within a threshold distance for controlling the plurality of graphic objects 1900a (eg, 2.5 m, but not limited thereto), the avatars 2300a and 2300b may be displayed on the display, and control authority may be granted to the user.
- the control authority may be authority allowing the user to control graphic objects 1900a. Accordingly, the user may recognize that control authority for the graphic objects 1900a is granted to the user by recognizing the avatars 2300a and 2300b.
- the avatars 2300a and 2300b may be pre-implemented avatars and/or avatars implemented based on body parts of the user photographed by the electronic device 110 .
- the electronic device 110 identifies a first object (eg, the object 2310) among the plurality of objects based on information associated with a second part (eg, the hand (H), the finger (F)) different from the first part (eg, the torso B and the palm P) among the plurality of body parts, and may output the second sound based on controlling the property of a specific audio source among the plurality of audio sources corresponding to the first object (eg, the object 2310) while controlling a property (eg, brightness) of the first object.
- the electronic device 110 may perform an operation of recognizing a user's motion and/or gesture based on a body part (eg, the torso B and the palm P) used for identifying the distance and other body parts (eg, the hand H and the finger F). In other words, after the user's distance is recognized as within the threshold distance, the electronic device 110 may initiate (or perform) an operation of recognizing other body parts (eg, the hands (H) and fingers (F)) of the user U.
- when recognizing that a plurality of users exist within the threshold distance, the electronic device 110 (eg, the processor 210) may grant control authority to the user recognized as having the closest distance.
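- A minimal sketch, assuming the 2.5 m threshold mentioned above, of granting control authority: the avatar is displayed and control is granted only for a user within the threshold distance, choosing the closest user when several are in range:
```python
CONTROL_THRESHOLD_M = 2.5   # threshold distance from the description

def grant_control(user_distances: dict) -> str | None:
    # Keep only users within the threshold distance.
    in_range = {u: d for u, d in user_distances.items() if d <= CONTROL_THRESHOLD_M}
    if not in_range:
        return None                          # refrain from displaying the avatar
    return min(in_range, key=in_range.get)   # closest user gets control authority

print(grant_control({"user_a": 3.1, "user_b": 1.8, "user_c": 2.2}))  # 'user_b'
```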
- FIG. 24 is a flowchart 2400 for explaining an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations illustrated in FIG. 24 may be performed in various orders without being limited to the illustrated order. Also, according to various embodiments, more operations than the operations shown in FIG. 24 or at least one operation less may be performed. Hereinafter, FIG. 24 will be further described with reference to FIG. 25.
- FIG. 25 is a diagram for explaining an example of a screen switching operation of the electronic device 110 according to various embodiments.
- the electronic device 110, in operation 2401, based on the execution of the program, outputs a first sound while displaying an execution screen 2510 including a plurality of objects (eg, the plurality of objects 2511 of FIG. 25), in operation 2403, obtains information on a plurality of body parts of the user U, and in operation 2405, based on first information about a first part (eg, the torso B) among the plurality of body parts, may determine whether a first condition is satisfied (eg, entering within a first threshold distance).
- the electronic device 110 may display the standby screen 2510 when the distance to the user U is outside the first threshold distance (eg, 4.5 m, but not limited thereto).
- the standby screen 2510 may include objects 2511 of various shapes. While displaying the idle screen 2510, the electronic device 110 may output sound having relatively lower energy than the other screens 2520 and 2530 (ie, lower energy per frequency).
- when the first condition is satisfied (2407-Yes), the electronic device 110, in operation 2409, outputs a second sound while displaying a first graphic object corresponding to the user U (eg, the graphic object 2512 of FIG. 25), and in operation 2411, based on the first information about the first part among the plurality of body parts, may determine whether a second condition is satisfied (eg, entering within a second threshold distance). For example, when it is determined that the distance to the user U is within the first threshold distance, the electronic device 110 may display the object 2512 at a position on the standby screen 2510 corresponding to the location of a specific body part (eg, neck) of the user U.
- the electronic device 110 may provide a second sound having a property different from that of the aforementioned first sound while providing the visual directing effect.
- the second sound is set to a sound having relatively higher energy than the first sound (ie, higher energy for at least some frequencies), so that an uplifting atmosphere can be created.
- the power consumption of the electronic device 110 (eg, the processor 210) while providing the idle screen 2510 may be lower than the power consumption of the electronic device 110 while providing the menu screen 2520.
- the electronic device 110 (eg, the processor 210) may control fewer electronic components while providing the idle screen 2510 compared to providing the menu screen 2520 (eg, control the rest of the electronic components in a sleep state). Accordingly, the electronic device 110 is relieved of operational burden while displaying the idle screen 2510 before the menu screen 2520 implemented to provide a predetermined function is provided, while the visual effects and sounds on the idle screen 2510 can make the user U aware that a controllable state can be entered.
- when the second condition is satisfied (2413-Yes), the electronic device 110 (eg, the processor 210), in operation 2415, may output the third sound while displaying at least one third graphic object for controlling the first screen.
- the electronic device 110 may display a menu screen 2520 instead of the standby screen 2510 when it is determined that the distance to the user U is within the second threshold distance.
- the menu screen 2520 may include a menu for selecting various types of information (eg, weather information, news information) and music.
- when specific music is selected, the electronic device 110 may reproduce a plurality of audio sources corresponding to the specific music and provide an execution screen 2530 including a plurality of objects 1900a. A duplicate description of the operation of the electronic device 110 according to the control of the plurality of objects 1900a is omitted.
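- A minimal sketch of the distance-driven screen transitions above; the first threshold (4.5 m) comes from the description, while the second threshold value and the state names are assumptions:
```python
FIRST_THRESHOLD_M = 4.5    # first condition: enter within this distance
SECOND_THRESHOLD_M = 2.5   # second condition (assumed value)

def select_screen(distance_m: float, music_selected: bool) -> str:
    if distance_m > FIRST_THRESHOLD_M:
        return "standby_2510"                # low-energy sound, fewer components active
    if distance_m > SECOND_THRESHOLD_M:
        return "standby_2510+object_2512"    # per-user graphic object, second sound
    return "execution_2530" if music_selected else "menu_2520"

print(select_screen(5.0, False))  # standby_2510
print(select_screen(3.0, False))  # standby_2510+object_2512
print(select_screen(2.0, True))   # execution_2530
```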
- the electronic device 110 may control the properties of the visual content 100a (eg, a graphic object) based on various types of interactions, and may control the properties of the auditory content 100b.
- FIG. 26 is a flowchart 2600 for explaining an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations illustrated in FIG. 26 may be performed in various orders without being limited to the illustrated order. Also, according to various embodiments, more operations than the operations shown in FIG. 26 or at least one operation less may be performed. Hereinafter, FIG. 26 will be further described with reference to FIG. 27.
- FIG. 27 is a diagram for explaining examples of types of user interaction, according to various embodiments.
- the electronic device 110 obtains information about the interaction in operation 2601, and determines whether a first object event has occurred in operation 2603.
- the type of user interaction includes not only motion but also interaction such as voice, and is not limited to the described examples; interactions that can be visually identified from the user (eg, facial expressions, etc.) and interactions that can be audibly identified from the user (eg, hand clapping, etc.) may be further included.
- the electronic device 110 can identify various types of interactions through the input device 250 and identify objects selected by the interactions.
- An operation of identifying selection of the object by the electronic device 110 may be defined as an operation of identifying occurrence of an object event.
- the electronic device 110 (eg, the processor 210) determines, in operation 2605, whether the body part associated with the event is a first part (eg, a type of interaction such as grab or voice), identifies a first characteristic of the first part based on the first information corresponding to the first part, and, based on the first scheme corresponding to the first object and the first characteristic, may control the visual property of the first object as the first property while controlling the audio source set being output.
- when the body part associated with the event is not the first part (2605-No), the electronic device 110 (eg, the processor 210) determines whether the body part associated with the event is a second part (eg, a different type of interaction such as grab or voice), identifies a second characteristic by the second part, and in operation 2615, based on the second scheme corresponding to the first object and the second characteristic, may control the visual property of the first object as the second property while controlling the number of audio source sets being output.
- the electronic device 110 stores the information of [Table 2] in advance, and when an event occurs, compares the currently identified type of interaction (or type of body part) with the previously stored information and may control the properties of the object corresponding to the type. For example, the electronic device 110 controls the position of an object when a grab is identified as shown in (a) of FIG. 27, controls the number of objects when a punch is identified as shown in (b) of FIG. 27, and, as shown in (c) to (d) of FIG. 27, when receiving an utterance including a specific word (eg, “move”, “large”), may control a property corresponding to the specific word (eg, position, size).
- a specific audio effect may be applied according to the type of audio source allocated to the selected graphic object and the property of the controlled graphic object. For example, when the position of the object corresponding to “instrument” is moved to Y by grab, a high-pass filter may be applied. As another example, when the size of an object corresponding to “instrument” increases due to a voice, the output level of the instrument may increase. Accordingly, energy (eg, intensity per frequency) of the auditory content 100b (eg, music) provided by the electronic device 110 may vary.
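- A minimal sketch in the spirit of [Table 2] of dispatching an interaction type to the controlled object property and the applied audio effect; the concrete pairings below are assumptions:
```python
# Interaction type -> (controlled object property, applied audio effect).
DISPATCH = {
    "grab":  ("position", "high_pass_filter"),  # eg, moving on Y applies a high-pass
    "punch": ("number",   "delay"),
    "move":  ("position", "pan"),               # utterance containing "move"
    "large": ("size",     "output_level"),      # utterance containing "large"
}

def on_interaction(kind: str, degree: float):
    prop, effect = DISPATCH[kind]
    return {"controlled_property": prop, "audio_effect": effect, "degree": degree}

print(on_interaction("grab", 0.4))
print(on_interaction("large", 1.2))  # larger object -> higher instrument output level
```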
- FIG. 28 is a flowchart 2800 for explaining an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations illustrated in FIG. 28 may be performed in various orders without being limited to the illustrated order. Also, according to various embodiments, more operations than the operations shown in FIG. 28 or at least one operation less may be performed. Hereinafter, FIG. 28 will be further described with reference to FIG. 29.
- FIG. 29 is a diagram for explaining an example of an operation of controlling visual content 100a and auditory content 100b based on the type of object identified by the electronic device 110 according to various embodiments.
- the electronic device 110 identifies the type of an object (eg, a stick in FIG. 29) associated with a part of the user's body, and in operation 2803, may identify the type of motion (eg, hitting in FIG. 29) performed by the identified object.
- the electronic device 110 may identify an object corresponding to the shape of a specific object identified using the input device 240 (eg, a camera).
- information on the properties of controllable objects for each type of object may be stored in advance. Based on the previously stored information, the electronic device 110 may change the properties of a graphic object selected by the motion of the object. According to the change of the properties of the object, changed visual content 100a may be provided.
- [Table 3] object type / motion type / controllable object properties — eg, stick / sting (hitting) / number, damage, location, etc.; other object types / other actions / other properties.
- the electronic device 110 may control the properties of visual content and/or auditory content corresponding to the type of the object and the type of its motion. For example, the electronic device 110 previously stores information about the type of audio source allocated to the graphic object of [Table 3] and the properties of the graphic object to be controlled, and applies an audio effect based on the information. Based on this, the energy (eg, intensity per frequency) of the auditory content 100b (eg, music) provided by the electronic device 110 may vary.
- FIG. 30 is a flowchart 3000 for describing an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations shown in FIG. 30 are not limited to the shown order and may be performed in various orders. Also, according to various embodiments, more operations than the operations shown in FIG. 30 may be performed, or at least one operation less than that shown in FIG. 30 may be performed. Hereinafter, FIG. 30 will be further described with reference to FIG. 31.
- FIG. 31 is a diagram for explaining an example of an operation of controlling visual content 100a and auditory content 100b based on the type of object identified by the electronic device 110 according to various embodiments.
- the electronic device 110 may identify the occurrence of an effect event based on at least one piece of information. For example, as shown in FIG. 31, the electronic device 110 may identify various types of interactions using the input device 240 in a state where a screen is displayed on the display, as described with reference to [Table 2] and [Table 4].
- the screen may not include graphic objects for controlling at least some of the plurality of audio sources.
- the electronic device 110 determines, in operation 3003, whether the generated effect event is a first event, and if it is the first event (3003-Yes), in operation 3005, may output a first visual effect to the screen based on the first event, and in operation 3007, control a first property of the audio source set corresponding to the first visual effect.
- when the generated effect event is not the first event (3003-No), the electronic device 110, in operation 3009, determines whether the generated effect event is a second event, and if it is the second event (3009-Yes), in operation 3011, outputs a second visual effect to the screen based on the second event, and in operation 3013, may control a second attribute of the audio source set corresponding to the second visual effect. For example, the electronic device 110 displays a first visual effect (eg, a wave effect) based on identifying the user's swing, as shown in (a) of FIG. 31, and may display a second visual effect (eg, a water concentric circle effect) based on identifying the user's punch, as shown in (c) of FIG. 31.
- the electronic device 110 may adjust the level of a visual effect (eg, a wave effect) according to the degree of the detected interaction, as shown in (a) of FIG. 31 .
- the electronic device 110 may apply an audio effect set to be applied when providing the visual effect based on rule information.
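- A minimal sketch, with assumed rule information, of scaling the visual effect level by the degree of the detected interaction and pairing it with the audio effect set for that visual effect:
```python
# Assumed rule information: effect event -> (visual effect, paired audio effect).
RULES = {
    "swing": ("wave_effect",        "flanger"),  # first event -> first effects
    "punch": ("concentric_circles", "echo"),     # second event -> second effects
}

def on_effect_event(event: str, intensity: float):
    visual, audio = RULES[event]
    level = min(1.0, intensity)   # clamp effect level to the interaction degree
    return {"visual": visual, "visual_level": level, "audio": audio}

print(on_effect_event("swing", 0.35))
print(on_effect_event("punch", 2.0))  # clamped to level 1.0
```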
- FIG. 32 is a flowchart 3200 for describing an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations illustrated in FIG. 32 may be performed in various orders without being limited to the illustrated order. Also, according to various embodiments, more operations than the operations shown in FIG. 32 may be performed, or at least one operation less than that shown in FIG. 32 may be performed. Hereinafter, FIG. 32 will be further described with reference to FIG. 33.
- FIG. 33 is a diagram for explaining an example of a switching operation between a user interaction mode and a user non-interaction mode of the electronic device 110 according to various embodiments.
- the electronic device 110, in operation 3201, outputs a first sound based on a plurality of audio sources while displaying an execution screen based on the execution of the program, in operation 3203, obtains information on a plurality of body parts of the user, and in operation 3205, based on first information about a first part (eg, the torso B) among the plurality of body parts, may determine whether a specific condition is satisfied (eg, the distance to the user U is within a threshold distance).
- the electronic device 110 determines whether the condition is satisfied in operation 3207, and if the condition is satisfied (3207-Yes), in operation 3209, may set the mode of the electronic device 110 to the first mode (eg, user interaction mode). For example, referring to (a) of FIG. 33, when the distance to the user U is within the threshold distance, the electronic device 110 may dynamically control the auditory content by controlling the graphic object 1900a based on the user's interaction. Since the operation of the electronic device 110 performed while the mode of the electronic device 110 is the user interaction mode can be performed as described in “6.1 Table of Contents to 6.5 Table of Contents,” duplicate descriptions are omitted.
- when the condition is not satisfied (3207-No), the electronic device 110 (eg, the processor 210), in operation 3211, sets the mode of the electronic device 110 to the second mode (eg, user non-interaction mode), in operation 3213, obtains context information, and in operation 3215, may control the properties of an audio source and/or the screen based on the context information.
- the electronic device 110 may control the auditory content to suit the surrounding environment by controlling the graphic object 1900a based on context information other than the user's interaction.
- the context information may include ambient sound received through a microphone, information about objects in an image obtained using a camera, weather/temperature information received through a server, and the like; it is not limited to these and may include various information that can be obtained from the surroundings of the electronic device 110.
- the electronic device 110 may control the properties of the graphic object according to the degree of context information obtained, as described in [Table 7] below. Accordingly, the electronic device 110 may provide a visual effect corresponding to the property of the graphic object controlled based on the rule information and/or control the property of the audio source.
- the electronic device 110 may further use time information as at least part of an operation of setting a mode of the electronic device 110 .
- the electronic device 110 may determine whether the distance of the user identified as within or outside the threshold distance is maintained for a threshold time period or longer, and set a mode if the distance is maintained for a threshold time period or longer.
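- A minimal sketch, with assumed threshold values, of the mode switching above: the mode flips only after the user's identified distance stays within (or outside) the threshold distance for at least the threshold time:
```python
THRESHOLD_M, THRESHOLD_S = 2.5, 3.0   # assumed threshold distance and time

class ModeSwitcher:
    def __init__(self):
        self.mode = "non_interaction"
        self._since = 0.0       # when the current inside/outside state began
        self._inside = None

    def update(self, distance_m: float, now_s: float) -> str:
        inside = distance_m <= THRESHOLD_M
        if inside != self._inside:            # state flipped: restart the dwell timer
            self._since, self._inside = now_s, inside
        elif now_s - self._since >= THRESHOLD_S:
            self.mode = "interaction" if inside else "non_interaction"
        return self.mode

sw = ModeSwitcher()
print(sw.update(2.0, 0.0), sw.update(2.0, 3.5))  # non_interaction -> interaction
```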
- FIG. 34 is a diagram for explaining an example of an operation of the electronic device 110 according to various embodiments. FIG. 34 will be further described below with reference to FIG. 35.
- FIG. 35 is a diagram for explaining an example of content provided by the electronic device 110 according to various embodiments.
- based on the occurrence of the control event 500, the electronic device 110 may control at least one of at least a part of the graphic element set 3400a, at least a portion of the audio source set 3400b, or the timeline and/or stage 3400c.
- the graphic element set 3400a may include a graphic space (eg, a map 3410a), a main character 3420a that can be placed on the graphic space, and a sub-object 3430a.
- the main character 3420a may be implemented to move on a graphic space (eg, map 3410a) based on a user's input.
- the sub-object 3430a may be a graphic object pre-arranged on a graphic space (eg, map 3410a).
- the plurality of graphic objects may be configured to provide a specific audio source and/or apply an audio effect.
- the sub-object 3430a may be pre-implemented to provide a specific audio source and/or apply an audio effect according to a state.
- the sub-object 3430a performs a function of providing a specific audio source and/or applying an audio effect in an activated state, but may refrain from performing the function in an inactive state.
- the graphic space (eg, map 3410a) may include a plurality of regions (eg, first to ninth regions).
- an audio source set 3400b may be allocated for each of the plurality of regions. For example, sound sources of different configurations (or themes) (eg, intro, verse) may be allocated to different regions.
- the electronic device 110 may provide auditory content 100b (eg, music) by reproducing an audio source set corresponding to the region where the main character 3420a is located.
- when a user input associated with the main character 3420a is received, the electronic device 110 may control the properties of the visual content 100a and/or the properties of the auditory content 100b.
- FIG. 36 is a flowchart 3600 for explaining an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations shown in FIG. 36 are not limited to the order shown and may be performed in various orders. Also, according to various embodiments, more operations than the operations shown in FIG. 36 may be performed, or at least one operation less than that shown in FIG. 36 may be performed. FIG. 36 will be further described below with reference to FIGS. 37 and 38.
- FIG. 38A is a diagram for explaining an example of an operation of providing a visual effect to guide a time section of the electronic device 110 according to various embodiments.
- FIG. 38B is a diagram for explaining an example of an operation of providing a visual effect to guide a time section of the electronic device 110 according to various embodiments.
- FIG. 38C is a diagram for explaining an example of an operation for providing an effect based on a user's input for each time section to the main character 3420a of the electronic device 110 according to various embodiments.
- the electronic device 110, in operation 3601, may output sound based on a plurality of audio sources (eg, the audio source set 3400b of FIGS. 34 and 35) while displaying an execution screen including a main object (eg, the main graphic object 3420a of FIGS. 34 and 35).
- the electronic device 110 identifies a musical unit (eg, beat, bar) based on a plurality of audio sources, may identify a plurality of visual event occurrence times based on the identified musical unit in operation 3605, and may determine a plurality of time intervals based on the plurality of visual event occurrence times in operation 3607. For example, referring to FIG. 37, when identifying 6 bars as a musical unit, the electronic device 110 may identify pre-set bars (eg, the 2/6 bar, 4/6 bar, and 6/6 bar) as event occurrence times.
- the event occurrence time may be understood as a natural musical time when a specific musical effect is applied and/or a specific audio source is output.
- the electronic device 110 may set a plurality of types of time intervals based on the identified event occurrence time. For example, as shown in FIG. 37 , the electronic device 110 may set a first time interval including an event occurrence time and a second time interval other than the first time interval, which is a remaining time interval.
- the first time interval may be set to form a predetermined time interval based on the event occurrence time point.
- the electronic device 110 can improve the quality of listening to music by inducing the user to apply a musical effect to the first time interval while listening to music.
- the electronic device 110 may guide the first time section and the second time section by providing a visual effect on the main character 3420a.
- at least one object may be displayed at a location associated with the main character 3420a to guide the first time section and the second time section.
- for example, a white circle may be provided at an associated position of the main character 3420a to guide the first time interval, and the moment when the white circle becomes the largest may be set to correspond to the first time interval, but is not limited thereto. More specifically, referring to FIG. 38A, the white circle starts small as shown in (a) of FIG. 38A and is displayed large as shown in (b) of FIG. 38A, and the time point at which the white circle becomes the largest may be set to correspond to the first time interval.
- for example, a red circle may be provided to guide the second time interval, and the moment when the red circle becomes the largest may be set to correspond to the second time interval, but is not limited thereto. More specifically, referring to FIG. 38B, the red circle starts small as shown in (a) of FIG. 38B and is displayed large as shown in (b) of FIG. 38B, and the time point at which the red circle becomes the largest may be set to correspond to the second time interval.
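- A minimal sketch, with assumed timing values, of deriving the first and second time intervals from the musical unit: event occurrence times fall on the pre-set bars, and a short window around each event time forms the first time interval:
```python
BAR_S = 2.0              # assumed seconds per bar
EVENT_BARS = (2, 4, 6)   # eg, the 2/6, 4/6, and 6/6 bars of a 6-bar unit
WINDOW_S = 0.3           # first interval extends +/- this around each event time

def interval_type(t_s: float) -> str:
    for bar in EVENT_BARS:
        if abs(t_s - bar * BAR_S) <= WINDOW_S:
            return "first"    # input here triggers the first operation and effect
    return "second"           # input here is refrained from

print(interval_type(4.1))   # 'first'  (near the event time at 4.0 s)
print(interval_type(5.0))   # 'second'
```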
- a motion of the character for guiding the first time section and the second time section may be provided. More specifically, the first time interval and the second time interval may be guided by the motion of the character. For example, the first time interval may be guided by the timing of the character rolling its right foot, and the second time interval may be guided by the timing of the character's hand motion, but embodiments are not limited thereto and may include various embodiments in which the first time interval and the second time interval are guided by the motion of the character.
- the electronic device 110 obtains a user input for the main object in operation 3609, determines in operation 3611 whether the user input is obtained during the first time interval, and in the case of the first time interval (3611-Yes), in operation 3613, controls the operation of the main object (eg, the main graphic object 3420a of FIGS. 34 and 35) as a first operation and outputs a first audio effect; otherwise, a second audio effect may be output while controlling the operation of the main object as a second operation.
- the electronic device 110 may control the main character 3420a to perform a pre-set motion (eg, a foot rolling motion), and the foot rolling motion may be displayed to correspond to the above-described musical unit (eg, beat, measure) of the audio source, but is not limited thereto.
- the electronic device 110 (eg, the processor 210) may output a visual effect when the first input is obtained from the user in the first time interval. For example, as shown in (b) of FIG. 38C, a visual effect may be output around the character, at least one character included in the audio source being played may be displayed, and the main character 3420a may perform a pre-set motion.
- the at least one character may be randomly generated, may include a character identified from an audio source by applying a technology such as NLP, TTS, or voice recognition, and/or may be generated by obtaining an audio recording from a user. Also, as the audio source is reproduced, displayed characters may sequentially disappear. Also, referring to (a) and (b) of FIG. 38C, when the first input or the second input is obtained from the user in the first time interval or the second time interval, a background space may be displayed to the user.
- the background space may be displayed when an interaction is obtained from the user, to indicate that the field of view is secured as a reward for obtaining the interaction from the user, but is not limited thereto.
- the electronic device 110 (eg, the processor 210) may refrain from the above-described operation when the second input is obtained from the user in the second time period. That is, the electronic device 110 may control the main character to maintain a preset motion (eg, a foot rolling motion) and control text not to be displayed, but is not limited to the described example.
- the motion of the main character 3420a can be set in various ways, and an example of motion generation for this will be described with reference to “7.1.2 Table of Contents”.
- FIG. 39 is a flowchart 3900 for describing an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations shown in FIG. 39 are not limited to the order shown and may be performed in various orders. Also, according to various embodiments, more operations than the operations shown in FIG. 39 may be performed, or at least one operation less than that shown in FIG. 39 may be performed. 39 will be further described below with reference to FIGS. 40, 41A, and 41B.
- FIG. 40 is a diagram for explaining an example of an operation of generating a main graphic object of the electronic device 110 according to various embodiments.
- FIG. 41A is a diagram for explaining an example of a motion generating operation of the electronic device 110 according to various embodiments.
- FIG. 41B is a diagram for explaining an example of a motion control operation of the electronic device 110 according to various embodiments.
- the electronic device 110 obtains a main object (eg, the main character 3420a of FIG. 40) in operation 3901 and, in operation 3903, may acquire at least one motion related to the main object (eg, the main character 3420a of FIG. 40).
- the electronic device 110 may create a user's own character.
- the electronic device 110 may provide a pre-implemented character and provide an interface 4011 for editing visual properties (eg, shape for each part, color for each part, etc.) of the character.
- the electronic device 110 may create a main character 3420a by composing a character according to the visual attributes selected through the interface 4011 .
- the electronic device 110 may also create a 3D character corresponding to the user U by using an algorithm such as 3D modeling based on information (eg, facial features) of the user U photographed using a camera. That is, the interactive music listening system 1 can improve the degree of immersion in listening to music by creating an atmosphere as if one is listening to music while exploring a virtual reality space with one's own unique character.
- the electronic device 110 may generate a pre-implemented character motion.
- the electronic device 110 may provide a pre-implemented motion provided based on a user's input.
- the pre-implemented motion may be produced in a form of connecting at least some of a plurality of obtained motions, but is not limited to the described example.
- the pre-implemented motion may include a first type of motion provided when the main character 3420a is selected and a second type of motion provided when the surroundings of the main character 3420a are selected. Accordingly, when the main character 3420a is selected, the electronic device 110 may provide the first type of motion.
- the electronic device 110 may generate a user's own character motion and a musical effect corresponding to the motion.
- the electronic device 110 displays a plurality of joints 4121 constituting the main character 3420a as at least part of an operation for generating a character motion, and may generate a character motion of the main character 3420a based on a user's input for moving a joint selected from among the joints 4121 (eg, a specific joint 4122).
- the electronic device 110 may identify movement of a selected joint based on a user's input during a specified time period and generate a character motion corresponding to the movement of a joint during a specified time period.
- the electronic device 110 may provide an interface 4123 for selecting an audio effect so that an audio effect corresponding to the generated character motion is selected, and may store the generated motion and the selected audio effect in a form associated with each other. Then, when the corresponding motion is performed, the electronic device 110 may apply the audio effect associated with the motion to the auditory content 100b (eg, an audio source). Meanwhile, the electronic device 110 may generate a motion for each joint, and then provide a corresponding motion through the main character 3420a based on the joint selected by the user.
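- A minimal sketch, with assumed structures, of recording a joint motion over a specified time period and storing it associated with the audio effect selected via interface 4123, so that replaying the motion applies the effect:
```python
motions: dict[str, dict] = {}   # stored motion-effect associations

def record_motion(name: str, joint_id: int, samples: list[tuple[float, float]],
                  audio_effect: str) -> None:
    # samples: (x, y) positions of the selected joint during the capture period
    motions[name] = {"joint": joint_id, "path": samples, "effect": audio_effect}

def perform(name: str) -> str:
    m = motions[name]
    # ... animate joint m["joint"] along m["path"] on the main character ...
    return m["effect"]   # audio effect to apply to the auditory content

record_motion("wave", joint_id=4122, samples=[(0, 0), (5, 8), (0, 12)],
              audio_effect="reverb")
print(perform("wave"))   # 'reverb' applied while the motion plays
```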
- the electronic device 110 may provide a separate graphic object 4100b for controlling the motion of the main character 3420a.
- the graphic object 4100b may be implemented to provide a function for selecting the type of motion of the main character 3420a. For example, as shown in FIG. 41B, when the graphic object 4100b is selected (eg, touched) by the user, at least one main region (4111b, 4112b, 4113b, 4114b) may be displayed for selecting the type of motion (eg, dance motion) to be performed by the main character 3420a and/or a sub-motion associated with the selected type of motion (eg, a motion and/or a dance motion of a part of the main character 3420a).
- the part of the main character 3420a corresponding to the sub motion may include various types of parts, such as the upper body, lower body, arms, and legs of the main character 3420a. Referring to FIG. 41B, when a specific main area 4114b is selected by moving the touch on the graphic object 4100b while it is maintained, the electronic device 110 may control the main character 3420a to perform a motion (eg, a dance motion) corresponding to the selected main area 4114b.
- when a specific main area 4114b is selected and the touch is moved to another sub area 4120b while maintained, the electronic device 110 maintains the motion of the main character 3420a as the selected motion, and may control only the motion of the part (eg, upper body) of the main character 3420a corresponding to the sub area 4120b with a specific motion.
- the electronic device 110 may control the main character 3420a by continuously maintaining the set motion, or may stop providing the motion, without being limited to the described example.
- the user can have a continuous experience of controlling the motion of the main character 3420a.
- the graphic object 4100b has been described as providing a function for controlling the movement of the main character 3420a, but it may similarly be implemented to provide a function of selecting the type of audio source based on the main areas 4111b, 4112b, 4113b, and 4114b and controlling the properties of the audio source of the selected type based on the sub-area 4120b.
- the electronic device 110 may provide user-customized character motion and/or audio effects.
- the electronic device 110 may identify user-specific musical information (eg, a musical pattern, musical taste) based on information, collected during a predetermined calibration period, on how the user controls (eg, touches) a graphic object, and may provide corresponding motion and/or audio effects.
- the electronic device 110 may use pre-stored information about the musical information corresponding to each piece of information controlling a user's graphic object, and/or an artificial intelligence model implemented to output musical information in response to receiving information controlling a graphic object.
- the electronic device 110 guides the user to touch the graphic object while playing music while providing a predetermined graphic object, and may identify the user's musical taste by using the time at which the user's touch is received and the pre-stored information. For example, when the user touches a graphic object with a specific beat, the electronic device 110 may identify the user's musical taste as liking that beat. Based on the identified musical taste, when the main character 3420a is selected, the electronic device 110 may provide, together with a specific motion (eg, a rap motion), sounds that enhance the sense of beat (eg, drum sound, beat sound) as a musical effect.
- the electronic device 110, in operation 3905, outputs a sound based on a plurality of audio sources while displaying an execution screen including the main object (eg, the main character 3420a of FIG. 41), and when a user input for the main object is received, may output a specific audio effect while controlling the motion of the main character 3420a to a specific motion among the at least one motion.
- when receiving a user input for the main character 3420a, the electronic device 110 may apply a pre-set motion and/or control the main character to perform an action with the created motion, and may control an audio effect corresponding to the action to be output.
- when the electronic device 110 receives a user input in an area around the main character 3420a (eg, an area separated by a preset distance from the main character 3420a), the electronic device 110 may control the main character to perform an action with a preset motion, and may control an audio effect corresponding to the action to be output.
- the corresponding motion may be a motion in which the body part of the main character 3420a moves away from the location where the user's input is received.
- the electronic device 110 may select a joint at a location closest to the location where the user's input is received, and provide a motion associated with the selected joint.
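- A minimal sketch of the nearest-joint selection just described is shown below; the joint names and coordinates are hypothetical, and the text specifies only that the joint closest to the input location is selected.

```python
import math

def nearest_joint(joints, touch_pos):
    # joints: mapping of joint name -> (x, y) screen coordinates.
    tx, ty = touch_pos
    return min(joints, key=lambda j: math.hypot(joints[j][0] - tx,
                                                joints[j][1] - ty))

joints = {"head": (100, 40), "left_hand": (60, 120), "right_hand": (140, 120)}
print(nearest_joint(joints, (150, 110)))   # -> "right_hand"
```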
- FIG. 42 is a flowchart 4200 for describing an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations illustrated in FIG. 42 may be performed in various orders without being limited to the illustrated order. Also, according to various embodiments, more operations than the operations shown in FIG. 42, or at least one fewer operation, may be performed. FIG. 42 will be further described below with reference to FIG. 43.
- FIG. 43 is a diagram for explaining an example of an operation of the electronic device 110 providing a motion based on a musical unit of auditory content, according to various embodiments.
- According to various embodiments, in operation 4201, the electronic device 110 identifies a musical unit (e.g., beat, measure) based on a plurality of audio sources, and in operation 4203, may provide a main object (e.g., the main character 3420a) including at least one joint (e.g., the joints 4310 and 4320 of FIG. 43) whose number corresponds to the musical unit (e.g., beat, measure).
- In other words, the electronic device 110 may set the number of joints of the main character 3420a based on a musical unit (e.g., beat, measure) of the currently provided audio source set (or the plurality of audio sources).
- For example, the electronic device 110 may set the number of joints of the main character 3420a to 1 for a 1/2 time signature, and to 8 for an 8/8 time signature.
- the described example is merely an example, and different numbers of joints may be set for each beat.
- Also, the positions of the joints formed on the main character 3420a may differ according to the number of joints.
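- The sketch below illustrates the rule above, one joint per beat of the bar, together with one possible joint layout; the clamping range and the even spacing along the body axis are assumptions added for the example.

```python
def joint_count(beats_in_bar: int) -> int:
    # 1/2 time -> 1 joint, 8/8 time -> 8 joints (the clamp is an assumption).
    return max(1, min(beats_in_bar, 8))

def joint_positions(count: int, height: float = 1.0):
    # Spread the joints evenly along the character's vertical axis.
    return [(0.0, height * (i + 1) / (count + 1)) for i in range(count)]

print(joint_count(1), joint_positions(joint_count(1)))   # 1 joint at mid-height
print(joint_count(8), len(joint_positions(8)))           # 8 joints
```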
- According to various embodiments, the electronic device 110 may allocate audio source sets of different configurations (e.g., an intro 4301 and a verse 4302) to respective regions, and the musical unit may be changed according to the location where the main character 3420a is positioned; however, without being limited to the described example, the musical unit (e.g., beat, measure) may also be changed according to the passage of the timeline.
- According to various embodiments, the electronic device 110 obtains a user input for a specific joint in operation 4205 and, in operation 4207, may output a specific audio effect while controlling the operation of the main object to a specific operation based on the user input. For example, as at least part of the operation of controlling the operation of the main object to a specific motion, when receiving a user input for the main character 3420a, the electronic device 110 may select a specific joint and control the motion of the main character 3420a with a motion corresponding to the selected joint. Referring to FIG. 43, the selected joint may differ according to the number of joints implemented on the main character 3420a. Accordingly, the motions applied to the main character 3420a may differ; as a result, different character motions may be provided for each musical unit, audio effects corresponding to the different motions may be applied, and/or different audio sources may be output.
- FIG. 44 is a flowchart 4400 for describing an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations illustrated in FIG. 44 may be performed in various orders without being limited to the illustrated order. Also, according to various embodiments, more operations than the operations shown in FIG. 44, or at least one fewer operation, may be performed. FIG. 44 will be further described below with reference to FIG. 45.
- FIG. 45 is a diagram for explaining an example of the electronic device 110 activating a sub-object 3430a based on a position of the main character 3420a, according to various embodiments.
- According to various embodiments, in operation 4401, the electronic device 110 outputs a first sound based on a plurality of audio sources and provides a first object (e.g., the main character 3420a) to the graphic space, and in operation 4403, may receive a user's input for moving the first object (e.g., the main character 3420a).
- According to various embodiments, in operation 4405, the electronic device 110 determines whether a first sub-object (e.g., the structure 3430a of FIG. 45) satisfying a distance condition is identified; when the first sub-object (e.g., the structure 3430a of FIG. 45) is identified (4405-Yes), in operation 4407, a second sound different from the first sound is output based on a first scheme, and when the first sub-object (e.g., the structure 3430a of FIG. 45) is not identified (4405-No), in operation 4409, a third sound different from the first sound is output based on a second scheme.
- For example, based on comparing the position of the main character 3420a (e.g., coordinates in the graphic space) with the position of the sub-object 3430a (e.g., coordinates in the graphic space), the electronic device 110 may identify a distance between the main character 3420a and the sub-object 3430a.
- the electronic device 110 controls the state of the sub-object 3430a to an inactive state when the distance is greater than the threshold value, and controls the state of the sub-object 3430a to an active state when the distance is less than the threshold value.
- When the sub-object 3430a is in the active state, the electronic device 110 may apply an audio effect set to be provided by the sub-object 3430a and/or reproduce an audio source.
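- A minimal sketch of this distance-based activation follows, assuming a dict-based object model and an arbitrary threshold value (neither is specified in the text):

```python
import math

THRESHOLD = 3.0   # activation distance in graphic-space units (assumed)

def update_sub_object(main_pos, sub_pos, sub_object):
    distance = math.dist(main_pos, sub_pos)
    sub_object["active"] = distance < THRESHOLD
    if sub_object["active"]:
        # Active: start the audio source / apply the effect tied to it.
        return f"play {sub_object['audio_source']}"
    return "inactive"

sub = {"active": False, "audio_source": "structure_3430a_loop"}
print(update_sub_object((0.0, 0.0), (2.0, 1.5), sub))   # distance 2.5 -> play
```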
- FIG. 46 is a flowchart 4600 for explaining an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations shown in FIG. 46 may be performed in various orders without being limited to the order shown. Also, according to various embodiments, more operations than the operations shown in FIG. 46, or at least one fewer operation, may be performed. FIG. 46 will be further described below with reference to FIG. 47.
- FIG. 47 is a diagram for explaining an example of activating audio source playback according to sub-object acquisition of the electronic device 110 according to various embodiments.
- According to various embodiments, in operation 4601, the electronic device 110 outputs a first sound based on a plurality of audio sources and provides a first object (e.g., the main character 3420a of FIG. 47) in the graphic space, and in operation 4603, may receive a user's input for moving the first object (e.g., the main character 3420a of FIG. 47).
- According to various embodiments, in operation 4605, the electronic device 110 obtains some sub-objects (e.g., the sub-object 4712 of FIG. 47) from among a plurality of sub-objects based on the movement of the first object; in operation 4607, outputs a second sound based on outputting at least one audio source selected from among the plurality of audio sources; and in operation 4609, when a user input for the first object (e.g., the main character 3420a of FIG. 47) is received while the second sound is being output, may output a third sound by changing at least some properties of the selected at least one audio source.
- the electronic device 110 may provide sub-objects 4712 obtainable by the main character 3420a in a graphic space.
- the audio effect and/or audio source assigned to each of the sub-objects may be applied and/or reproduced when obtained by the main character 3420a.
- When the main character 3420a moves, the electronic device 110 may compare the location of the main character 3420a with the location of the sub-object 4712, and may perform an operation of selecting the sub-object 4712 at a position corresponding to the position of the main character 3420a.
- The electronic device 110 may provide, on the execution screen, interfaces 4711 and 4721 for controlling the reproduction of the audio sources corresponding to the acquired sub-object 4712, and may play the audio sources selected for reproduction based on the user's input through the interfaces.
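- One way to picture the acquisition mechanic above is as layers joining a mix: each collected sub-object unlocks the audio source assigned to it. The class below is an illustrative sketch; the unlock table and names are hypothetical.

```python
class LayeredPlayer:
    def __init__(self, base_sources):
        self.active_layers = list(base_sources)

    def acquire(self, sub_object_id, unlock_table):
        # The audio source assigned to the sub-object joins the mix.
        source = unlock_table.get(sub_object_id)
        if source and source not in self.active_layers:
            self.active_layers.append(source)
        return self.active_layers

player = LayeredPlayer(["melody"])
print(player.acquire(4712, {4712: "drum_loop"}))   # ['melody', 'drum_loop']
```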
- FIG. 48 is a flowchart 4800 for describing an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations illustrated in FIG. 48 may be performed in various orders without being limited to the illustrated order. Also, according to various embodiments, more operations than the operations shown in FIG. 48, or at least one fewer operation, may be performed. FIG. 48 will be further described below with reference to FIG. 49.
- FIG. 49 is a diagram for explaining an example of movement of the main character according to reproduction of auditory content, and reverse reproduction of auditory content according to movement of the main character, of the electronic device 110 according to various embodiments of the present disclosure.
- According to various embodiments, in operation 4801, the electronic device 110 (e.g., the processor 210) provides a first object (e.g., the main character 3420a) to the graphic space, and in operation 4803, may output a first sound by reproducing at least a first part of a plurality of sound sources.
- For example, the electronic device 110 may reproduce only an audio source corresponding to the area where the main character 3420a is currently located (e.g., a first theme area), that is, the audio source assigned to that area, to provide a sound of a specific theme (e.g., a first theme sound).
- According to various embodiments, in operation 4805, the electronic device 110 determines whether a user input for movement is received. If the user input is received (4805-Yes), in operation 4807, the electronic device 110 outputs a second sound by reproducing, together with the first part (e.g., the first theme sound) of the plurality of sound sources, at least a second part (e.g., an audio source set to be output when the main character 3420a moves) of the plurality of sound sources. If no user input is received (4805-No), in operation 4809, the output of the first sound of the first theme (e.g., the first theme sound) is maintained.
- In this case, the specific audio source provided as the main character 3420a moves may be different from the audio source corresponding to the region where the main character 3420a is located. For example, when the main character 3420a moves, the specific audio source may be an audio source corresponding to a drum.
- the electronic device 110 may stop playing the specific audio source.
- the electronic device 110 may change to a different type of audio source and perform playback.
- For example, the audio source corresponding to the area where the main character 3420a is located may be a melody, and the audio source provided when the main character 3420a moves may be a drum; the converse may also be implemented.
- the main character 3420a may automatically move to a random location in a state where a user's input is not received.
- In this case, the electronic device 110 may reproduce an audio source different from the one used when the main character 3420a is moved by a user's input, and/or may reproduce the same audio source with a specific audio effect applied.
- According to various embodiments, the electronic device 110 may identify and reproduce a specific audio source corresponding to the location of the main character 3420a in the graphic space according to the movement of the main character 3420a.
- In other words, the location of the main character 3420a in the graphic space and the playback time of the specific audio source may be mapped to each other. Accordingly, referring to (b) of FIG. 49, when the specific audio source is played backward and its playback time is reversed, the electronic device 110 may also reverse the position of the main character 3420a.
- To this end, the electronic device 110 stores in advance information in which the position of the main character 3420a in the graphic space and the playback time of the specific audio source are mapped to each other; based on the previously stored information, when the position of the main character 3420a is reversed, the playback time may be set to the playback time of the specific audio source corresponding to the moved position.
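- A sketch of such a bidirectional mapping is shown below, using linear interpolation between stored (position, time) pairs; the sample values and the interpolation scheme are assumptions, as the text only states that position and playback time are mapped to each other.

```python
from bisect import bisect_left

# Hypothetical precomputed mapping: character x-position <-> playback time (s).
positions = [0.0, 10.0, 20.0, 30.0, 40.0]
times     = [0.0,  5.0, 10.0, 15.0, 20.0]

def time_for_position(x):
    # Forward lookup: moving the character scrubs the audio.
    i = max(1, min(bisect_left(positions, x), len(positions) - 1))
    x0, x1, t0, t1 = positions[i-1], positions[i], times[i-1], times[i]
    return t0 + (t1 - t0) * (x - x0) / (x1 - x0)

def position_for_time(t):
    # Inverse lookup: reverse playback moves the character backward.
    i = max(1, min(bisect_left(times, t), len(times) - 1))
    t0, t1, x0, x1 = times[i-1], times[i], positions[i-1], positions[i]
    return x0 + (x1 - x0) * (t - t0) / (t1 - t0)

print(time_for_position(25.0))   # -> 12.5
print(position_for_time(12.5))   # -> 25.0
```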
- FIG. 50 is a flowchart 5000 for describing an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations illustrated in FIG. 50 may be performed in various orders without being limited to the illustrated order. Also, according to various embodiments, more operations than the operations shown in FIG. 50 may be performed, or at least one operation less than that shown in FIG. 50 may be performed.
- According to various embodiments, the electronic device 110 selects a specific audio effect from among a plurality of audio effects in operation 5001, and in operation 5003, while applying the selected audio effect, may output a second sound by reproducing at least a second part of the plurality of sound sources together with the at least first part.
- the electronic device 110 may select and apply a specific audio effect whenever the main character 3420a moves.
- the electronic device 110 may select a specific audio effect from among a plurality of audio effects whenever the main character 3420a moves, and apply the selected audio effect to playback of the audio source.
- FIG. 51 is a flowchart 5100 for describing an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations shown in FIG. 51 are not limited to the shown order and may be performed in various orders. Also, according to various embodiments, more operations than the operations shown in FIG. 51 or at least one operation less may be performed.
- According to various embodiments, in operation 5101, the electronic device 110 (e.g., the processor 210) provides a first object (e.g., the main character 3420a) to a graphic space, and in operation 5103, may output a first sound of a first theme corresponding to a first area, among a plurality of areas, in which the first object (e.g., the main character 3420a) is located.
- According to various embodiments, in operation 5105, the electronic device 110 determines whether a user input for movement is received. If the user input is received (5105-Yes), in operation 5107, a first sound source of a specific type is selected from among a plurality of sound sources of the specific type, and in operation 5109, a second sound is output by additionally outputting the first sound source of the specific type while outputting the first sound of the first theme. If the user input is not received (5105-No), in operation 5111, the output of the first sound of the first theme may be maintained.
- FIG. 52 is a flowchart 5200 for describing an example of an operation of the electronic device 110 and the server 120 according to various embodiments. According to various embodiments, the operations shown in FIG. 52 are not limited to the order shown and may be performed in various orders. Also, according to various embodiments, more operations than the operations shown in FIG. 52, or at least one fewer operation, may be performed. FIG. 52 will be further described below with reference to FIG. 53.
- FIG. 53 is a diagram for explaining an example of a multi-play scenario of the electronic device 110 according to various embodiments.
- According to various embodiments, the server 120 may identify a connection between a plurality of electronic devices (e.g., a first electronic device 5200a and a second electronic device 5200b).
- According to various embodiments, the server 120 may generate a main graphic object (e.g., the main characters 5311 and 5312 of FIG. 53) for each of the plurality of electronic devices (e.g., the first electronic device 5200a and the second electronic device 5200b) and provide it to the graphic space.
- According to various embodiments, in operations 5207 and 5209, the server 120 may receive information from the plurality of electronic devices (e.g., the first electronic device 5200a and the second electronic device 5200b).
- the received information may include information for controlling the position of the main characters 5311 and 5312 (eg, user input information).
- the server 120 may move the positions of the main characters 5311 and 5312 and store information on the moved positions.
- the received information may directly include information about the location of the main characters 5311 and 5312.
- According to various embodiments, in operation 5211, the server 120 determines, based on the received information, whether a condition related to the plurality of main graphic objects (e.g., the main characters 5311 and 5312 of FIG. 53) is satisfied, and may then perform an operation of providing a first sound to the plurality of electronic devices (e.g., the first electronic device 5200a and the second electronic device 5200b) in operations 5213 and 5215.
- For example, when the main characters 5311 and 5312 are positioned adjacent to each other (e.g., within a pre-set distance), the server 120 determines that the condition associated with the main characters 5311 and 5312 is satisfied, and may perform a function of providing each of the plurality of electronic devices (e.g., the first electronic device 5200a and the second electronic device 5200b) with the auditory content of the other electronic device.
- For example, the server 120 may provide one electronic device (e.g., the first electronic device 5200a) corresponding to one character (e.g., the first main character 5311) with an interface 5313 for enjoying the auditory content provided to another electronic device (e.g., the second electronic device 5200b) corresponding to another character (e.g., the second main character 5312).
- the server 120 provides the auditory content to the first electronic device 5200a, and allows the playback time of the auditory content to be adjusted.
- According to various embodiments, when the server 120 determines that the condition associated with the main characters 5311 and 5312 is satisfied, it may provide a function of playing together. As at least part of the operation of determining that the condition associated with the main characters 5311 and 5312 is satisfied, the server 120 may determine that the main characters 5311 and 5312 are located adjacent to each other (e.g., at positions within a preset distance). In addition, the server 120 may determine the similarity of the movement paths, in the graphic space, of the main characters 5311 and 5312 determined to be adjacent to each other, and may judge that the above condition is satisfied when the similarity is equal to or greater than a preset value; however, the determination is not limited to the described examples.
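- A hedged sketch of that two-part check, adjacency plus path similarity, follows; the similarity metric (fraction of close sample pairs) and both thresholds are illustrative assumptions, since the text does not specify how similarity is computed.

```python
import math

def play_together_condition(path_a, path_b, dist_thresh=2.0, sim_thresh=0.8):
    # Paths: equal-length lists of (x, y) samples for each main character.
    if not path_a or len(path_a) != len(path_b):
        return False
    close = sum(math.dist(a, b) < dist_thresh for a, b in zip(path_a, path_b))
    adjacent = math.dist(path_a[-1], path_b[-1]) < dist_thresh
    return adjacent and close / len(path_a) >= sim_thresh

a = [(0, 0), (1, 0), (2, 0), (3, 0)]
b = [(0, 1), (1, 1), (2, 1), (3, 1)]
print(play_together_condition(a, b))   # True: adjacent, parallel paths
```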
- According to various embodiments, the server 120 may create a party for an ensemble of the plurality of electronic devices (e.g., the first electronic device 5200a and the second electronic device 5200b) corresponding to the main characters 5311 and 5312. While providing the same audio source to the plurality of electronic devices (e.g., the first electronic device 5200a and the second electronic device 5200b), the server 120 may reproduce the audio sources 5321 and 5322 corresponding to the sub-objects obtained by each of the first electronic device 5200a and the second electronic device 5200b. In other words, the plurality of electronic devices (e.g., the first electronic device 5200a and the second electronic device 5200b) included in the party play the same audio source, and the audio sources 5321 and 5322 corresponding to the obtained sub-objects can be reproduced as described above.
- FIG. 54 is a diagram for explaining an example of a player (or application) of the electronic device 110 including a source pool, an input pool, and a plurality of transports, according to various embodiments. FIG. 54 will be further described below with reference to FIGS. 55A and 55B.
- FIG. 55A is a diagram for explaining an example of an operation of one transport, according to various embodiments.
- FIG. 55B is a diagram for explaining an example of an operation of one transport, according to various embodiments.
- the interactive music listening system 1 may include a player 5410, a source pool 5420, and an input pool 5430.
- the player 5410 refers to the application 251 described in FIG. 2
- the source pool 5420 and the input pool 5430 may be information stored in the database 252 described in FIG. 2 .
- According to various embodiments, the source pool 5420 may be implemented to include the types of data shown in [Table 8] below. As described in [Table 8], since the source pool 5420 is implemented to include information classified into visual sets and audio sets, the player 5410 may obtain, based on the transport 5400, at least some of the visual set information and at least some of the audio set information.
- the input pool 5430 may refer to various types of information that triggers the control event 500, such as user interaction, context information, and information received from the server 120.
- According to various embodiments, the plurality of transports 5400 are a kind of playback module that reproduces information to provide the visual content 100a and the auditory content 100b, and each transport 5400 may be implemented in the form of a program, computer code, API, and/or function set to provide certain functions when executed.
- When a transport 5400 is executed, the processor 210 may perform at least one operation to provide visual and/or auditory content.
- the operation of the transport 5400 may be understood as an operation performed by the processor 210 when the transport 5400 is executed.
- According to various embodiments, the plurality of transports 5400 may be switched between one another. For example, when a specific condition is satisfied while a function is provided based on the execution of one transport (e.g., a control object 5712 described later is selected), another transport is entered; when the specific condition is no longer satisfied (e.g., the control object 5712 is deselected), the one transport may be returned to, though the switching is not limited to the described example.
- As the transport is switched in this way, the visual content and auditory content provided by the electronic device 110 may also be changed.
- the plurality of transports 5400 may include a basic transport and an event transport, which will be described in detail later.
- For example, as shown in FIG. 54, after entering an event transport from a basic transport, another sub-event transport may be entered from the event transport.
- the condition for each transport entering another transport may be set identically and/or differently.
- According to various embodiments, a transport 5400 may include a sync module 5521, an interaction module 5523, and an output module 5525.
- the sync module 5521 may be a module that generates visual content 100a and auditory content 100b based on audio information and graphic information.
- According to various embodiments, the sync module 5521 may include an audio set acquisition module 5521a for acquiring at least a portion of the audio set 5511 from the source pool 5420, a visual set acquisition module 5521b for acquiring at least a portion of the visual set, and an audio set-visual set sync module 5521c for generating the auditory content 100b based on at least a part of the acquired audio set and the visual content 100a based on at least a part of the acquired visual set, combining the generated contents, and outputting them through the output module 5525.
- According to various embodiments, the interaction module 5523 may be a module for controlling the information that the sync module 5521 acquires from the source pool.
- For example, the interaction module 5523 may include an input identification module 5523a for acquiring interaction information, and a module 5523b for controlling the sync module 5521 to acquire specific graphic information and specific audio information based on rule information corresponding to the interaction information (i.e., by referring to the rule information).
- the interaction module 5523 may simply be replaced with rule information, and in this case, each transport may be implemented to obtain specific graphic information and specific audio information based on specific rule information.
- the output module 5525 may be a module that transfers each content to an output device to be provided to a user.
- For example, a first transport may obtain first audio information and first graphic information from the source pool 5420 using the sync module 5521, generate the visual content 100a and the auditory content 100b based on them, and output the contents through the output module 5525.
- A second transport may control the sync module 5521 to acquire audio information and graphic information different from those of the first transport, based on the information about the interaction identified through the interaction module 5523, and, based on these, may generate the visual content 100a and the auditory content 100b and output them through the output module 5525.
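- The toy class below sketches this transport structure: a rule (or an identified interaction) selects which (graphic, audio) pair the sync module pulls from the source pool, and the output module is reduced to a returned string. All names and the pool layout are illustrative assumptions.

```python
class Transport:
    def __init__(self, name, rule):
        self.name, self.rule = name, rule   # rule: key into the source pool

    def tick(self, source_pool, interaction=None):
        # Interaction module: an interaction may redirect which sets we fetch.
        key = self.rule if interaction is None else interaction
        visual, audio = source_pool[key]    # sync module: fetch both sets
        # Output module: combine and "output" the synced contents.
        return f"[{self.name}] show {visual} + play {audio}"

source_pool = {
    "basic": ("running_scene", "main_audio_set"),
    "event": ("planet_scene", "sub_audio_set"),
}
basic = Transport("basic", "basic")
print(basic.tick(source_pool))               # basic transport output
print(basic.tick(source_pool, "event"))      # an interaction switches the sets
```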
- According to various embodiments, the electronic device 110 initiates an event transport-based music listening operation according to a user's interaction while performing a basic transport-based music listening operation, thereby providing a function of controlling the auditory content 100b together with a rapid conversion of the visual content 100a. This sudden change can encourage a greater degree of immersion in interactive music appreciation.
- FIG. 56 is a flowchart 5600 for explaining an example of operations of the electronic device 110 and the server 120 according to various embodiments. According to various embodiments, the operations illustrated in FIG. 56 may be performed in various orders without being limited to the illustrated order. Also, according to various embodiments, more operations than the operations shown in FIG. 56, or at least one fewer operation, may be performed. FIG. 56 will be further described below with reference to FIGS. 57A and 57B.
- FIG. 57A is a diagram for explaining an example of a switching operation between a basic transport and an event transport of the electronic device 110 according to various embodiments.
- FIG. 57B is a diagram for explaining an example of an operation according to movement of a main character on visual content provided based on an event transport of the electronic device 110 according to various embodiments.
- FIG. 57C is a diagram for explaining another example of a conversion operation between a basic transport and an event transport of the electronic device 110 according to various embodiments.
- the electronic device 110 may execute a program in operation 5601 and determine whether an interaction is received in operation 5603.
- the plurality of transports 5400 described above may include a basic transport and an event transport.
- According to various embodiments, the electronic device 110 may provide visual and auditory content based on the basic transport when an interaction is not received, and may provide visual and auditory content based on the event transport while an interaction is received.
- visual content (eg, a first execution screen 5701) provided based on the basic transport includes a main character 5711, a control object 5712, and other additional objects 5713.
- the electronic device 110 provides the visual content 5701 based on the basic transport, and reproduces at least one audio source (eg, a main audio source set) to provide sound (eg, music) to the user.
- the electronic device 110 controls the main character 5711 to perform a running motion along the additional object 5713, thereby improving the sense of immersion in listening to music.
- the electronic device 110 may control the main character 5711 to perform a dance motion on a planet-shaped graphic object 5713.
- the graphic object 5713 may be implemented to provide a function of controlling the motion of the main character 5711 .
- visual content (eg, a second execution screen 5703) provided based on event transport includes a main character 5711, a control object 5712, and other additional objects 5731.
- The electronic device 110 may provide the visual content 5703 based on the event transport and reproduce at least one audio source (e.g., a sub audio source set) to provide sound (e.g., music) to the user.
- The visual content 5703 has a different atmosphere from that of the visual content 5701, and a graphic space in which the main character 5711 can move may be provided.
- the graphic space may be provided by the additional object 5731. In this case, the provided sound may be different from the sound provided based on the basic transport.
- According to various embodiments, when an interaction is not received (5603-No), in operation 5605, the electronic device 110 (e.g., the processor 210) may display the first execution screen including the first object (including the main object and the control object) while the first audio is played; in this case, the state of the control object (e.g., the control object 5712) may be a first state (e.g., an inactive state), and the main character 5711 corresponds to the main object.
- According to various embodiments, when the electronic device 110 determines that an interaction (e.g., an input to the control object 5712) is received (5603-Yes), the state of the control object (e.g., the control object 5712) is changed to a second state (e.g., an active state), the second execution screen including the main object (e.g., the main character 5711) is displayed while the second audio is reproduced, and in operation 5613, the position of the main object (e.g., the main character 5711) is moved based on the control object (e.g., the control object 5712); the property of the second audio may be controlled based on a time associated with the time of entering the second execution screen 5703.
- For example, when an input to the control object 5712 is received while the first execution screen 5701 is provided based on the basic transport, the electronic device 110 switches from the basic transport to an event transport and, while providing the second execution screen 5703 based on the event transport, may output sound based on an audio source (e.g., a sub audio source set).
- the electronic device 110 can improve the degree of immersion in listening to music by switching from the first execution screen 5701 to the second execution screen 5703 at the moment the control object 5712 is touched.
- According to various embodiments, properties of an audio source may be controlled. For example, the electronic device 110 may control properties of an audio source (e.g., a sub audio source set) based on the position of the main character 5711.
- For example, the electronic device 110 moves the main character 5711 on the additional object 5731 according to the movement of the control object 5712 by the user's input, and based on the position 5733 of the main character 5711 (e.g., its x-coordinate and y-coordinate), may additionally output a specific audio source and/or control properties of the audio source (e.g., a sub audio source set).
- According to various embodiments, the electronic device 110 may additionally output a specific audio source, or control a property of an audio source (e.g., a sub audio source set), when the position 5733 (e.g., x-coordinate, y-coordinate) of the main character 5711 in the graphic space satisfies a specific condition.
- The specific condition may include, for example, obtaining a specific sub-object 5732 by reaching the position of the specific sub-object 5732 disposed in the graphic space, or reaching the position of a specific area.
- According to various embodiments, the electronic device 110 may control the properties of an audio source (e.g., a sub audio source set) and/or apply an audio effect to the audio source (e.g., a sub audio source set) based on the duration dt of the user's touch on the control object 5712. For example, the electronic device 110 may reduce the energy of the output sound (e.g., reduce the volume) when the time dt for which the touch is maintained exceeds a threshold time.
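- A minimal sketch of that touch-duration rule, assuming a linear fade once dt passes the threshold (the fade shape, threshold, and rate are not given in the text):

```python
THRESHOLD_S = 2.0    # assumed threshold for the touch-hold time dt
FADE_RATE   = 0.25   # assumed volume reduction per second past the threshold

def volume_for_touch(dt: float, base_volume: float = 1.0) -> float:
    # Below the threshold, the sound energy is unchanged.
    if dt <= THRESHOLD_S:
        return base_volume
    # Past the threshold, reduce the energy (here: a linear volume fade).
    return max(0.0, base_volume - FADE_RATE * (dt - THRESHOLD_S))

for dt in (1.0, 2.0, 3.0, 6.0):
    print(dt, "->", volume_for_touch(dt))   # 1.0, 1.0, 0.75, 0.0
```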
- According to various embodiments, in operation 5615, the electronic device 110 determines whether the interaction is maintained; if the interaction is maintained, operations 5609 to 5614 (i.e., the operations based on the event transport) may be performed, and if the interaction is not maintained, operation 5605 (i.e., returning to the basic transport and operating based on the basic transport) may be performed.
- In other words, the electronic device 110 may perform an operation of returning to the basic transport (or, without being limited to the described example, may switch to another sub-event transport).
- According to various embodiments, while performing an operation based on the event transport, when a specific condition is satisfied, the electronic device 110 may switch to another event transport (e.g., the sub-event transport in FIG. 54) and perform operations based on the other event transport to which it has switched.
- the electronic device 110 may switch to another event transport when the main character 5711 acquires a specific sub-object 5732 provided in the graphic space corresponding to the event transport.
- FIG. 58 is a flowchart 5800 for explaining an example of operations of the electronic device 110 and the server 120 according to various embodiments. According to various embodiments, the operations shown in FIG. 58 are not limited to the order shown and may be performed in various orders. Also, according to various embodiments, more operations than the operations shown in FIG. 58, or at least one fewer operation, may be performed. FIG. 58 will be further described below with reference to FIGS. 59A and 59B.
- FIG. 59A is a diagram for explaining an example of a switching operation between a basic transport and an event transport of the electronic device 110 according to various embodiments.
- FIG. 59B is a diagram for explaining an example of an operation according to movement of a main character on visual content provided based on an event transport of the electronic device 110, according to various embodiments.
- According to various embodiments, in operation 5801, the electronic device 110 provides, from among a plurality of execution screens (e.g., the first screen 5910a and the second screen 5910b), a first part of the first audio corresponding to the first execution screen (e.g., the first screen 5910a) together with the first execution screen (e.g., the first screen 5910a), and in operation 5803, may determine whether an event for changing to the second execution screen (e.g., the second screen 5910b) is identified.
- For example, the electronic device 110 may provide the first screen 5910a based on graphic information acquired based on the basic transport 5900a, while outputting sound based on reproducing audio information (e.g., the audio source set 5920a of FIG. 59B) acquired based on the basic transport 5900a.
- For example, the electronic device 110 may determine whether a user input to the control object (e.g., the control object 5712 of FIG. 57A or 57C) of the first screen 5910a is received.
- According to various embodiments, when the electronic device 110 identifies an event for changing to the second execution screen (e.g., the second screen 5910b) (5803-Yes), in operation 5805, a part of the second audio corresponding to the second execution screen (e.g., the second screen 5910b) is provided along with the second execution screen (e.g., the second screen 5910b), and in operation 5807, it may be determined whether an event for returning to the first execution screen (e.g., the first screen 5910a) is identified.
- For example, the electronic device 110 may stop playback of the audio source 5920a based on the basic transport 5900a and switch to the event transport 5900b.
- After switching to the event transport 5900b, the electronic device 110 provides the second screen 5910b based on graphic information acquired based on the event transport 5900b, and may output sound based on reproducing audio information (e.g., the audio source 5920b) acquired based on the event transport 5900b. The sound energy output while the second screen 5910b is provided may differ from the sound energy output while the first screen 5910a is provided.
- the electronic device 110 can control at least one audio source property while providing the second screen 5910b as described above.
- the control may be performed based on a change in the position of the main character 5711 and/or a length of time for which a user input to the control object 5712 is maintained.
- Accordingly, the energy of the sound (e.g., the map energy 5921b, the positional energy 5923b, and the timeline energy 5925b) can be dynamically controlled.
- According to various embodiments, when an event for returning to the first execution screen (e.g., the first screen 5910a) is identified (5807-Yes), in operation 5809, the electronic device 110 may provide the remaining second part, following the end of the first part of the first audio, together with the first execution screen (e.g., the first screen 5910a).
- For example, the electronic device 110 may stop playback of the audio source 5920b based on the event transport 5900b, switch back to the basic transport 5900a, and resume playback of the audio source 5920a based on the basic transport 5900a.
- In this case, the electronic device 110 may resume playback from the point where playback of the audio source 5920a was stopped. Accordingly, when the user's interaction is released, the user can return to the existing music and resume listening from the part where the music appreciation was temporarily paused.
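- The pause-and-resume hand-off can be sketched as below; the class and its explicit clock are illustrative simplifications of the transport switch described above.

```python
class ResumablePlayer:
    def __init__(self):
        self.base_pos = 0.0        # paused position of audio source 5920a
        self.mode = "basic"

    def enter_event(self, now: float):
        self.base_pos = now        # remember where the base music stopped
        self.mode = "event"        # event audio (5920b) starts playing

    def leave_event(self) -> float:
        self.mode = "basic"
        return self.base_pos       # resume the base audio from this point

p = ResumablePlayer()
p.enter_event(now=42.5)            # user touches the control object
print(p.leave_event())             # touch released -> resume at 42.5 s
```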
- FIG. 60 is a flowchart 6000 for explaining an example of operations of the electronic device 110 and the server 120 according to various embodiments. According to various embodiments, the operations shown in FIG. 60 are not limited to the order shown and may be performed in various orders. Also, according to various embodiments, more operations than the operations shown in FIG. 60 may be performed, or at least one operation less than that shown in FIG. 60 may be performed.
- According to various embodiments, in operation 6001, the electronic device 110 reproduces at least one first audio file corresponding to a first playback module (e.g., the basic transport) among a plurality of playback modules, and in operation 6003, may determine whether an event for changing to a second playback module (e.g., the event transport) is identified.
- According to various embodiments, when an event for changing to the second playback module (e.g., the event transport) is identified (6003-Yes), in operation 6005, the electronic device 110 stops playback of the at least one first audio file corresponding to the first playback module (e.g., the basic transport) and plays at least one second audio file corresponding to the second playback module (e.g., the event transport), and in operation 6007, it may be determined whether an event for returning to the first playback module (e.g., the basic transport) is identified.
- According to various embodiments, when the event for returning to the first playback module (e.g., the basic transport) is identified, the electronic device 110 (e.g., the processor 210) may stop playback of the at least one second audio file corresponding to the second playback module (e.g., the event transport) and replay the at least one first audio file corresponding to the first playback module (e.g., the basic transport) from the point at which it was stopped.
- FIG. 61 is a flowchart 6100 for explaining an example of operations of the electronic device 110 and the server 120 according to various embodiments. According to various embodiments, the operations shown in FIG. 61 are not limited to the shown order and may be performed in various orders. Also, according to various embodiments, more operations than the operations shown in FIG. 61, or at least one fewer operation, may be performed. FIG. 61 will be further described below with reference to FIGS. 62A and 62B.
- FIG. 62A is a diagram for explaining an example of a time point at which graphic data is provided and a time point at which an audio source is reproduced, according to various embodiments.
- FIG. 62B is a diagram for explaining an example of an activation period according to various embodiments.
- According to various embodiments, in operation 6101, the electronic device 110 obtains a plurality of audio sources (e.g., first to fifth audio sources #1 to #5); in operation 6103, identifies a musical unit (e.g., beat, measure) based on the plurality of audio sources (e.g., the first to fifth audio sources #1 to #5); and in operation 6105, may identify an event occurrence time point based on the identified musical unit (e.g., beat or measure) and set an activation section for acquiring a user input based on the event occurrence time point.
- For example, the electronic device 110 may acquire a plurality of audio sources (e.g., first to fifth audio sources #1 to #5) to be provided while a program is being executed, and may identify a musical unit (e.g., beat, bar) based on at least a part of the plurality of audio sources (e.g., the first to fifth audio sources #1 to #5).
- For example, the electronic device 110 may identify, from among the audio sources (e.g., the first to fifth audio sources #1 to #5), a first audio source (audio source #1) that is reproduced over the entire time period while the program is running, and may identify the musical unit (e.g., beat, bar) based on the identified audio source.
- the electronic device 110 may identify an event occurrence time point based on the musical unit (eg, beat, measure). The event occurrence time may be understood as an appropriate time when a specific musical effect is applied and/or a specific audio source is output.
- the electronic device 110 may set an activation period based on the event occurrence time point.
- For example, the activation period may include a first time interval extending a predetermined time before the event occurrence time point and a second time interval extending a predetermined time after the event occurrence time point.
- the first time interval may be set to be longer than the second time interval.
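- The check below sketches this activation window; the interval lengths are hypothetical values chosen so that the pre-event interval is the longer one, as the text states.

```python
PRE_S, POST_S = 0.30, 0.15   # assumed window sizes (first interval longer)

def classify_input(input_t: float, event_t: float) -> str:
    if event_t - PRE_S <= input_t <= event_t:
        return "first_interval"    # early touch: trigger the event output
    if event_t < input_t <= event_t + POST_S:
        return "second_interval"   # late touch: e.g., ignored (see below)
    return "outside"

for t in (0.75, 0.95, 1.10, 1.40):
    print(t, classify_input(t, event_t=1.0))
# 0.75 first_interval / 0.95 first_interval / 1.10 second_interval / 1.40 outside
```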
- According to various embodiments, while reproducing the plurality of audio sources (e.g., the first to fifth audio sources #1 to #5) and displaying an execution screen including at least one graphic object (e.g., the graphic objects described in "Table of Contents. 2" to "Table of Contents. 5"), the electronic device 110 may obtain, based on obtaining a user input, the time point at which the user input is obtained.
- According to various embodiments, in operation 6111, the electronic device 110 determines whether the obtained time point corresponds to a first time interval (e.g., the first time interval of the activation section); if it corresponds to the first time interval (e.g., the first time interval of the activation section) (6111-Yes), in operation 6113, the electronic device 110 may output a specific audio source (e.g., a fourth audio source (audio source #4)) at the event time point and/or provide a visual effect while controlling properties of the specific audio source.
- The output fourth audio source may include the various types of audio sources provided based on the user's interaction described in "Table of Contents. 2" to "Table of Contents. 5".
- As the timing of providing the visual effect is aligned with the timing of outputting the specific audio source (e.g., the fourth audio source (audio source #4)), the visual content 100a and the auditory content 100b can be synchronized in time.
- According to various embodiments, when the obtained time point does not correspond to the first time interval (e.g., the first time interval of the activation section) (6111-No), in operation 6115, the electronic device 110 (e.g., the processor 210) determines whether the obtained time point corresponds to the second time interval (e.g., the second time interval of the activation section); if it corresponds to the second time interval (e.g., the second time interval of the activation section) (6115-Yes), in operation 6117, the user input may be ignored.
- At least some of the operations of the server 120 may be implemented as operations that can be provided by a predetermined application (or program) unless otherwise specified. That is, at least some of the following operations of the server 120 may be performed by the electronic device 110 when an application (or program) stored in the electronic device 110 is executed.
- FIG. 63 is a flowchart 6300 for explaining an example of operations of the electronic device 110 and the server 120 according to various embodiments. According to various embodiments, the operations shown in FIG. 63 are not limited to the shown order and may be performed in various orders. Also, according to various embodiments, more operations than the operations shown in FIG. 63, or at least one fewer operation, may be performed. FIG. 63 will be further described below with reference to FIGS. 64, 65A, and 65B.
- FIG. 64 is a diagram for explaining an example of a platform file exchanged by the server 120 according to various embodiments.
- FIG. 65A is a diagram for explaining an example of an interface provided by the server 120 according to various embodiments.
- FIG. 65B is a diagram for explaining an example of information for each array acquired by the server 120 according to various embodiments.
- According to various embodiments, in operation 6301, the server 120 receives a sound source file; in operation 6303, identifies information on the uploaded sound source file; in operation 6305, acquires a plurality of audio sources based on the sound source file; in operation 6307, configures an execution screen for controlling at least some of the plurality of audio sources; and in operation 6309, may provide the execution screen and the plurality of audio sources.
- the server 120 may provide a predetermined graphic user interface 6500 to the electronic device 110 .
- The graphic user interface 6500 may include a first area 6510 including the visual content 100a (e.g., a plurality of graphic objects 6511) for controlling an audio source and/or applying an audio effect, a second area 6520 providing a function for playing a selected sound source file, a third area 6530 for selecting an audio source and/or an audio effect, and a fourth area 6540 providing a function for uploading a sound source file and information on the uploaded sound source file. The interface is not limited to the examples described and/or illustrated, and may be implemented to further include various types of graphic elements.
- For example, the server 120 may provide a sound source file upload function, and a sound source file may be received from the electronic device 110.
- The server 120 may separate the received sound source file into a plurality of audio sources based on the above-described audio source separation technology, and provide the separated audio sources to the electronic device 110.
- the electronic device 110 may provide sound to the user by reproducing at least some of the plurality of separated audio sources through the first region 6510 .
- The visual content 100a provided in the first area 6510 may include the various types of visual content 100a described in "Table of Contents. 2" to "Table of Contents. 5".
- The server 120 may provide a menu screen for selecting among the various types of visual content 100a, arrange the visual content 100a selected from the menu screen in the first area 6510, and provide it to the electronic device 110.
- the server 120 may perform an operation of periodically transmitting a plurality of audio sources associated with a sound source file to the electronic device 110 .
- the server 120 may perform an operation of periodically converting a portion of a sound source file into a plurality of audio sources and transmitting the same to the electronic device 110 .
- a relay device (and/or relay program) capable of relaying the transmission period of the server 120 may be further implemented.
- According to various embodiments, in operation 6311, the electronic device 110 may provide a sound based on controlling properties of at least some of the plurality of audio sources by controlling the objects (e.g., the plurality of graphic objects 6511) included in the execution screen; in operation 6313, at least one file related to the provided sound may be stored; and in operation 6315, the at least one file may be transmitted.
- For example, while outputting sound based on reproducing at least some of the plurality of audio sources, the electronic device 110 may control the locations of the plurality of graphic objects 6511a, 6511b, and 6511c based on a user's input, and may output a specific audio source and/or apply a specific audio effect based on the location control.
- In this case, the electronic device 110 may divide time into a plurality of time points and obtain information 6500a on the user's interaction for each of the divided time points (a time array).
- the user interaction information may include information about locations of the plurality of graphic objects 6511a, 6511b, and 6511c.
- the time interval between the plurality of time points may be 0.2 seconds, or may be implemented at various time intervals without being limited to the described example.
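- The per-time-point capture can be sketched as below, using the 0.2-second interval mentioned above; the list-of-slots layout and the function name are assumptions for illustration.

```python
SAMPLE_DT = 0.2   # sampling interval for the time array (from the text)

def record_interaction(time_array, t, object_positions):
    # object_positions: object id -> (x, y) at time t.
    slot = round(t / SAMPLE_DT)
    while len(time_array) <= slot:
        time_array.append(None)            # no interaction in that slot
    time_array[slot] = dict(object_positions)
    return time_array

arr = []
record_interaction(arr, 0.0, {"6511a": (10, 20)})
record_interaction(arr, 0.6, {"6511a": (12, 24), "6511b": (0, 5)})
print(len(arr), arr[3])   # 4 slots; slot 3 holds the positions at 0.6 s
```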
- According to various embodiments, in operation 6317, the server 120 may store the received at least one file.
- For example, the server 120 may receive, from the electronic device 110, the information on the user's interaction for each of the plurality of time points (the time array) described above.
- According to various embodiments, a file in which at least one of information on the sound source file (e.g., the audio source set 6410), information on the visual content (e.g., the UI identification information 6420), or information on the user's interaction (e.g., the location information 6430 of each graphic object for each array) is stored in association may be defined as a platform file 6400 of the server 120.
- the interactive music experience unique to the user can be reused (eg, played again and/or shared with other users).
- For example, based on the stored platform file 6400, the server 120 may reproduce the audio source set 6410 and, according to the user's interaction information (e.g., the location information 6430 of the graphic objects for each array), reproduce a specific audio source and/or apply a specific audio effect, thereby recreating and providing the sound previously enjoyed by the user.
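- A hedged model of the platform file 6400 and its replay is sketched below; the field types and the replay routine are assumptions, with only the three associated parts (6410, 6420, 6430) taken from the text.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PlatformFile:
    audio_source_set: list[str]       # 6410: the separated audio sources
    ui_id: str                        # 6420: which visual content to load
    interaction_array: list[Optional[dict]] = field(default_factory=list)  # 6430

def replay(pf: PlatformFile, dt: float = 0.2):
    # Re-render the session: play the audio set, re-apply recorded positions.
    print(f"load UI {pf.ui_id}; play {pf.audio_source_set}")
    for slot, positions in enumerate(pf.interaction_array):
        if positions:                 # the user's recorded control at this slot
            print(f"t={slot * dt:.1f}s: move objects {positions}")

pf = PlatformFile(["vocal", "drum"], "ui_demo",
                  [None, {"6511a": (12, 24)}, None])
replay(pf)
```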
- FIG. 66 is a flowchart 6600 for explaining an example of an operation of the electronic device 110 according to various embodiments. According to various embodiments, the operations shown in FIG. 66 are not limited to the order shown and may be performed in various orders. Also, according to various embodiments, more operations than the operations shown in FIG. 66 may be performed, or at least one operation less than that shown in FIG. 66 may be performed.
- According to various embodiments, the electronic device 110 obtains a sound source file including a plurality of audio sources in operation 6601 and, in operation 6603, may display an execution screen (e.g., the interface 6500 of FIG. 65A) including at least one object for controlling first properties of at least some of the plurality of audio sources.
- According to various embodiments, in operation 6605, while reproducing at least a part of the plurality of audio sources, the electronic device 110 obtains, for each of a plurality of times, a user's input for the at least one object (e.g., the graphic object 6511 of FIG. 65A) and controls properties of at least some of the plurality of audio sources based on the user's input; in operation 6607, information associated with the user input (e.g., the information 6500a) obtained for each of the plurality of times is obtained; and a data set including the information associated with the user input obtained for each of the plurality of times may be transmitted.
- FIG. 67 is a flowchart 6700 for explaining an example of operations of a plurality of electronic devices (e.g., an electronic device 6700a and an external electronic device 6700b) and the server 120, according to various embodiments.
- the operations shown in FIG. 67 are not limited to the shown order and may be performed in various orders. Also, according to various embodiments, more operations than the operations shown in FIG. 67 may be performed, or at least one operation less than that shown in FIG. 67 may be performed.
- According to various embodiments, in operation 6701, the electronic device 6700a may transmit, to the external electronic device 6700b, access information for accessing at least one file related to the provided sound.
- the electronic device 6700a may execute a function of sharing a specific platform file 6400 with other users while using an interactive music listening platform through the server 120 .
- the electronic device 6700a may provide link information to the external electronic device 6700b of another user.
- the link information may include identification information for identifying the specific platform file 6400 and a link (URI, URL) for accessing the server 120 .
- the link information may be shared through a conventional messenger application, but is not limited to the described example and may be shared using a messenger function operated in the server 120.
- the external electronic device 6700b may access the server 120 in operation 6703.
- the external electronic device 6700b may access the server 120 based on the selection of the link information.
- According to various embodiments, in operation 6705, the server 120 obtains at least one file associated with the provided sound (e.g., at least a part of the platform file 6400 of FIG. 64); in operation 6707, acquires a plurality of audio sources based on the at least one file (e.g., at least a part of the platform file 6400 of FIG. 64); and in operation 6709, may reproduce the sound based on controlling at least some properties of the plurality of audio sources according to the at least one file (e.g., at least a part of the platform file 6400 of FIG. 64).
- For example, the server 120 may obtain, from the external electronic device 6700b, identification information for identifying the platform file 6400, and may identify the platform file 6400 corresponding to the identification information.
- The server 120 may provide the interface 6500 including, in the first area 6510, the visual content 100a corresponding to the UI identification information 6420 identified from the platform file 6400, while providing the audio source set 6410 identified from the platform file 6400 to the external electronic device 6700b so that it is reproduced.
- Based on controlling the locations of the plurality of objects (e.g., the object 6511 of FIG. 65A) within the visual content 100a according to the per-array object location information 6430, the server 120 and/or the external electronic device 6700b may play a specific audio source and/or apply a specific audio effect.
- According to various embodiments, the server 120 may add, into the existing platform file 6400, per-array information for controlling the positions of the plurality of objects (e.g., the object 6511 of FIG. 65A) generated based on the external electronic device 6700b.
- The electronic device 110 and/or the server 120 may provide visual content 100a for obtaining various types of interactions, in addition to the user's interaction with the graphic object described above.
- Hereinafter, an operation of the electronic device 110 acquiring the user's motion is described, but the disclosure is not limited to this example; it is obvious to those skilled in the art that visual content 100a for acquiring the various other types of interactions described in this specification (eg, voice, object, etc.) may also be provided.
- FIG. 68 is a flowchart 6800 for explaining an example of an operation of the electronic device 110 according to various embodiments.
- According to various embodiments, the operations shown in FIG. 68 are not limited to the order shown and may be performed in various orders. Also, according to various embodiments, more operations than those shown in FIG. 68, or at least one fewer operation, may be performed. FIG. 68 will be further described below with reference to FIGS. 69, 70A, 70B, and 71.
- FIG. 70A is a diagram for explaining an example of visual content 100a provided by the server 120 according to various embodiments.
- FIG. 70B is a diagram for explaining an example of visual content 100a provided by the server 120 according to various embodiments.
- FIG. 71 is a diagram for explaining an example of a messenger function provided by the server 120 according to various embodiments.
- In operation 6801, the electronic device 110 may acquire a sound source file including a plurality of audio sources, and in operation 6803, the electronic device 110 may display an execution screen including a user photographed by a camera.
- The server 120 may provide the electronic device 110 with a visual image including at least a part (eg, users 7010a and 7010b) of the images captured by the camera.
- The auditory content 100b may be provided based on reproducing at least some of the plurality of audio sources corresponding to the sound source file selected by the electronic device 110.
- The background of the visual content 7020a and 7020b may be a preset background and/or a background in which the user U photographed by the camera of the electronic device 110 is located.
- The electronic device 110 may further display at least one graphic object on the visual content 7020a.
- For example, the visual content 7020a may include virtual characters 7001b and 7003b.
- The virtual characters 7001b and 7003b may correspond to artists associated with the selected music file, but are not limited to this example, and a virtual character selected separately from the music file may be provided.
- The virtual characters 7001b and 7003b may be set to take a predetermined motion while at least some of the plurality of audio sources are reproduced. For example, the virtual characters 7001b and 7003b may be implemented to perform dance movements based on musical units.
- In addition, a virtual graphic object, such as a graphic building, may be provided on the visual content 7020a.
- While reproducing at least some of the plurality of audio sources, the electronic device 110 may obtain the user's motion for each of a plurality of times (eg, the above-described time array) and control properties of at least some of the plurality of audio sources based on the obtained motion, and, in operation 6807, may obtain information associated with the user's motion obtained for each of the plurality of times. That is, for each of the plurality of times (the time array), the electronic device 110 may obtain information about the user's motion captured by the camera instead of the location of the graphic object 6511.
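- As an illustrative sketch of this motion-driven control (the motion labels, source names, and gain mapping below are assumptions, not part of the publication):

```kotlin
// Hypothetical sketch: while the audio sources play, the device samples the
// camera-detected motion once per time-array slot, logs it (operation 6807),
// and adjusts audio-source properties (here, per-source gain).

enum class Motion { FIRST_MOTION, SECOND_MOTION, NONE }

class MotionDrivenMixer(private val gains: MutableMap<String, Float>) {
    val motionLog = mutableListOf<Pair<Int, Motion>>() // (time-array index, motion)

    fun onArraySlot(arrayIndex: Int, detected: Motion) {
        motionLog += arrayIndex to detected            // information per time-array slot
        when (detected) {
            Motion.FIRST_MOTION -> gains["vocal"] = 1.0f // eg, bring up the vocal source
            Motion.SECOND_MOTION -> gains["drum"] = 1.0f // eg, bring up the drum source
            Motion.NONE -> Unit                          // leave the mix unchanged
        }
    }
}
```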
- For example, while providing the auditory content 100b based on reproducing at least some of the plurality of audio sources, the electronic device 110 may reproduce a specific audio source corresponding to the user's motion and/or apply a specific audio effect.
- For example, the electronic device 110 may reproduce a specific audio source for each of the user's motions 7010a, 7030a, 7010b, and 7030b.
- In addition, the electronic device 110 may reflect, in the visual content 100a, a visual effect corresponding to the user's motion.
- For example, the electronic device 110 may control the motion of the virtual characters 7001b and 7003b included in the visual content 7020b based on the motion of the user: when the user's motion is a first motion, the electronic device 110 may control the virtual characters 7001b and 7003b to take the first motion, and when the user's motion is a second motion, it may control the virtual characters 7001b and 7003b to take the second motion.
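- The two reactions just described, triggering a specific audio source and driving the characters' motion, could be tabulated as in the following sketch; the enum is redeclared so the sketch stands alone, and all source and animation names are assumptions.

```kotlin
// Hypothetical sketch: a detected user motion (i) may trigger a specific
// audio source or effect and (ii) may drive the motion of the virtual
// characters 7001b and 7003b. The mapping itself is illustrative.

enum class Motion { FIRST_MOTION, SECOND_MOTION, NONE }

data class MotionReaction(val audioSource: String?, val characterMotion: String)

val reactions: Map<Motion, MotionReaction> = mapOf(
    Motion.FIRST_MOTION to MotionReaction("clap_loop", "first_motion"),
    Motion.SECOND_MOTION to MotionReaction("riser_fx", "second_motion"),
    Motion.NONE to MotionReaction(null, "idle") // no extra source; characters idle
)

fun react(userMotion: Motion): MotionReaction = reactions.getValue(userMotion)
```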
- In operation 6809, the electronic device 110 may transmit a data set including the information associated with the user's motion obtained for each of the plurality of times.
- The server 120 may generate, store, and/or manage a platform file 6900 including at least one of an audio source set 6910, a recorded video 6920, and a user motion 6930 for each array.
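- A minimal sketch of such a platform file 6900, with all types being illustrative assumptions:

```kotlin
// Hypothetical sketch of the platform file 6900 described above.

data class PlatformFile6900(
    val audioSources: List<String>,  // audio source set 6910
    val recordedVideoUri: String?,   // recorded video 6920
    val motionPerArray: List<String> // user motion 6930, one label per time-array slot
)
```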
- FIG. 72 is a diagram for explaining an example of visual content 100a that can be provided by the interactive music listening system 1 according to various embodiments.
- The interactive music listening system 1 may be implemented to provide metaverse visual content. That is, the interactive music listening system 1 may be implemented to provide extended reality (XR) content, such as virtual reality (VR) and mixed reality (MR) content, that can be provided by the electronic device 110.
- Similar to the aforementioned visual content 100a, the interactive music listening system 1 may implement a plurality of 3D graphic objects 7211a and 7211b within the visual content 7210a, 7210b, and 7210c implemented for each of a plurality of stages, and, while each of the visual content 7210a, 7210b, and 7210c is provided, may provide the auditory content 100b based on playing at least one corresponding audio source (eg, the first sound or the second sound).
- As described above, the interactive music listening system 1 may perform an operation of controlling the visual content 100a and/or the auditory content 100b according to control of the above-described graphic objects, an operation of switching stages, and the like, so redundant descriptions are omitted.
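- As a closing illustration, the staged XR content described above might be modeled as follows; everything beyond the quoted reference numerals is an assumption.

```kotlin
// Hypothetical sketch: each stage carries its own 3D graphic objects and
// audio sources, and a stage transition swaps both together.

data class Stage(
    val id: String,                // eg, "7210a", "7210b", "7210c"
    val objects3d: List<String>,   // eg, "7211a", "7211b"
    val audioSources: List<String> // eg, the first sound, the second sound
)

class XrSession(private val stages: List<Stage>) {
    private var current = 0
    fun currentStage(): Stage = stages[current]
    fun switchStage() { current = (current + 1) % stages.size } // stage-switching operation
}
```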
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
According to various embodiments, an operating method of an electronic device may be provided, the method comprising the operations of: identifying a user's input with respect to at least one object; when a first condition based on the identified input is satisfied, outputting a second sound having a second frequency spectrum while displaying a second screen to which a first visual effect is applied when the identified input is a first input, and outputting a third sound having a third frequency spectrum while displaying a third screen to which a second visual effect is applied when the identified input is a second input; and, when a second condition based on the identified input is satisfied, outputting a second sound having a second frequency spectrum corresponding to a second reproduction period associated with a music file.
Applications Claiming Priority (14)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
KR20210159077 | 2021-11-18 | |
KR10-2021-0159077 | 2021-11-18 | |
KR20220060385 | 2022-05-17 | |
KR10-2022-0060385 | 2022-05-17 | |
KR10-2022-0076269 | 2022-06-22 | |
KR20220076269 | 2022-06-22 | |
KR10-2022-0098394 | 2022-08-08 | |
KR1020220098395A KR20230073079A (ko) | 2021-11-18 | 2022-08-08 | Electronic device for providing sound based on user input and operating method therefor
KR1020220098394A KR20230073078A (ko) | 2021-11-18 | 2022-08-08 | Electronic device for providing sound based on user input and operating method therefor
KR10-2022-0098396 | 2022-08-08 | |
KR10-2022-0098395 | 2022-08-08 | |
KR10-2022-0098397 | 2022-08-08 | |
KR1020220098396A KR20230073080A (ko) | 2021-11-18 | 2022-08-08 | Electronic device for providing sound based on user input and operating method therefor
KR1020220098397A KR20230073081A (ko) | 2021-11-18 | 2022-08-08 | Electronic device for providing sound based on user input and operating method therefor
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023090831A1 (fr) | 2023-05-25 |
Family
ID=86397317
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2022/018030 WO2023090831A1 (fr) | Electronic device for outputting sound on the basis of user input, and operating method therefor | 2021-11-18 | 2022-11-16
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2023090831A1 (fr) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118113185A (zh) * | 2024-03-04 | 2024-05-31 | 北京数原数字化城市研究中心 | Metaverse interaction method and apparatus, device, storage medium, and program product |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20110040190A (ko) * | 2009-10-13 | 2011-04-20 | 삼성전자주식회사 | Apparatus and method for playing music in a portable terminal |
KR20140011121A (ko) * | 2012-07-17 | 2014-01-28 | 문봉진 | Sound source arrangement service method and sound source arrangement service system |
US20160103656A1 (en) * | 2014-03-28 | 2016-04-14 | Spotify Ab | System and method for playback of media content with audio spinner functionality |
KR20170019242A (ko) * | 2015-08-11 | 2017-02-21 | 삼성전자주식회사 | Method and apparatus for providing a user interface in an electronic device |
KR20210130069A (ko) * | 2020-04-21 | 2021-10-29 | (주)드림어스컴퍼니 | Apparatus for sorting music based on user preference, and terminal applied thereto |
- 2022-11-16: WO application PCT/KR2022/018030 filed (published as WO2023090831A1); status unknown
Similar Documents
Publication | Publication Date | Title
---|---|---
WO2021071115A1 (fr) | | Electronic device for processing a user utterance and operating method therefor
WO2015169124A1 (fr) | | Terminal sound mixing system and playback method
WO2020222444A1 (fr) | | Server for determining a target device based on a user's voice input and controlling the target device, and operating method of the server
WO2017160073A1 (fr) | | Method and device for accelerated playback, transmission, and storage of media files
WO2010128830A2 (fr) | | System, method, and recording medium for controlling an object in a virtual world
WO2018088794A2 (fr) | | Method for correcting an image by means of a device, and the device therefor
WO2016114428A1 (fr) | | Method and device for performing speech recognition using a grammar model
WO2020054945A1 (fr) | | Robot and method for operating such a robot
WO2023090831A1 (fr) | | Electronic device for outputting sound on the basis of user input, and operating method therefor
WO2020085641A1 (fr) | | Display apparatus and operating method therefor
WO2021086065A1 (fr) | | Electronic device and operating method therefor
WO2022191435A1 (fr) | | Electronic device and system for assisting a user's movement
WO2020263016A1 (fr) | | Electronic device for processing a user utterance and operating method therefor
WO2021215804A1 (fr) | | Device and method for providing interactive audience simulation
WO2020222338A1 (fr) | | Artificial intelligence system for providing image information and method therefor
WO2023085679A1 (fr) | | Electronic device and method for automatically generating an edited video
WO2022216099A1 (fr) | | Electronic device for providing sound based on user input, and operating method therefor
WO2022154415A1 (fr) | | Electronic device and method for operating an avatar video service
WO2022131533A1 (fr) | | Ambient sound control method and corresponding electronic device
WO2020130662A1 (fr) | | Electronic device and method for controlling the operation of a robot on which accessories can be mounted
WO2021241840A1 (fr) | | Electronic device for gesture-based control and operating method therefor
WO2024154971A1 (fr) | | Electronic device and method for generating exercise-related content using the same
WO2024058568A1 (fr) | | Singing mode operating method and electronic device implementing the same
WO2024155132A1 (fr) | | Digital human generation method and system, and digital human image generation device and device driving method
WO2023085635A1 (fr) | | Method for providing a text-to-speech service and system therefor
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22896037; Country of ref document: EP; Kind code of ref document: A1