WO2016144032A1 - Music providing method and music providing system - Google Patents
- Publication number
- WO2016144032A1 (PCT/KR2016/002043)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sound source
- tag information
- user
- group
- sound
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/08—Payment architectures
- G06Q20/12—Payment architectures specially adapted for electronic shopping systems
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
Definitions
- The present invention relates to a music providing method and a music providing system, and more particularly to a method and system for assigning tag information corresponding to a sound source, generating sound source groups according to the assigned tag information, and providing those groups to other users.
- The background art described above is technical information that the inventors possessed or acquired in the course of deriving the present invention, and is not necessarily a publicly known technique disclosed to the general public before the filing of the present application.
- One embodiment of the present invention aims to provide sound sources according to the purpose, taste, emotion, and mood of the listener.
- Another aim of an embodiment of the present invention is to provide groups of sound sources according to the purpose, taste, emotion, and mood of the listener.
- Another aim of an embodiment of the present invention is to provide a playlist composed of sound sources matching the purpose, taste, emotion, and mood of the listener.
- Another aim of an embodiment of the present invention is to let users share tag information and sound source groups for sound sources with others.
- Another aim of an embodiment of the present invention is to create a new revenue structure in the music market by distributing tag information about sound sources according to the purpose, taste, emotion, and mood of the listener, and by sharing the profits from sound source groups with others.
- According to a first aspect of the present invention, a music providing method is disclosed, comprising matching tag information corresponding to a sound source with the sound source and storing it, and playing the sound source.
- According to a second aspect of the present invention, a music providing system is disclosed, comprising a storage unit that matches tag information corresponding to a sound source with the sound source and stores it, and a playback unit that plays the sound source.
- According to a third aspect of the present invention, a computer program is disclosed that is executed by a music providing system and stored in a recording medium to perform the music providing method according to the first aspect.
- According to a fourth aspect of the present invention, a computer-readable recording medium is disclosed on which a program for performing the music providing method according to the first aspect is recorded.
- According to an embodiment of the present invention, sound sources can be provided according to the purpose, taste, emotion, and mood of the listener.
- According to any of the problem-solving means of the present invention, groups of sound sources can be provided according to the purpose, taste, emotion, and mood of the listener.
- According to any of the problem-solving means of the present invention, a playlist composed of such sound sources can be provided.
- According to any of the problem-solving means of the present invention, the tag information and sound source groups for sound sources can be shared with others.
- A new revenue structure can be created in the music market by distributing tag information about sound sources according to the purpose, taste, emotion, and mood of the listener, and by sharing the profits from sound source groups with others.
- FIG. 1 is a block diagram of a music providing system according to an embodiment of the present invention.
- FIG. 2 is a block diagram showing a music providing system according to an embodiment of the present invention.
- FIG. 3 is a flowchart illustrating a music providing method according to an embodiment of the present invention.
- FIG. 1 is a block diagram illustrating a music providing system 100 according to an embodiment of the present invention.
- the music providing system 100 is an apparatus for performing a music providing method according to an exemplary embodiment of the present invention.
- the music providing system 100 may play a sound source by calling a list index of a newly created sound source group.
- the music providing system 100 may include a sound source providing server 10 for providing a sound source and a user terminal 20 for playing the sound source.
- the sound source providing server 10 may store a sound source and various data related thereto, and transmit and receive data for music reproduction with the user terminal 20 through the network N.
- The network N may be implemented as any kind of wired or wireless network, such as a local area network (LAN), a wide area network (WAN), a value-added network (VAN), a personal area network (PAN), a mobile radio communication network, WiBro (Wireless Broadband Internet), Mobile WiMAX, High Speed Downlink Packet Access (HSDPA), or a satellite communication network.
- the music providing system 100 may include a user terminal 20.
- the user terminal 20 may be implemented as a computer, a portable terminal, a television, a wearable device, or the like, which can be connected to the sound source providing server 10 or another user terminal through the network N.
- The computer includes, for example, a notebook, desktop, or laptop equipped with a web browser, and the portable terminal is, for example, a wireless communication device that ensures portability and mobility.
- The television may include an Internet Protocol Television (IPTV), an Internet television, a terrestrial TV, a cable TV, or the like.
- The wearable device is, for example, an information processing device that can be worn directly on the human body, such as a watch, glasses, an accessory, clothing, or shoes, and may be connected to the remote sound source providing server 10 or to another user terminal through the network N, either directly or via another information processing device.
- the music providing system 100 may perform embodiments of more various music providing methods by communicating with other user terminals existing separately from the user terminal 20.
- The music providing system 100 includes a storage unit 101, a tag assigning unit 102, a group generating unit 103, a group providing unit 104, a charging unit 105, and a playback unit 106. It need not include all of these components at once; embodiments of the present invention can be implemented with only some of them. Moreover, these components may be implemented in the sound source providing server 10, in the user terminal 20, or in both, to realize any of the embodiments of the present invention.
- The storage unit 101 may match tag information corresponding to a sound source with that sound source and store it.
- The tag information is a kind of keyword information describing the sound source, and may be stored as data separate from the sound source, matched to it.
- the user can predict the mood of the music or search for the sound source according to the tag information.
- The tag information may be set and stored by the provider of the sound source before the sound source is provided to the user; this is called self tag information.
- In contrast, arbitrary tag information that a user adds directly to a sound source is called creation tag information.
- the music providing system 100 may include a tag assignment unit 102 as a module that provides an interface for receiving creation tag information from a user.
- The tag assigning unit 102 may receive arbitrary creation tag information from the user with respect to the sound source.
- The type of information that can be received as creation tag information from the user is not limited; the user can tag the sound source however he or she wants. For example, the user may enter tag information according to emotion or purpose, such as joy, excitement, depression, rainy day, sunny day, calmness, exercise, eating, or concentration, or may write a place name, a person's name, or the like, assigning personal and unique tag information to the sound source.
- The tag allocating unit 102 may also assign image tag information to a sound source by presenting an image such as an emoticon and having the user select it.
- The tag allocator 102 may recommend one or more pieces of recommendation tag information related to the creation tag information received from the user, and may assign tag information by receiving the user's selection of one or more of the recommended tags.
- The recommended tag information is tag information related to the creation tag information entered by the user, and may include tag information with a similar notation or pronunciation, tag information that contains the entered text, tag information sharing characteristics with the creation tag information, and the like.
- The tag inputs of many users may be analyzed, and frequently entered tag information may be treated as related tag information and recommended as recommendation tag information.
- popular tag information or previously written tag information may be recommended.
- The tag allocator 102 may present the user with one or more pieces of recommendation tag information as described above, allowing the user to select from a list or enter text.
- When the tag assignment unit 102 receives an input of tag information, it may convert the received tag information into the most similar tag information among the already registered tags and register the converted tag information as the creation tag information.
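The frequency-based recommendation described above can be sketched as follows. This is an illustrative guess at the logic, not the patent's actual implementation; the matching rule (substring containment) and function names are assumptions.

```python
from collections import Counter

def recommend_tags(tag_history, entered, limit=3):
    # count how often each tag has been entered across users
    counts = Counter(tag_history)
    # candidates: previously entered tags that contain the entered text
    candidates = {t for t in counts if entered in t and t != entered}
    # recommend the most frequently entered candidates first
    return [t for t, _ in counts.most_common() if t in candidates][:limit]

history = ["rain", "rainy day", "rainy day", "rain walk", "sunny"]
print(recommend_tags(history, "rain"))  # ['rainy day', 'rain walk']
```

A real system would likely add pronunciation or edit-distance similarity on top of this simple containment check.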
- The tag information received by the tag allocator 102 as described above may be stored by the storage unit 101, matched to the sound source.
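The pairing of storage unit (101) and tag assigning unit (102) can be sketched minimally: tag data is kept separate from the sound source files and matched by a sound source id. Class and method names are hypothetical, not from the patent.

```python
from collections import defaultdict

class TagStore:
    """Keeps tag information as data separate from the sound source,
    matched to it by a sound source id."""
    def __init__(self):
        self.tags_by_source = defaultdict(set)

    def assign(self, source_id, tag):
        # light normalization so 'Calm ' and 'calm' match
        self.tags_by_source[source_id].add(tag.strip().lower())

    def tags(self, source_id):
        return self.tags_by_source[source_id]

store = TagStore()
store.assign("song-001", "Rainy Day")
store.assign("song-001", "calm")
print(sorted(store.tags("song-001")))  # ['calm', 'rainy day']
```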
- the group generation unit 103 may generate a sound source group including one or more sound sources according to the tag information.
- A sound source group including one or more sound sources may be generated according to the creation tag information.
- The sound source group may be generated by searching for and grouping one or more sound sources whose tag information is the same as or related to the creation tag information.
- The related tag information may include tag information with a similar notation or pronunciation, tag information containing the entered text, and tag information sharing features with the creation tag information.
- the tag information input of a plurality of users may be analyzed, and the tag information having a high frequency of continuous input may be viewed as related tag information and grouped together.
- the playback unit 106 which will be described later, may include a sound source included in the sound source group generated by the group generation unit 103 in the playlist, and play the playlist.
- The sound source group may also be generated by filtering tag information, including the creation tag information, using database management techniques. For example, operators such as AND, OR, IF, INCLUDE, EXCLUDE, NOT, and THAN can be used, so that sound sources are grouped according to conditions such as "songs released after 2000 among Korean ballads", "dark songs, excluding boy groups", or "dance music, excluding very fast Korean songs". Personal conditions can also be given using creation tag information.
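The operator-based grouping can be sketched with set operations standing in for AND / EXCLUDE / THAN. The data model (`tags`, `year` fields) and function name are hypothetical, assumed only for illustration.

```python
songs = [
    {"id": "a", "tags": {"ballad", "korean"}, "year": 2003},
    {"id": "b", "tags": {"dance"}, "year": 1998},
    {"id": "c", "tags": {"ballad", "dark"}, "year": 2010},
]

def group_sources(songs, include=frozenset(), exclude=frozenset(), since=None):
    grouped = []
    for s in songs:
        if not set(include) <= s["tags"]:             # AND: all required tags present
            continue
        if s["tags"] & set(exclude):                  # EXCLUDE / NOT: banned tags absent
            continue
        if since is not None and s["year"] < since:   # THAN: released after a year
            continue
        grouped.append(s["id"])
    return grouped

# "ballads released after 2000, excluding dark songs"
print(group_sources(songs, include={"ballad"}, exclude={"dark"}, since=2000))  # ['a']
```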
- A sound source group can also be created by entering a condition such as "music that Haedae listened to".
- The generated sound source group may be stored by moving its sound sources to a new folder.
- the sound source group may be defined by the sound source group information.
- The sound source group information is information that specifies a sound source group, including a list index of the sound sources included in the group generated by the group generator 103, a name of the group specified by the user, and the like.
- When the music providing system 100 communicates with another user terminal separate from the user terminal 20, the music providing system 100 may further include a group providing unit 104 that provides the sound source group generated by the group generating unit 103 to the other user terminal.
- The sound source group may be provided by transmitting sound source files or by granting real-time streaming rights to the sound sources, and tag information of the sound sources included in the group may be provided together with the group.
- The group providing unit 104 may provide the creation tag information stored in match with the sound source to other user terminals.
- The group providing unit 104 may provide a sound source group to another user terminal according to the user's input. If there are multiple generated sound source groups or multiple other user terminals, the user may select one or more groups and one or more terminals, and the selected groups are provided to the selected terminals. That is, the user may directly present or recommend a sound source group to other users.
- Alternatively, the group providing unit 104 may provide sound source group information for the generated groups to other user terminals, and provide a sound source group to a terminal when it receives an input from that terminal selecting the group based on the group information.
- The provision of the sound source group information may be achieved by uploading the group information generated at the user terminal 20 to the sound source providing server 10. In this case, instead of uploading the whole group including the sound sources, only the group information is uploaded, which is advantageous in data volume, time, and speed.
- In this way, the user-specific sound source group generated from tag information, including creation tag information, can be shared with other users; this has the same effect as sharing the user's own emotions with others.
- Using this embodiment of the group providing unit 104, the sound source groups generated by multiple users can be compared to find similar groups that share a predetermined number of sound sources, and the other groups of the users who created similar groups can be recommended to each other. Through this, a sound source group matching a user's emotional taste can be provided, or a user with a similar emotional taste can be recommended as a friend.
- The music providing system 100 may further include a charging unit 105 that charges a fee to the other user terminal when a sound source group is provided. The charging unit 105 may compare the list of sound sources the other terminal already holds with the sound sources included in the group, and charge only for the sound sources not yet owned. As a result, a user who receives a sound source group does not pay twice for sound sources that are both in the group and already in his or her possession.
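The deduplicated charging step reduces to a set difference. A minimal sketch, with made-up ids and prices; the function name is an assumption.

```python
def billable_sources(group_sources, owned_sources, price_table):
    owned = set(owned_sources)
    # bill only for sound sources the user does not already hold
    to_bill = [s for s in group_sources if s not in owned]
    total = sum(price_table.get(s, 0.0) for s in to_bill)
    return to_bill, total

group_list = ["s1", "s2", "s3"]
already_owned = ["s2"]
prices = {"s1": 1.0, "s2": 1.0, "s3": 1.5}
items, total = billable_sources(group_list, already_owned, prices)
print(items, total)  # ['s1', 's3'] 2.5
```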
- The charging unit 105 may also provide a sound source group to other users and distribute part of the resulting revenue to the user who provided the group. Since individual pieces of music acquire new value when combined with others, this can serve as a platform for publishing curated music groups, or for publishing special-purpose omnibus albums by celebrities or professionals. For example, a sound source group could be presented to other users under topics such as "prenatal ('taegyo') music recommended by a doctor" or "party music a top star enjoys". By distributing part of the profit to the user who provided the group, that user is compensated for creating it and encouraged to create more. As other users use and pay for music groups created according to purpose, taste, emotion, and mood, a new profit structure is created in the music market as a whole.
- the reproduction unit 106 may reproduce the sound source stored in the storage unit 101.
- the storage unit 101 may be included in one or both of the user terminal 20 and the sound source providing server 10.
- The playback unit 106 may download a sound source file from the sound source providing server 10, then store and play it, or may play a sound source stored on the sound source providing server 10 in real time.
- In one embodiment, the playback unit 106 may receive an input of tag information from the user, include in a playlist the sound sources whose tag information is the same as or related to the received tag information, and play the playlist.
- The tag information may include at least one of the self tag information and the creation tag information stored in correspondence with the sound source.
- The playback unit 106 may also receive from the user a selection input for a sound source group generated by the group generation unit 103, include the sound sources of the selected group in a playlist, and play the playlist.
- The sound sources need not be moved to a separate folder to be included in the playlist; this may be implemented by generating a list index of the sound sources to be played.
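The list-index idea above, together with the storage-location lookup described later for the playback unit, can be sketched as follows. Paths and ids are invented for illustration.

```python
# the library maps sound source ids to their storage locations;
# a playlist is just a list index of ids, not a copy of the files
library = {"s1": "/music/s1.mp3", "s2": "/music/s2.mp3"}

def build_playlist(library, wanted_ids):
    # keep only references to sound sources that actually exist
    return [sid for sid in wanted_ids if sid in library]

playlist = build_playlist(library, ["s2", "s1", "missing"])
print(playlist)                            # ['s2', 's1']
# storage locations are resolved only at play time
print([library[sid] for sid in playlist])  # ['/music/s2.mp3', '/music/s1.mp3']
```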
- A playlist composed as described above follows the tag information the user has added to the sound sources, so the user can be given a playlist matching his or her purpose, taste, emotion, and mood at that moment. For example, if a user tags one or more sound sources with the name of a former boyfriend, then after the breakup, entering that name as tag information brings up a playlist of music matching the user's emotions at any time.
- the playback unit 106 may randomly play the sound sources included in the playlist, or play them in a sorted order on a predetermined basis, or according to the order set by the user.
- The music files in a group may also be played according to per-file settings entered by the user (for example, volume, bass, treble, distortion, or echo).
- When the playback unit 106 plays one or more sound sources grouped by predetermined tag information, and multiple pieces of tag information correspond to those sound sources, the playback unit 106 may assign a weight to each piece of tag information, sort the sound sources to be included in the playlist by those weights, and play the sound sources in the group in the sorted order.
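The weight-based ordering can be sketched by scoring each sound source with the summed weight of its matching tags. The additive scoring rule is an assumption; the patent does not specify how weights combine.

```python
def sort_by_tag_weight(sources, weights):
    # score each sound source by the summed weight of its matching tags,
    # highest score first
    return sorted(
        sources,
        key=lambda s: sum(weights.get(t, 0) for t in s["tags"]),
        reverse=True,
    )

grouped = [
    {"id": "a", "tags": {"calm"}},
    {"id": "b", "tags": {"calm", "rainy day"}},
]
ordered = sort_by_tag_weight(grouped, {"calm": 1, "rainy day": 2})
print([s["id"] for s in ordered])  # ['b', 'a']
```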
- When playing sound sources matching predetermined tag information, the playback unit 106 may check the storage locations of the corresponding music files, list those locations, and play by calling the sound sources stored at them.
- the music providing method according to the embodiment shown in FIG. 3 includes steps that are processed in time series in the music providing system 100 shown in FIGS. 1 to 2. Therefore, even if omitted below, the above description of the music providing system 100 shown in FIGS. 1 to 2 may be applied to the music providing method according to the embodiment shown in FIG. 3.
- the music providing method may include matching tag information corresponding to a sound source with a sound source, storing the tag information, and reproducing the sound source.
- The music providing method includes receiving arbitrary creation tag information from the user for a sound source (S301) and matching and storing the received creation tag information with the sound source (S302).
- Step S302 may include recommending one or more pieces of recommendation tag information related to the creation tag information received from the user, receiving the user's selection of one or more of them, and matching and storing the received recommendation tag information with the sound source.
- The music providing method may further include generating a sound source group according to the creation tag information (S303).
- The sound source group including one or more sound sources may be generated according to the creation tag information input by the user.
- The one or more sound sources may include sound sources whose tag information is the same as or related to the creation tag information.
- the sound source included in the generated sound source group may be included in a playlist and played in a later stage.
- Step S303 may further include providing the generated sound source group to another user terminal.
- the user's own sound source group can be shared by presenting or recommending to other users.
- the music providing method may further include charging a fee for another user terminal according to the provision of the sound source group.
- the charging step may include comparing the list of sound sources owned by another user terminal with the sound sources included in the sound source group, and charging only for the unowned sound sources.
- The music providing method may include a step (S304) of playing the sound sources included in the sound source group.
- The music providing method may include receiving from the user a selection input for a sound source group, including the sound sources of the selected group in a playlist, and playing the playlist.
- The music providing method may also include receiving an input of tag information from the user, including in a playlist the sound sources whose tag information is the same as or related to the received tag information, and playing the playlist; the sound sources may be played through this playing step.
- the music providing method according to the embodiment described with reference to FIG. 3 may also be implemented in the form of a recording medium including instructions executable by a computer, such as a program module executed by the computer.
- Computer readable media can be any available media that can be accessed by a computer and includes both volatile and nonvolatile media, removable and non-removable media.
- computer readable media may include both computer storage media and communication media.
- Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
- Communication media typically includes computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave, or other transmission mechanism, and includes any information delivery media.
- the music providing method may be implemented as a computer program (or computer program product) including instructions executable by a computer.
- the computer program includes programmable machine instructions processed by the processor and may be implemented in a high-level programming language, an object-oriented programming language, an assembly language, or a machine language.
- the computer program may also be recorded on tangible computer readable media (eg, memory, hard disks, magnetic / optical media or solid-state drives, etc.).
- the music providing method may be implemented by executing the computer program as described above by the computing device.
- the computing device may include at least a portion of a processor, a memory, a storage device, a high speed interface connected to the memory and a high speed expansion port, and a low speed interface connected to the low speed bus and the storage device.
- Each of these components is connected to the others using various buses, and may be mounted on a common motherboard or mounted in another suitable manner.
- The processor may process instructions within the computing device, such as instructions stored in memory or in the storage device, to display graphical information for providing a graphical user interface (GUI) on an external input/output device, such as a display connected to the high speed interface. In other embodiments, multiple processors and/or multiple buses may be used, with multiple memories and memory types as appropriate.
- the processor may also be implemented as a chipset consisting of chips comprising a plurality of independent analog and / or digital processors.
- the memory also stores information within the computing device.
- the memory may consist of a volatile memory unit or a collection thereof.
- the memory may consist of a nonvolatile memory unit or a collection thereof.
- the memory may also be other forms of computer readable media, such as, for example, magnetic or optical disks.
- the storage device can provide a large amount of storage space to the computing device.
- The storage device may be a computer-readable medium or a configuration including such a medium, and may include, for example, devices within a storage area network (SAN) or other configurations, such as a floppy disk device, a hard disk device, an optical disk device, a tape device, flash memory, or another similar semiconductor memory device or device array.
Claims (18)
- A music providing method performed by a music providing system, the method comprising: storing tag information corresponding to a sound source in association with the sound source; and reproducing the sound source.
- The method of claim 1, wherein the storing of the tag information in association with the sound source comprises: receiving, from a user, created tag information for the sound source; and storing the received created tag information in association with the sound source.
- The method of claim 2, wherein the storing of the created tag information in association with the sound source comprises: recommending one or more pieces of recommended tag information related to the created tag information received from the user; and receiving, from the user, an input selecting one or more pieces of the recommended tag information, and storing the selected recommended tag information in association with the sound source.
- The method of claim 2, wherein the storing of the created tag information in association with the sound source further comprises generating a sound source group including one or more sound sources according to the created tag information.
- The method of claim 4, wherein the generating of the sound source group further comprises providing the generated sound source group to another user's terminal.
- The method of claim 5, wherein the providing of the generated sound source group to the other user's terminal further comprises charging the other user's terminal a fee for the provision of the sound source group, and the charging comprises comparing a list of sound sources already owned by the other user's terminal with the sound sources included in the sound source group and charging only for the sound sources not yet owned.
- The method of claim 1, wherein the reproducing of the sound source comprises: receiving an input of tag information from a user; including, in a playlist, sound sources having tag information identical or related to the received tag information; and playing the playlist.
- The method of claim 4, wherein the reproducing of the sound source comprises: receiving, from a user, an input selecting the sound source group; including the sound sources of the selected sound source group in a playlist; and playing the playlist.
- A music providing system performing at least one of providing a sound source and reproducing the sound source, the system comprising: a storage unit that stores tag information corresponding to the sound source in association with the sound source; and a reproduction unit that reproduces the sound source.
- The system of claim 9, further comprising a tag assignment unit that receives, from a user, created tag information for the sound source, wherein the storage unit stores the created tag information received by the tag assignment unit in association with the sound source.
- The system of claim 10, wherein the tag assignment unit recommends one or more pieces of recommended tag information related to the created tag information received from the user and receives, from the user, an input selecting one or more pieces of the recommended tag information, and the storage unit stores the selected recommended tag information in association with the sound source.
- The system of claim 10, further comprising a group generation unit that generates a sound source group including one or more sound sources according to the created tag information.
- The system of claim 12, wherein the group generation unit further comprises a group providing unit that provides the generated sound source group to another user's terminal.
- The system of claim 13, further comprising a charging unit that charges the other user's terminal a fee for the provision of the sound source group, wherein the charging unit compares a list of sound sources already owned by the other user's terminal with the sound sources included in the sound source group and charges only for the sound sources not yet owned.
- The system of claim 9, wherein the reproduction unit receives an input of tag information from a user, includes, in a playlist, sound sources having tag information identical or related to the received tag information, and plays the playlist.
- The system of claim 12, wherein the reproduction unit receives, from a user, an input selecting the sound source group, includes the sound sources of the selected sound source group in a playlist, and plays the playlist.
- A computer program executed by a music providing system and stored in a recording medium to perform the method of any one of claims 1 to 8.
- A computer-readable recording medium on which a program for performing the method of any one of claims 1 to 8 is recorded.
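The tag-matched storage, tag-based playlist derivation, and unowned-only charging steps recited in the claims can be illustrated with a minimal sketch. All names here (`MusicService`, `store_tag`, the sample song identifiers) are hypothetical illustrations chosen for this example, not the claimed implementation.

```python
# Minimal sketch of three claimed steps: storing tag information matched to a
# sound source, building a playlist from a received tag, and charging another
# user's terminal only for sound sources it does not already own.
# All identifiers are hypothetical, not part of the claimed system.
from collections import defaultdict

class MusicService:
    def __init__(self):
        self.tags = defaultdict(set)   # sound source id -> set of tags
        self.owned = defaultdict(set)  # user terminal id -> owned source ids

    def store_tag(self, source_id, tag):
        """Store tag information in association with the sound source."""
        self.tags[source_id].add(tag)

    def build_playlist(self, tag):
        """Include sound sources whose tags match the received tag input."""
        return sorted(s for s, ts in self.tags.items() if tag in ts)

    def charge_for_group(self, terminal_id, group, price_per_source):
        """Compare the terminal's owned list with the group and charge
        only for the sound sources not yet owned."""
        unowned = [s for s in group if s not in self.owned[terminal_id]]
        return len(unowned) * price_per_source

svc = MusicService()
svc.store_tag("song-a", "rainy-day")
svc.store_tag("song-b", "rainy-day")
svc.store_tag("song-c", "workout")
svc.owned["user-1"] = {"song-a"}

print(svc.build_playlist("rainy-day"))  # ['song-a', 'song-b']
# Group of two sources, one already owned: only one is billed.
print(svc.charge_for_group("user-1", ["song-a", "song-b"], 100))  # 100
```

The sketch keeps the owned-source comparison separate from playlist building, mirroring how the claims split charging (claim 6) from reproduction (claims 7 and 8).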
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/554,710 US20180239577A1 (en) | 2015-03-06 | 2016-03-02 | Music providing method and music providing system |
JP2017565031A JP2018507503A (en) | 2015-03-06 | 2016-03-02 | Music providing method and music providing system |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2015-0031502 | 2015-03-06 | ||
KR20150031502 | 2015-03-06 | ||
KR1020160012649A KR101874441B1 (en) | 2015-03-06 | 2016-02-02 | Device and method for providing music |
KR10-2016-0012649 | 2016-02-02 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016144032A1 (en) | 2016-09-15 |
Family
ID=56880565
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2016/002043 WO2016144032A1 (en) | 2015-03-06 | 2016-03-02 | Music providing method and music providing system |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2016144032A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060206582A1 (en) * | 2003-11-17 | 2006-09-14 | David Finn | Portable music device with song tag capture |
US20060242661A1 (en) * | 2003-06-03 | 2006-10-26 | Koninklijke Philips Electronics N.V. | Method and device for generating a user profile on the basis of playlists |
US20060293909A1 (en) * | 2005-04-01 | 2006-12-28 | Sony Corporation | Content and playlist providing method |
US20110264495A1 (en) * | 2010-04-22 | 2011-10-27 | Apple Inc. | Aggregation of tagged media item information |
US20110295843A1 (en) * | 2010-05-26 | 2011-12-01 | Apple Inc. | Dynamic generation of contextually aware playlists |
- 2016-03-02: WO PCT/KR2016/002043 filed as WO2016144032A1 (active, application filing)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11210338B2 (en) | Systems, methods and apparatus for generating music recommendations based on combining song and user influencers with channel rule characterizations | |
WO2011059275A2 (en) | Method and apparatus for managing data | |
KR100826959B1 (en) | Method and system for making a picture image | |
US11079918B2 (en) | Adaptive audio and video channels in a group exercise class | |
US20090063459A1 (en) | System and Method for Recommending Songs | |
KR101963753B1 (en) | Method and apparatus for playing videos for music segment | |
CN109241242A (en) | A kind of direct broadcasting room topic recommended method, device, server and storage medium | |
WO2015163552A1 (en) | Device for providing image related to replayed music and method using same | |
CN106331822A (en) | Method and device for playing multiple videos and electronic equipment | |
KR101924205B1 (en) | Karaoke system and management method thereof | |
JP2014085644A (en) | Karaoke system | |
KR101874441B1 (en) | Device and method for providing music | |
KR20150090306A (en) | Method for sharing contents among plural terminal, system and apparatus thereof | |
WO2021045473A1 (en) | Loudness normalization method and system | |
WO2016144032A1 (en) | Music providing method and music providing system | |
WO2012154792A2 (en) | Cross-platform portable personal video compositing and media content distribution system | |
Thomas | Library-podcast intersections | |
Spalding | Turning point: The origins of Canadian content requirements for commercial radio | |
Turner et al. | Is Binaural Spatialization the Future of Hip-Hop? | |
KR20150059219A (en) | Method for providing music contents and music contents providing system performing thereof | |
Hurst | Getting started with podcasting | |
WO2011059276A2 (en) | Method and apparatus for managing content | |
WO2013089310A1 (en) | Method, terminal, and recording medium for providing user interface for content service | |
Geary et al. | Using design dimensions to develop a multi-device audio experience through workshops and prototyping | |
Kemack Goot et al. | A Spectrum of Online Rehearsal Applications: A Potential Means for Cultural Connection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16761919 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15554710 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 2017565031 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16761919 Country of ref document: EP Kind code of ref document: A1 |