WO2016144032A1 - Music providing method and music providing system - Google Patents


Info

Publication number
WO2016144032A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound source
tag information
user
method
group
Prior art date
Application number
PCT/KR2016/002043
Other languages
French (fr)
Korean (ko)
Inventor
김유식
Original Assignee
김유식
Priority date
Filing date
Publication date
Priority to KR10-2015-0031502
Priority to KR20150031502
Priority to KR1020160012649A (patent KR101874441B1)
Priority to KR10-2016-0012649
Application filed by 김유식
Priority claimed from US15/554,710 (published as US20180239577A1)
Publication of WO2016144032A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06Q DATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 Services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06Q DATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 20/00 Payment architectures, schemes or protocols
    • G06Q 20/08 Payment architectures
    • G06Q 20/12 Payment architectures specially adapted for electronic shopping systems
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B 20/10 Digital recording or reproducing

Abstract

The present invention relates to a music providing method and a music providing system and, more specifically, to a method and system that assign tag information corresponding to a sound source and generate a sound source group according to the assigned tag information so as to provide the sound source group to other users. The invention can also provide a function of receiving arbitrary created tag information for a sound source from a user, matching the received tag information with the sound source, and storing it.

Description

Music provision method and music provision system

The present invention relates to a music providing method and a music providing system, and more particularly, to a method and system for assigning tag information corresponding to a sound source, generating a sound source group according to the assigned tag information, and providing the group to other users.

Recently, owing to the high performance, miniaturization, and low power consumption of electronic devices, smart devices have permeated nearly every aspect of daily life. Expectations for smart devices are now rising beyond functional convenience toward emotional enrichment; in other words, an emotional evolution of smart devices is required.

In the field of music provision, technology has so far focused on allowing users to conveniently receive and reproduce digital sound sources at any time (see Korean Patent Publication No. 2003-0087791, "Method of Receiving, Storing and Playing Digital Sound Sources").

However, the development of technology that can provide a sound source according to the emotion of the listener has been insufficient.

Unlike video viewing, listening to music does not require looking at a screen, so users can listen for long periods while commuting or exercising, often relying on a random-play function. Although many people store hundreds of music files or more, it is difficult to selectively listen to the songs they want according to their purpose, taste, emotion, and mood. Until now, selective listening has been cumbersome: users had to pick each file individually every time, or create a new folder and move sound source files into it.

Therefore, there is a need for a technique for solving the above problems.

Meanwhile, the background art described above is technical information that the inventor possessed for, or acquired in, the derivation of the present invention, and is not necessarily a publicly known technique disclosed to the general public before the filing of the present application.

One object of an embodiment of the present invention is to provide a sound source according to the purpose, taste, emotion, and mood of the listener.

Another object of an embodiment of the present invention is to provide a group of sound sources according to the purpose, taste, emotion, and mood of the listener.

Another object of an embodiment of the present invention is to provide a playlist consisting of sound sources according to the purpose, taste, emotion, and mood of the listener.

Another object of an embodiment of the present invention is to share with others the tag information and sound source groups for a sound source according to the purpose, taste, emotion, and mood of the listener.

A further object of an embodiment of the present invention is to create a new revenue structure in the music market by distributing tag information on sound sources according to the purpose, taste, emotion, and mood of the listener, and by sharing with others the profits from sound source groups.

As a technical means for achieving the above-described technical objects, according to a first aspect of the present invention, there is disclosed a music providing method performed by a music providing system, comprising the steps of matching tag information corresponding to a sound source with the sound source, storing the tag information, and playing the sound source.

According to a second aspect of the present invention, there is disclosed a music providing system for performing at least one of the provision and the reproduction of a sound source, comprising a storage unit for matching and storing tag information corresponding to the sound source, and a playback unit for playing the sound source.

According to a third aspect of the present invention, there is also disclosed a computer program executed by a music providing system and stored in a recording medium for performing the music providing method according to the first aspect.

According to a fourth aspect of the present invention, there is also disclosed a computer readable recording medium having recorded thereon a program for performing the music providing method according to the first aspect.

According to any one of the above-described problem-solving means of the present invention, an embodiment of the present invention can provide a sound source according to the purpose, taste, emotion, and mood of the listener.

In addition, according to any one of the problem-solving means of the present invention, it is possible to provide a group of sound sources according to the purpose, taste, emotion, and mood of the listener.

In addition, according to any one of the problem-solving means of the present invention, it is possible to provide a playlist consisting of sound sources according to the purpose, taste, emotion, and mood of the listener.

In addition, according to any one of the problem-solving means of the present invention, it is possible to share with others the tag information and sound source groups for a sound source according to the purpose, taste, emotion, and mood of the listener.

In addition, according to any one of the problem-solving means of the present invention, a new revenue structure can be created in the music market by distributing tag information for sound sources according to the purpose, taste, emotion, and mood of the listener, and by sharing with others the profits from sound source groups.

The effects obtainable from the present invention are not limited to the effects mentioned above, and other effects not mentioned will be clearly understood by those skilled in the art from the following description.

FIG. 1 is a block diagram of a music providing system according to an embodiment of the present invention.

FIG. 2 is a block diagram showing a music providing system according to an embodiment of the present invention.

FIG. 3 is a flowchart illustrating a music providing method according to an embodiment of the present invention.

DETAILED DESCRIPTION Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art may easily implement the present invention. As those skilled in the art would realize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present invention. In the drawings, parts irrelevant to the description are omitted in order to clearly describe the present invention, and like reference numerals designate like parts throughout the specification.

Throughout the specification, when a part is "connected" to another part, this includes not only being "directly connected" but also being "electrically connected" with another element in between. In addition, when a part is said to "include" a certain component, this means that it may further include other components, rather than excluding them, unless otherwise stated.

Hereinafter, the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a block diagram illustrating a music providing system 100 according to an embodiment of the present invention.

The music providing system 100 is an apparatus for performing a music providing method according to an exemplary embodiment of the present invention. For example, the music providing system 100 may play sound sources by calling the list index of a newly created sound source group. Specifically, it may include a sound source providing server 10 for providing sound sources and a user terminal 20 for playing them.

The sound source providing server 10 may store a sound source and various data related thereto, and transmit and receive data for music reproduction with the user terminal 20 through the network N.

In this case, the network N may be implemented as any kind of wired or wireless network, such as a local area network (LAN), a wide area network (WAN), a value added network (VAN), a personal area network (PAN), a mobile radio communication network, Wireless Broadband Internet (WiBro), Mobile WiMAX, High Speed Downlink Packet Access (HSDPA), or a satellite communication network.

In addition, the music providing system 100 may include a user terminal 20. The user terminal 20 may be implemented as a computer, a portable terminal, a television, a wearable device, or the like, capable of connecting to the sound source providing server 10 or to another user terminal through the network N. Here, the computer includes, for example, a notebook, desktop, or laptop equipped with a web browser. The portable terminal is, for example, a wireless communication device that ensures portability and mobility, and may include any kind of handheld wireless communication device such as a Personal Communication System (PCS), Personal Digital Cellular (PDC), Personal Handyphone System (PHS), Personal Digital Assistant (PDA), Global System for Mobile communications (GSM), IMT-2000, CDMA-2000, W-CDMA, Wireless Broadband Internet (WiBro), smartphone, or Mobile Worldwide Interoperability for Microwave Access (WiMAX) device. The television may include an Internet Protocol Television (IPTV), an Internet television, a terrestrial TV, a cable TV, or the like. Further, the wearable device is a type of information processing device that can be worn directly on the human body, such as a watch, glasses, an accessory, clothing, or shoes, and may connect to the remote sound source providing server 10 or to another user terminal through the network N, either directly or via another information processing device.

In addition, the music providing system 100 may implement further embodiments of the music providing method by communicating with other user terminals existing separately from the user terminal 20.

FIG. 2 is a block diagram illustrating a music providing system 100 according to an embodiment of the present invention. According to FIG. 2, the music providing system 100 may include a storage unit 101, a tag assigning unit 102, a group generating unit 103, a group providing unit 104, a charging unit 105, and a playback unit 106. It need not include all of these components at the same time; an embodiment of the present invention may be implemented with only some of them. In addition, these components may be implemented in either or both of the sound source providing server 10 and the user terminal 20 to operate one of the embodiments of the present invention.

As a specific embodiment, the storage unit 101 may store tag information corresponding to a sound source by matching it with the sound source. Here, the tag information is a kind of keyword information describing the sound source, and may be stored as data separate from the sound source while matched with it. The user can predict the mood of the music or search for the sound source according to the tag information. Tag information that is set and stored by the provider of the sound source before the sound source is provided to the user is called self tag information. On the other hand, arbitrary tag information that a user directly adds to a sound source is called created tag information. The music providing system 100 may include a tag assigning unit 102 as a module that provides an interface for receiving created tag information from the user.
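The patent does not specify a data model for the storage unit; as an illustrative sketch only, the separation of self tag information and created tag information from the sound source data could be modeled as follows (all class and field names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class SoundSource:
    """A sound source with tag information stored separately from the audio data."""
    title: str
    self_tags: set = field(default_factory=set)     # set by the sound source provider
    created_tags: set = field(default_factory=set)  # added arbitrarily by the user

class TagStore:
    """Matches tag information with sound sources and supports tag-based lookup."""
    def __init__(self):
        self.sources = []

    def add(self, source):
        self.sources.append(source)

    def find_by_tag(self, tag):
        # A sound source matches if the tag appears in either kind of tag information.
        return [s for s in self.sources
                if tag in s.self_tags or tag in s.created_tags]

store = TagStore()
song = SoundSource("Song A", self_tags={"ballad"})
song.created_tags.add("rainy day")   # created tag information entered by the user
store.add(song)
print([s.title for s in store.find_by_tag("rainy day")])  # ['Song A']
```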

The tag assigning unit 102 may receive arbitrary created tag information for a sound source from the user. The type of information that can be received as created tag information is not limited; the user can input any tag for the sound source as desired. For example, the user may enter tag information according to an emotion or purpose, such as joy, excitement, depression, a rainy day, a sunny day, calmness, exercise, eating, or concentration. A place name, a person's name, and the like can also be written as tag information, so that personal and unique tag information can be assigned to the sound source. Furthermore, the tag assigning unit 102 may assign image tag information to a sound source by presenting images such as emoticons and letting the user select one.

As another embodiment, the tag assigning unit 102 may recommend one or more pieces of recommended tag information related to the created tag information received from the user, and assign tag information by receiving the user's selection of one or more of the recommendations. The recommended tag information is tag information related to the created tag information input by the user, and may include tag information with a similar notation or pronunciation, tag information containing the input, tag information sharing characteristics with the created tag information, and the like. Alternatively, the tag inputs of a plurality of users may be analyzed, and frequently input tag information may be treated as related and recommended. Popular tag information or previously written tag information may also be recommended. The tag assigning unit 102 may present the user with one or more pieces of recommended tag information as described above, allowing the user to select from a list or to enter text.
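The patent does not prescribe how similarity between tags is computed; one minimal sketch, using spelling similarity and substring containment as stand-ins for the "similar notation or pronunciation" and "containing" criteria (the registered tag list and function name are hypothetical):

```python
import difflib

REGISTERED_TAGS = ["exciting", "excited", "exercise", "rainy day", "calm", "concentration"]

def recommend_tags(user_input, registered=REGISTERED_TAGS, limit=3):
    """Recommend registered tags with a similar spelling to, or containing, the input."""
    # Tags whose notation is close to the input (spelling-based similarity).
    close = difflib.get_close_matches(user_input, registered, n=limit, cutoff=0.6)
    # Tags that contain the input as a substring.
    containing = [t for t in registered if user_input.lower() in t.lower()]
    # Merge while preserving order and dropping duplicates.
    seen, result = set(), []
    for t in close + containing:
        if t not in seen:
            seen.add(t)
            result.append(t)
    return result[:limit]

print(recommend_tags("excite"))  # close matches such as 'excited', 'exciting'
```

A production system would more likely use phonetic matching or the cross-user co-occurrence statistics the paragraph mentions; this sketch only illustrates the notation-based branch.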

In addition, when the tag assigning unit 102 receives input of tag information, it may convert the received tag information into the most similar tag information among other registered tag information, and register the converted tag information as the created tag information.

The tag information received by the tag assigning unit 102 as described above may be stored by the storage unit 101 in a manner matched with the sound source.

Meanwhile, the group generating unit 103 may generate a sound source group including one or more sound sources according to the tag information. For example, a sound source group may be generated according to the created tag information by searching for and grouping one or more sound sources whose tag information is the same as, or related to, the created tag information. The related tag information may include tag information with a similar notation or pronunciation, tag information containing the input, or tag information sharing characteristics with the given tag information. Alternatively, the tag inputs of a plurality of users may be analyzed, and tag information frequently entered in succession may be regarded as related and grouped together. The playback unit 106, described later, may include the sound sources of the group generated by the group generating unit 103 in a playlist and play that playlist.

As another embodiment, the group generating unit 103 may generate a sound source group by classifying tag information, including created tag information, using database management techniques. For example, operators such as AND, OR, IF, INCLUDE, EXCLUDE, NOT, and THAN can be used to group sound sources according to conditions such as "songs released after 2000 among Korean ballads", "dark songs among them, excluding boy groups", or "dance music among very fast songs, excluding Korean songs". Personal conditions can also be given by means of created tag information. For example, if the user has input "Cheol" and "Haeundae" as created tag information for one or more sound sources, a sound source group can be created with a condition such as "music that Cheol listened to at Haeundae".
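The operator-based grouping above can be sketched as a simple filter combining include/exclude tag conditions with an arbitrary extra predicate for conditions like release year. This is an illustrative sketch only; the function signature and the tiny in-memory library are hypothetical:

```python
def group_sources(sources, include=(), exclude=(), predicate=None):
    """Group sound sources: all `include` tags required (AND), any `exclude` tag
    disqualifies (EXCLUDE/NOT), and an optional predicate adds conditions like THAN."""
    group = []
    for s in sources:
        tags = s["tags"]
        if all(t in tags for t in include) and not any(t in tags for t in exclude):
            if predicate is None or predicate(s):
                group.append(s)
    return group

library = [
    {"title": "A", "tags": {"ballad", "korean"}, "year": 2003},
    {"title": "B", "tags": {"dance", "korean"}, "year": 1998},
    {"title": "C", "tags": {"ballad", "korean", "boy group"}, "year": 2010},
]

# "Korean ballads released after 2000, excluding boy groups"
result = group_sources(library, include=("ballad", "korean"),
                       exclude=("boy group",),
                       predicate=lambda s: s["year"] > 2000)
print([s["title"] for s in result])  # ['A']
```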

The generated sound source group may be created by moving the sound sources to a new folder and storing them there. Preferably, however, the sound source group is defined by sound source group information. The sound source group information is information specifying a sound source group, and includes a list index of the sound sources included in the group generated by the group generating unit 103, a name of the group specified by the user, and the like.

With respect to the sound source group, when the music providing system 100 communicates with another user terminal separate from the user terminal 20, the music providing system 100 may further include a group providing unit 104 that provides the sound source group generated by the group generating unit 103 to the other user terminal. In this case, the sound source group may be provided by transmitting the sound source files or by granting real-time streaming rights to the sound sources, and the tag information of the sound sources included in the group may be provided together with the group. Alternatively, the group providing unit 104 may provide the created tag information stored in match with a sound source to other user terminals.

For example, the group providing unit 104 may provide a sound source group to another user terminal according to the user's input. If there are one or more generated sound source groups and one or more other user terminals, it may receive from the user a selection of one or more groups and one or more terminals, and provide the selected groups to the selected terminals. That is, the user may directly present or recommend a sound source group to other users.

As another embodiment, the group providing unit 104 may provide the sound source group information for a generated group to another user terminal, and, upon receiving from that terminal an input selecting a group according to the information, provide the sound source group to the other user terminal. The provision of the sound source group information may be achieved by uploading the information for the group generated at the user terminal 20 to the sound source providing server 10. In this case, instead of uploading the entire sound source group including the sound sources, only the sound source group information may be uploaded, which is advantageous in terms of data volume, time, and speed.

Through the embodiments of the group providing unit 104 described above, a user-specific sound source group generated according to tag information, including created tag information, can be shared with other users. This has the same effect as the user sharing his or her own emotions with other users.

Further, using the embodiment of the group providing unit 104, the sound source groups generated by a plurality of users can be compared to find similar groups that overlap in a predetermined number of sound sources, and the other groups created by the users of those similar groups can be recommended to one another. Through this, it is possible to provide a sound source group corresponding to a user's emotional taste, or to recommend a user with a similar emotional taste as a friend.
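The overlap comparison described here amounts to a pairwise set intersection over users' groups. A minimal sketch, assuming groups are represented as sets of sound source identifiers (all names hypothetical):

```python
def similar_groups(groups, min_overlap=3):
    """Find pairs of sound source groups that share at least min_overlap sound sources."""
    pairs = []
    names = list(groups)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = groups[a] & groups[b]   # sound sources common to both groups
            if len(shared) >= min_overlap:
                pairs.append((a, b, shared))
    return pairs

groups = {
    "user1:rainy": {"s1", "s2", "s3", "s4"},
    "user2:calm":  {"s2", "s3", "s4", "s9"},
    "user3:gym":   {"s7", "s8"},
}
for a, b, shared in similar_groups(groups):
    print(a, "and", b, "share", sorted(shared))
# user1:rainy and user2:calm share ['s2', 's3', 's4']
```

Once such a pair is found, each user's other groups could be surfaced to the matching user, as the paragraph suggests.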

Meanwhile, the music providing system 100 may further include a charging unit 105 that charges the other user terminal a fee for the provision of a sound source group. The charging unit 105 may compare the list of sound sources held by the other user terminal with the sound sources included in the group, and charge only for the sound sources not yet owned. As a result, when a user is provided with a sound source group, the user does not have to pay twice for a sound source that is both included in the group and already in his or her possession.
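The duplicate-payment check reduces to a set difference between the group's sound sources and the ones the receiving terminal already holds. A minimal sketch (function name and flat per-source price are hypothetical):

```python
def billable_sources(group_sources, owned_sources, price_per_source):
    """Charge only for sound sources in the provided group that are not already owned."""
    unowned = set(group_sources) - set(owned_sources)
    return unowned, len(unowned) * price_per_source

# Group contains s1, s2, s3; the receiving user already owns s2.
unowned, fee = billable_sources({"s1", "s2", "s3"}, {"s2"}, price_per_source=1000)
print(sorted(unowned), fee)  # ['s1', 's3'] 2000
```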

In addition, the charging unit 105 may distribute part of the profit obtained from providing a sound source group to other users back to the user who created the group. Since individual pieces of music gain new value by being combined with other music, the system can serve as a platform for publishing curated music groups, or special-purpose omnibus albums, by celebrities or professionals. For example, a sound source group could be presented to other users under topics such as "prenatal music recommended by a doctor" or "party music that a top star likes to enjoy". By distributing a certain share of the profit to the user who provided the group, the user is compensated for creating it and encouraged to create more. As other users consume music groups created according to purpose, taste, emotion, and mood, and pay for them, a new profit structure is created in the music market as a whole.

In addition, the playback unit 106 may play a sound source stored in the storage unit 101. As described above, the storage unit 101 may be included in one or both of the user terminal 20 and the sound source providing server 10. In an embodiment in which the storage unit 101 is included in the sound source providing server 10 and the playback unit 106 in the user terminal 20, the playback unit 106 may download the sound source file from the sound source providing server 10, store it, and then play it, or it may play the sound source stored in the server by streaming it in real time.

In one embodiment, the playback unit 106 may receive input of tag information from the user, include in a playlist the sound sources whose tag information is the same as, or related to, the received tag information, and play the playlist. Here, the tag information may include at least one of the self tag information and the created tag information stored for the sound source.

As another embodiment, the playback unit 106 may receive from the user a selection input for a sound source group generated by the group generating unit 103, include the sound sources of the selected group in a playlist, and play the playlist. In this case, the sound sources need not be moved to a separate folder to be included in the playlist; the playlist may be implemented by generating a list index of the sound sources to be played.

A playlist configured in this way reflects the tag information the user has added to the sound sources, through which the user can be provided with a playlist matching his or her purpose, taste, emotion, and mood at that moment. For example, if a user has entered the name of a former boyfriend as created tag information on one or more sound sources, then after the breakup the user can obtain a playlist by entering that name as tag information and, at any time, listen to music that matches his or her emotions.

As an exemplary embodiment of the present invention, the playback unit 106 may play the sound sources in a playlist randomly, in an order sorted on a predetermined basis, or in an order set by the user. In addition, when the user has entered setting values (for example, volume, bass, treble, distortion, or echo) for individual music files, the music files in the group may be played according to those settings.

In addition, when the playback unit 106 plays one or more sound sources grouped by given tag information, and a plurality of pieces of tag information correspond to those sound sources, the playback unit 106 may assign a weight to each piece of tag information, sort the sound sources to be included in the playlist accordingly, and play the sound sources of the group in the sorted order.
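The patent leaves the weighting scheme open; one plausible reading, scoring each sound source by the sum of the weights of its matching tags and sorting in descending order, can be sketched as follows (the weight values and function name are hypothetical):

```python
def sort_by_tag_weights(sources, weights):
    """Sort sound sources by the summed weights of their tags, highest score first."""
    def score(source):
        return sum(weights.get(t, 0) for t in source["tags"])
    return sorted(sources, key=score, reverse=True)

playlist = sort_by_tag_weights(
    [{"title": "A", "tags": {"calm"}},
     {"title": "B", "tags": {"calm", "rainy day"}},
     {"title": "C", "tags": {"exercise"}}],
    weights={"calm": 1.0, "rainy day": 0.5},  # assumed per-tag weights
)
print([s["title"] for s in playlist])  # ['B', 'A', 'C']
```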

Also, when playing sound sources matching given tag information, the playback unit 106 may check the storage location of each corresponding music file, list the storage locations, and play each sound source by calling it from its storage location.

The music providing method according to the embodiment shown in FIG. 3 includes steps processed in time series by the music providing system 100 shown in FIGS. 1 and 2. Therefore, even where omitted below, the above description of the music providing system 100 shown in FIGS. 1 and 2 also applies to the music providing method according to the embodiment shown in FIG. 3.

According to an embodiment of the present invention, the music providing method may include matching tag information corresponding to a sound source with the sound source, storing the tag information, and playing the sound source.

As an example, according to FIG. 3, the music providing method may include receiving arbitrary created tag information for a sound source from the user (S301), and matching the received created tag information with the sound source and storing it (S302).

In this case, step S302 may include recommending one or more pieces of recommended tag information related to the created tag information received from the user, receiving the user's selection of one or more of the recommendations, and matching the selected recommended tag information with the sound source and storing it.

Furthermore, the music providing method may further include generating a sound source group according to the created tag information (S303). A sound source group including one or more sound sources may be generated according to the created tag information input by the user. In this case, the one or more sound sources may include sound sources whose tag information is the same as, or related to, the created tag information. The sound sources included in the generated group may later be included in a playlist and played.

As another embodiment, when the music providing system 100 communicates with another user terminal separate from the user terminal 20, step S303 may further include providing the generated sound source group to the other user terminal.

Through this, the user's own sound source group can be shared by presenting or recommending it to other users.

In this case, the music providing method may further include charging the other user terminal a fee for the provision of the sound source group. The charging step may include comparing the list of sound sources owned by the other user terminal with the sound sources included in the group, and charging only for the sound sources not owned.

In addition, the music providing method may include playing the sound sources included in the sound source group (S304).

As an embodiment of step S304, the music providing method may include receiving from the user a selection input for a sound source group, including the sound sources of the selected group in a playlist, and playing the playlist.

As another embodiment of playing a sound source, the music providing method may play the sound source through the steps of receiving input of tag information from the user, including in a playlist the sound sources whose tag information is the same as, or related to, the received tag information, and playing the playlist.

The music providing method according to the embodiment described with reference to FIG. 3 may also be implemented in the form of a recording medium containing instructions executable by a computer, such as a program module executed by the computer. Computer-readable media can be any available media accessible by a computer, and include volatile and nonvolatile media as well as removable and non-removable media. Computer-readable media may also include both computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented by any method or technology for storing information such as computer-readable instructions, data structures, program modules, or other data. Communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transmission mechanism, and include any information delivery media.

In addition, the music providing method according to an embodiment of the present invention may be implemented as a computer program (or computer program product) including instructions executable by a computer. The computer program includes programmable machine instructions processed by a processor and may be implemented in a high-level programming language, an object-oriented programming language, an assembly language, or a machine language. The computer program may also be recorded on a tangible computer readable medium (e.g., memory, a hard disk, magnetic/optical media, or a solid-state drive).

Accordingly, the music providing method according to an embodiment of the present invention may be implemented by executing such a computer program on a computing device. The computing device may include at least some of a processor, a memory, a storage device, a high speed interface connected to the memory and a high speed expansion port, and a low speed interface connected to a low speed bus and the storage device. These components are connected to one another using various buses and may be mounted on a common motherboard or mounted in another suitable manner.

Here, the processor may process instructions stored in the memory or the storage device, for example instructions for displaying graphical information for a graphical user interface (GUI) on an external input/output device, such as a display connected to the high speed interface. In other embodiments, multiple processors and/or multiple buses may be used, together with multiple memories and memory types as appropriate. The processor may also be implemented as a chipset of chips comprising a plurality of independent analog and/or digital processors.

The memory stores information within the computing device. In one example, the memory may consist of a volatile memory unit or a collection thereof. In another example, the memory may consist of a nonvolatile memory unit or a collection thereof. The memory may also be another form of computer readable medium, such as a magnetic or optical disk.

In addition, the storage device can provide a large amount of storage space to the computing device. The storage device may be a computer readable medium or a configuration including such a medium, and may include, for example, devices within a storage area network (SAN) or other configurations, such as a floppy disk device, a hard disk device, an optical disk device, a tape device, flash memory, or another similar semiconductor memory device or device array.

The foregoing description of the present invention is intended for illustration, and those skilled in the art will understand that the present invention may be easily modified into other specific forms without changing its technical spirit or essential features. Therefore, it should be understood that the embodiments described above are exemplary in all respects and not restrictive. For example, each component described as a single type may be implemented in a distributed manner, and similarly, components described as distributed may be implemented in a combined form.

The scope of the present invention is indicated by the following claims rather than by the above description, and all changes or modifications derived from the meaning and scope of the claims and their equivalents should be construed as falling within the scope of the present invention.

Claims (18)

  1. A music providing method performed by a music providing system, comprising:
    Storing tag information corresponding to a sound source by matching the tag information with the sound source; And
    Reproducing the sound source.
  2. The method of claim 1,
    The step of storing the tag information to match the sound source,
    Receiving arbitrarily created tag information from the user with respect to the sound source; And
    And matching and storing the received tag information with the sound source.
  3. The method of claim 2,
    The storing of the created tag information by matching the sound source may include:
    Recommending one or more recommendation tag information related to the created tag information received from the user; And
    And receiving an input for at least one of the recommendation tag information from the user and matching the received recommendation tag information with the sound source.
  4. The method of claim 2,
    The storing of the created tag information by matching the sound source may include:
    And generating a sound source group including one or more sound sources according to the created tag information.
  5. The method of claim 4,
    Creating the sound source group,
    And providing the generated sound source group to another user terminal.
  6. The method of claim 5, wherein
    Providing the generated sound source group to another user terminal,
    Billing the fee for the other user terminal according to the provision of the sound source group,
    Charging the fee,
    And comparing the list of sound sources owned by the other user terminal with the sound sources included in the sound source group, and charging only for the non-owned sound sources.
  7. The method of claim 1,
    Reproducing the sound source,
    Receiving an input for tag information from a user;
    Including a sound source having tag information identical or related to the received tag information in a playlist; And
    Playing the playlist.
  8. The method of claim 4,
    Reproducing the sound source,
    Receiving a selection input for the sound source group from a user;
    Including a sound source included in the selected sound source group in a playlist; And
    Playing the playlist.
  9. In the music providing system for performing at least one of the provision of the sound source and the reproduction of the sound source,
    A storage unit which matches tag information corresponding to the sound source with the sound source and stores the tag information; And
    And a reproduction unit for reproducing the sound source.
  10. The method of claim 9,
    The music providing system,
    Further comprising a tag assignment unit for receiving arbitrarily created tag information from the user for the sound source,
    The storage unit,
    And matching the created tag information received by the tag assignment unit with the sound source and storing it.
  11. The method of claim 10,
    The tag assignment unit,
    Recommending at least one recommendation tag information related to the created tag information received from the user, and receiving an input for at least one of the recommendation tag information from the user,
    The storage unit,
    And matching the received tag information with the sound source and storing the received tag information.
  12. The method of claim 10,
    The music providing system,
    And a group generator for generating a sound source group including one or more sound sources according to the created tag information.
  13. The method of claim 12,
    The group generation unit,
    And a group providing unit for providing the generated sound source group to other user terminals.
  14. The method of claim 13,
    The music providing system,
    Further comprising a charging unit for charging a fee for the other user terminal in accordance with the provision of the sound source group,
    The charging unit,
    And comparing the list of sound sources owned by the other user terminal with the sound sources included in the sound source group, and charging only for the unowned sound sources.
  15. The method of claim 9,
    The reproduction unit,
    Receiving an input for tag information from a user, including a sound source having tag information identical or related to the received tag information in a playlist, and playing the playlist.
  16. The method of claim 12,
    The reproduction unit,
    Receiving a selection input for the sound source group from a user, including a sound source included in the selected sound source group in a playlist, and playing the playlist.
  17. A computer program executed by a music providing system and stored in a recording medium for performing the method according to any one of claims 1 to 8.
  18. A computer-readable recording medium having recorded thereon a program for performing the method according to any one of claims 1 to 8.
PCT/KR2016/002043 2015-03-06 2016-03-02 Music providing method and music providing system WO2016144032A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR10-2015-0031502 2015-03-06
KR20150031502 2015-03-06
KR1020160012649A KR101874441B1 (en) 2015-03-06 2016-02-02 Device and method for providing music
KR10-2016-0012649 2016-02-02

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/554,710 US20180239577A1 (en) 2015-03-06 2016-03-02 Music providing method and music providing system
JP2017565031A JP2018507503A (en) 2015-03-06 2016-03-02 Music providing method and music providing system

Publications (1)

Publication Number Publication Date
WO2016144032A1 true WO2016144032A1 (en) 2016-09-15

Family

ID=56880565

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2016/002043 WO2016144032A1 (en) 2015-03-06 2016-03-02 Music providing method and music providing system

Country Status (1)

Country Link
WO (1) WO2016144032A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060206582A1 (en) * 2003-11-17 2006-09-14 David Finn Portable music device with song tag capture
US20060242661A1 (en) * 2003-06-03 2006-10-26 Koninklijke Philips Electronics N.V. Method and device for generating a user profile on the basis of playlists
US20060293909A1 (en) * 2005-04-01 2006-12-28 Sony Corporation Content and playlist providing method
US20110264495A1 (en) * 2010-04-22 2011-10-27 Apple Inc. Aggregation of tagged media item information
US20110295843A1 (en) * 2010-05-26 2011-12-01 Apple Inc. Dynamic generation of contextually aware playlists

Similar Documents

Publication Publication Date Title
US20170039226A1 (en) Intelligent identification of multimedia content for grouping
US9396668B2 (en) Language learning exchange
US9218413B2 (en) Venue-related multi-media management, streaming, online ticketing, and electronic commerce techniques implemented via computer networks and mobile devices
US8732193B2 (en) Multi-media management and streaming techniques implemented over a computer network
US8935279B2 (en) Venue-related multi-media management, streaming, online ticketing, and electronic commerce techniques implemented via computer networks and mobile devices
JP5622210B2 (en) Method and apparatus for transferring digital content from a personal computer to a portable handset
Hjorth et al. Studying mobile media: Cultural technologies, mobile communication, and the iPhone
US8700659B2 (en) Venue-related multi-media management, streaming, and electronic commerce techniques implemented via computer networks and mobile devices
CN102572557B (en) Current device location advertisement distribution method and system
US8090606B2 (en) Embedded media recommendations
CN103827912B (en) Network-based music partner system and method
CN101364919B (en) Metadata collection system, apparatus, method and content management server
US20190258669A1 (en) Multi-input playlist selection
CN102968424B (en) Iterative cloud broadcasting rendering method
US20140101778A1 (en) Method, a system and an apparatus for delivering media layers
Dubber Radio in the digital age
Kalfatovic et al. Smithsonian Team Flickr: a library, archives, and museums collaboration in web 2.0 space
CN102640148B (en) Method and apparatus for presenting media segments
JP5250100B2 (en) Programming, distribution and consumption of media content
US9271115B2 (en) Method and system for providing a social music service using an LBS, and recording medium for recording a program for executing the method
US20140298174A1 (en) Video-karaoke system
US10469601B2 (en) Content management apparatus
CN100545936C (en) Transcriber, playback control method and program
US20050108754A1 (en) Personalized content application
EP2053818A2 (en) Mobile wireless communication terminals, systems, methods and computer program products for managing playback of song files

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16761919

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15554710

Country of ref document: US

ENP Entry into the national phase in:

Ref document number: 2017565031

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16761919

Country of ref document: EP

Kind code of ref document: A1