WO2016144032A1 - Music providing method and music providing system - Google Patents

Music providing method and music providing system

Info

Publication number
WO2016144032A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound source
tag information
user
group
sound
Prior art date
Application number
PCT/KR2016/002043
Other languages
French (fr)
Korean (ko)
Inventor
김유식
Original Assignee
김유식
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020160012649A external-priority patent/KR101874441B1/en
Application filed by 김유식 filed Critical 김유식
Priority to US15/554,710 priority Critical patent/US20180239577A1/en
Priority to JP2017565031A priority patent/JP2018507503A/en
Publication of WO2016144032A1 publication Critical patent/WO2016144032A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/08Payment architectures
    • G06Q20/12Payment architectures specially adapted for electronic shopping systems
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10Digital recording or reproducing

Definitions

  • The present invention relates to a music providing method and a music providing system, and more particularly, to a method and system for assigning tag information to a sound source, generating a sound source group according to the assigned tag information, and providing the group to other users.
  • The background art described above is technical information that the inventor possessed for, or acquired in the course of, deriving the present invention, and is not necessarily a publicly known technique disclosed to the general public before the filing of the present application.
  • One embodiment of the present invention aims to provide sound sources according to the purpose, taste, emotion, and mood of the listener.
  • Another embodiment of the present invention aims to provide groups of sound sources according to the purpose, taste, emotion, and mood of the listener.
  • Another embodiment of the present invention aims to provide a playlist composed of sound sources that fit the purpose, taste, emotion, and mood of the listener.
  • Another embodiment of the present invention aims to allow tag information and sound source groups for sound sources, organized by the listener's purpose, taste, emotion, and mood, to be shared with others.
  • Another embodiment of the present invention aims to create a new revenue structure in the music market by distributing the revenue earned from sharing such tag information and sound source groups with others.
  • According to a first aspect of the present invention, a music providing method performed by a music providing system is disclosed, the method comprising the steps of matching tag information corresponding to a sound source with the sound source and storing it, and playing the sound source.
  • According to a second aspect of the present invention, a music providing system that performs at least one of providing a sound source and playing a sound source is disclosed, the system comprising a storage unit that matches tag information corresponding to a sound source with the sound source and stores it, and a playback unit that plays the sound source.
  • According to a third aspect of the present invention, a computer program is disclosed that is executed by a music providing system and stored in a recording medium in order to perform the music providing method according to the first aspect.
  • According to a fourth aspect of the present invention, a computer-readable recording medium is disclosed on which a program for performing the music providing method according to the first aspect is recorded.
  • According to any one of the above-described means for solving the problem, an embodiment of the present invention can provide sound sources according to the purpose, taste, emotion, and mood of the listener.
  • According to any one of the means for solving the problem, groups of sound sources can be provided according to the purpose, taste, emotion, and mood of the listener.
  • According to any one of the means for solving the problem, a playlist composed of sound sources that fit the purpose, taste, emotion, and mood of the listener can be provided.
  • According to any one of the means for solving the problem, tag information and sound source groups for such sound sources can be shared with others.
  • According to any one of the means for solving the problem, a new revenue structure can be created in the music market by distributing the revenue earned from sharing such tag information and sound source groups with others.
  • FIG. 1 is a configuration diagram of a music providing system according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing a music providing system according to an embodiment of the present invention.
  • FIG. 3 is a flowchart illustrating a music providing method according to an embodiment of the present invention.
  • FIG. 1 is a configuration diagram illustrating a music providing system 100 according to an embodiment of the present invention.
  • The music providing system 100 is an apparatus for performing a music providing method according to an embodiment of the present invention.
  • For example, the music providing system 100 may play sound sources by calling the list index of a newly created sound source group.
  • Specifically, the music providing system 100 may include a sound source providing server 10 that provides sound sources and a user terminal 20 that plays sound sources.
  • The sound source providing server 10 may store sound sources and various data related to them, and may exchange data for music playback with the user terminal 20 through the network N.
  • In this case, the network N may be implemented as any kind of wired or wireless network, such as a local area network (LAN), a wide area network (WAN), a value added network (VAN), a personal area network (PAN), a mobile radio communication network, WiBro (Wireless Broadband Internet), Mobile WiMAX, HSDPA (High Speed Downlink Packet Access), or a satellite communication network.
  • The music providing system 100 may also include a user terminal 20.
  • The user terminal 20 may be implemented as a computer, a portable terminal, a television, a wearable device, or the like that can connect to the sound source providing server 10 or to other user terminals through the network N.
  • The computer includes, for example, a notebook, a desktop, or a laptop equipped with a web browser, and the portable terminal is, for example, a handheld wireless communication device that ensures portability and mobility, such as a smartphone.
  • The television may include an IPTV (Internet Protocol Television), an Internet TV, a terrestrial TV, a cable TV, or the like.
  • The wearable device is a type of information processing device that can be worn directly on the human body, such as a watch, glasses, an accessory, clothing, or shoes, and may connect to the remote sound source providing server 10 via the network N, either directly or through another information processing device, or may be connected to another user terminal.
  • In addition, the music providing system 100 may carry out further embodiments of the music providing method by communicating with other user terminals that exist separately from the user terminal 20.
  • The music providing system 100 may include a storage unit 101, a tag assigning unit 102, a group generating unit 103, a group providing unit 104, a charging unit 105, and a playback unit 106; it need not include all of these components at the same time, and an embodiment of the present invention can be implemented with only some of them. In addition, these components may be provided and operate in either or both of the sound source providing server 10 and the user terminal 20 to implement one of the embodiments of the present invention.
  • As a specific embodiment, the storage unit 101 may match tag information corresponding to a sound source with the sound source and store it.
  • Here, the tag information is a kind of keyword information describing the sound source and may be stored as data separate from the sound source but matched with it.
  • The user can anticipate the mood of the music or search for sound sources according to the tag information.
  • Tag information may be set and stored by the provider of the sound source before the sound source is provided to the user; this is called self tag information.
  • In contrast, arbitrary tag information that a user adds directly to a sound source is called creation tag information.
  • The music providing system 100 may include a tag assigning unit 102 as a module that provides an interface for receiving creation tag information from the user.
  • The tag assigning unit 102 may receive arbitrary creation tag information from the user for a sound source.
  • The type of information that can be received from the user as creation tag information is not limited; the user can enter any tag for the sound source. For example, the user may enter creation tag information according to an emotional state such as joy, excitement, or depression, a mood such as a rainy day, a sunny day, or calm, or a purpose such as exercising, eating, or needing to concentrate, and may also write a memorable place name or a person's name as creation tag information, assigning personal and unique tag information to the sound source.
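A minimal sketch of the matching store described above — creation tags kept as data separate from the audio but matched to each sound source — is given below. The class and field names (SoundSource, TagStore, and so on) are illustrative assumptions, not names from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class SoundSource:
    source_id: str                                          # e.g. a file path or catalogue id
    title: str
    self_tags: set[str] = field(default_factory=set)        # set by the sound source provider
    creation_tags: set[str] = field(default_factory=set)    # added freely by the user

class TagStore:
    """Keeps creation tags matched to sound sources, separate from the audio data itself."""

    def __init__(self) -> None:
        self.sources: dict[str, SoundSource] = {}

    def add_source(self, source: SoundSource) -> None:
        self.sources[source.source_id] = source

    def assign_creation_tag(self, source_id: str, tag: str) -> None:
        # The kind of tag is not restricted: an emotion, a place name, a person's name, ...
        self.sources[source_id].creation_tags.add(tag)

    def search_by_tag(self, tag: str) -> list[SoundSource]:
        """Find sound sources whose self tags or creation tags contain the given tag."""
        return [s for s in self.sources.values()
                if tag in s.self_tags or tag in s.creation_tags]
```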
  • Furthermore, the tag assigning unit 102 may assign image tag information to a sound source by presenting images such as emoticons and having the user select one.
  • As another embodiment, the tag assigning unit 102 may recommend one or more pieces of recommended tag information related to the creation tag information received from the user, and assign tag information by receiving the user's selection of one or more of the recommended tags.
  • Recommended tag information is tag information related to the creation tag information entered by the user, and may include tag information with a similar spelling or pronunciation, tag information that contains the creation tag information, or tag information that shares characteristics with it.
  • Alternatively, the tag information entered by many users may be analyzed, and tag information that is frequently entered in succession may be treated as related tag information and offered as a recommendation.
  • Popular tag information or previously entered creation tag information may also be recommended.
  • The tag assigning unit 102 may present one or more pieces of recommended tag information as described above so that the user can select from a list or enter the tag as text.
  • In addition, when the tag assigning unit 102 receives an input of tag information, it may convert the received tag information into the most similar tag information among the previously registered tag information and register the converted tag information as creation tag information.
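One way the recommendation and conversion behaviour above could be realized is by comparing a newly entered tag against already registered tags by spelling similarity and by how often tags have been entered together. The sketch below uses Python's standard difflib for the similarity part; the co-occurrence counter, thresholds, and all names are illustrative assumptions rather than the patent's own method.

```python
import difflib
from collections import Counter

registered_tags = {"rainy day", "sunny day", "workout", "calm", "joy"}
co_occurrence = Counter()   # counts of (tag_a, tag_b) pairs entered in succession by users

def recommend_tags(entered: str, limit: int = 5) -> list[str]:
    """Recommend registered tags with a similar spelling, that contain the input, or that often follow it."""
    similar = difflib.get_close_matches(entered, registered_tags, n=limit, cutoff=0.6)
    containing = [t for t in registered_tags if entered in t and t not in similar]
    frequent = [b for (a, b), _ in co_occurrence.most_common() if a == entered]
    seen, out = set(), []
    for tag in similar + containing + frequent:
        if tag not in seen:
            seen.add(tag)
            out.append(tag)
    return out[:limit]

def normalize_tag(entered: str) -> str:
    """Convert an entered tag to the most similar previously registered tag, if one is close enough."""
    match = difflib.get_close_matches(entered, registered_tags, n=1, cutoff=0.8)
    return match[0] if match else entered
```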
  • The tag information received by the tag assigning unit 102 as described above may be stored by the storage unit 101 in a match with the sound source.
  • Meanwhile, the group generating unit 103 may generate a sound source group containing one or more sound sources according to the tag information.
  • For example, a sound source group containing one or more sound sources may be generated according to the creation tag information.
  • In this case, the sound source group may be generated by searching for and grouping one or more sound sources whose tag information is identical or related to the creation tag information.
  • Related tag information may include tag information with a similar spelling or pronunciation, tag information that contains the creation tag information, or tag information that shares characteristics with it.
  • Alternatively, the tag information entered by many users may be analyzed, and tag information that is frequently entered in succession may be treated as related tag information and grouped together.
  • The playback unit 106, described later, may include the sound sources contained in the sound source group generated by the group generating unit 103 in a playlist and play the playlist.
  • As another embodiment of the group generating unit 103, a sound source group may be generated by classifying tag information, including creation tag information, using database management techniques. For example, operators such as and, or, if, incl., excl., not, and than can be used, and sound sources can be grouped according to conditions such as "Korean ballads released after 2000; of those, the dark songs; excluding boy groups" or "dance music; of the very fast songs, excluding Korean songs". Creation tag information makes it possible to impose personal conditions.
  • For example, if the user has entered "Cheolsu" and "Haeundae" as creation tag information on one or more sound sources, a sound source group can be generated by entering a condition such as "of the music I listened to with Cheolsu, the music I listened to at Haeundae".
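A minimal sketch of how such condition-based grouping might be evaluated over the stored tags, using plain set operations (a subset test for required tags, an intersection test for exclusions) to stand in for the and / not / excl. operators. The function and variable names are illustrative assumptions, not the patent's own query syntax.

```python
def group_by_condition(tags_by_source: dict[str, set[str]],
                       require: set[str],
                       exclude: set[str] | None = None) -> list[str]:
    """Group the sound sources whose tags contain every required tag and no excluded tag."""
    exclude = exclude or set()
    return [source_id for source_id, tags in tags_by_source.items()
            if require <= tags and not (exclude & tags)]

# "Of the music I listened to with Cheolsu, the music I listened to at Haeundae":
# group_by_condition(tags_by_source, require={"Cheolsu", "Haeundae"})
# "Korean ballads released after 2000, dark songs, excluding boy groups":
# group_by_condition(tags_by_source,
#                    require={"Korean", "ballad", "after 2000", "dark"},
#                    exclude={"boy group"})
```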
  • The generated sound source group may be created by moving the sound sources to a new folder and storing them there, but preferably it exists as defined by sound source group information.
  • Sound source group information refers to the information that specifies a sound source group, including a list index of which sound sources are contained in the group generated by the group generating unit 103, the name of the group designated by the user, and the like.
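As noted above, a group can be represented by its group information alone — a user-chosen name plus a list index of member sound sources — rather than by copying files into a folder. A minimal illustrative structure (names assumed, not taken from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class SoundSourceGroupInfo:
    name: str                                              # group name chosen by the user
    list_index: list[str] = field(default_factory=list)    # identifiers of member sound sources
```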
  • With respect to sound source groups, when the music providing system 100 communicates with another user terminal separate from the user terminal 20, the music providing system 100 may further include a group providing unit 104 that provides the sound source group generated by the group generating unit 103 to the other user terminal.
  • In this case, the sound source group may be provided by transmitting the sound source files or by granting real-time streaming rights to the sound sources, and the tag information of the sound sources contained in the group may be provided together with the group.
  • Alternatively, the group providing unit 104 may provide the creation tag information stored in a match with a sound source to other user terminals.
  • For example, the group providing unit 104 may provide a sound source group to another user terminal according to the user's input. If there are one or more generated sound source groups and one or more other user terminals, the user's selection of one or more groups and one or more terminals may be received, and the selected groups may be provided to the selected terminals. That is, a user can directly present or recommend a sound source group to other users.
  • As another embodiment, the group providing unit 104 may provide sound source group information for the generated sound source group to other user terminals, receive from another user terminal an input selecting a sound source group according to that information, and then provide the selected sound source group to that terminal.
  • The sound source group information may be provided by uploading the group information of the sound source group generated on the user terminal 20 to the sound source providing server 10. In this case, instead of directly uploading the sound source group including the sound source files, only the sound source group information is uploaded, which provides advantages in terms of data volume, time, and speed.
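Because only the group information, not the audio, needs to travel to the server, sharing a group can be as light as serializing that small structure. The snippet below is a hedged illustration of what such an upload payload could look like; the function and field names are assumptions, and no particular server API is implied.

```python
import json

def build_group_upload_payload(name: str, list_index: list[str]) -> str:
    """Serialize only the group information (name + list index), not the audio files."""
    return json.dumps({"name": name, "list_index": list_index}, ensure_ascii=False)

# payload = build_group_upload_payload(
#     "music with Cheolsu at Haeundae", ["song-0017", "song-0042", "song-0103"])
# The payload is a few hundred bytes, versus megabytes for the sound sources themselves.
```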
  • Through the above embodiments of the group providing unit 104, a user's own sound source group, generated according to tag information including creation tag information, can be shared with other users. This has the same effect as sharing the user's own emotions with other users.
  • In other words, by using the group providing unit 104, the sound source groups generated by many users can be compared with one another to find similar groups in which a predetermined number of sound sources overlap, and the other sound source groups of the users who created those similar groups can be recommended to each other. In this way, a sound source group matching a user's emotional taste can be provided, or a user with a similar emotional taste can be recommended as a friend.
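The similarity search described above amounts to counting how many sound sources two groups share. Below is a minimal version using set intersection; the overlap threshold plays the role of the "predetermined number" in the text, and all names are illustrative assumptions.

```python
def find_similar_group_owners(my_group: set[str],
                              groups_by_user: dict[str, set[str]],
                              min_overlap: int = 5) -> list[str]:
    """Return the ids of users whose groups share at least min_overlap sound sources with mine."""
    similar_users = []
    for user_id, members in groups_by_user.items():
        if len(my_group & members) >= min_overlap:
            similar_users.append(user_id)
    return similar_users

# Users returned here could then have their other groups recommended back,
# or be suggested as friends with a similar emotional taste.
```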
  • Meanwhile, the music providing system 100 may further include a charging unit 105 that charges a fee to another user terminal when a sound source group is provided, and the charging unit 105 may compare the list of sound sources already held by the other user terminal with the sound sources contained in the group and charge only for the sound sources not yet owned. As a result, when a user is provided with a sound source group, the user does not have to pay twice for sound sources that are both contained in the group and already in use.
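Charging only for tracks the receiving user does not already own is a set difference between the group's list index and that user's library. A sketch under assumed names and an assumed flat per-track price:

```python
def charge_for_group(group_members: set[str],
                     already_owned: set[str],
                     price_per_track: float = 0.99) -> tuple[set[str], float]:
    """Charge only for the tracks in the group that the other user does not yet own."""
    unowned = group_members - already_owned
    return unowned, len(unowned) * price_per_track

# unowned, fee = charge_for_group({"a", "b", "c", "d"}, already_owned={"b", "d"})
# The fee covers only "a" and "c"; tracks the user already holds are not billed twice.
```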
  • In addition, the charging unit 105 may distribute the revenue obtained from providing a sound source group to other users to the user who provided the group. Because individual pieces of music gain new value when combined with other music, this can serve as a platform on which the general public publishes sound source groups combining particular pieces of music, or on which celebrities or professionals publish special-purpose omnibus albums. For example, sound source groups can be presented to other users under topics such as "prenatal music recommended by Dr. ~, a specialist" or "party music that top star ~ enjoys". By distributing a share of the revenue to the user who provided the group, the creation of sound source groups is rewarded and encouraged, and other users use and pay for sound source groups created according to purpose, taste, emotion, and mood, creating a new revenue structure in the music market as a whole.
  • In addition, the playback unit 106 may play the sound sources stored by the storage unit 101.
  • As described above, the storage unit 101 may be included in either or both of the user terminal 20 and the sound source providing server 10.
  • As an embodiment of the present invention, when the storage unit 101 is included in the sound source providing server 10 and the playback unit 106 is included in the user terminal 20, the playback unit 106 may download the sound source file from the sound source providing server 10, store it, and play it, or may stream the sound source stored in the sound source providing server 10 in real time.
  • In one embodiment, the playback unit 106 may receive an input of tag information from the user, include in a playlist the sound sources whose tag information is identical or related to the received tag information, and play the playlist.
  • Here, the tag information may include at least one of the self tag information and the creation tag information stored in correspondence with the sound sources.
  • Alternatively, the playback unit 106 may receive from the user a selection input for a sound source group generated by the group generating unit 103, include the sound sources contained in the selected group in a playlist, and play the playlist.
  • A sound source does not need to be moved to a separate folder in order to be included in the playlist; this can be implemented by generating a list index of the sound sources to be played.
  • A playlist configured as described above follows the tag information the user has added to the sound sources, so the user can be provided with a playlist that matches his or her purpose, taste, emotion, and mood at that moment. For example, if a user has entered the name of a boyfriend as creation tag information on one or more sound sources and later breaks up with him, she can be provided with a playlist by entering that name as tag information and can listen at any time to music that matches her emotions.
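The playlist itself can be just a list index over the sound sources whose tags equal or relate to the entered tag, with the files left where they are. The sketch below assumes the related tags have already been expanded (for example by the recommendation sketch earlier); all names are illustrative assumptions.

```python
def build_playlist(tags_by_source: dict[str, set[str]],
                   entered_tag: str,
                   related_tags: set[str] | None = None) -> list[str]:
    """Return a list index of sound sources whose tags match the entered tag or a related tag."""
    wanted = {entered_tag} | (related_tags or set())
    return [source_id for source_id, tags in tags_by_source.items()
            if wanted & tags]   # files are not moved; only their identifiers are listed
```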
  • The playback unit 106 may play the sound sources in the playlist randomly, in an order sorted by a predetermined criterion, or in an order set by the user.
  • In addition, when the user has entered setting values for each music file in the group (for example, volume, bass, treble, distortion, or echo), the files may be played back according to those settings.
  • In addition, when the playback unit 106 plays one or more sound sources grouped by predetermined tag information and several pieces of tag information correspond to those sound sources, the playback unit 106 may assign a weight to each piece of tag information, sort the sound sources to be included in the playlist accordingly, and play the sound sources in the group in the sorted order.
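When several tags apply to the grouped sound sources, the ordering described above can be realized by giving each tag a weight and sorting each source by the summed weight of its matching tags. A small illustrative sketch; the weights, helper names, and data layout are assumptions.

```python
def order_by_tag_weight(tags_by_source: dict[str, set[str]],
                        group: list[str],
                        tag_weights: dict[str, float]) -> list[str]:
    """Sort the grouped sound sources by the summed weight of their matching tags."""
    def score(source_id: str) -> float:
        return sum(tag_weights.get(tag, 0.0) for tag in tags_by_source[source_id])
    return sorted(group, key=score, reverse=True)

# order_by_tag_weight(tags_by_source, group, {"calm": 2.0, "rainy day": 1.0, "workout": 0.5})
```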
  • When playing sound sources that match predetermined tag information, the playback unit 106 may check the storage locations of the music files corresponding to that tag information, list the storage locations, and play the sound sources by calling them from those locations.
  • The music providing method according to the embodiment shown in FIG. 3 includes steps processed in time series by the music providing system 100 shown in FIGS. 1 and 2. Therefore, even where omitted below, the description of the music providing system 100 given with reference to FIGS. 1 and 2 also applies to the music providing method according to the embodiment shown in FIG. 3.
  • The music providing method may include matching tag information corresponding to a sound source with the sound source and storing it, and playing the sound source.
  • Specifically, the music providing method may include receiving arbitrary creation tag information from a user for a sound source (S301) and matching the received creation tag information with the sound source and storing it (S302).
  • Step S302 may include recommending one or more pieces of recommended tag information related to the creation tag information received from the user, receiving the user's selection of one or more of the recommended tags, and matching the received recommended tag information with the sound source and storing it.
  • The music providing method may further include generating a sound source group according to the creation tag information (S303).
  • In this case, a sound source group containing one or more sound sources may be generated according to the creation tag information entered by the user.
  • The one or more sound sources may include sound sources whose tag information is identical or related to the creation tag information.
  • The sound sources contained in the generated sound source group may be included in a playlist and played in a later step.
  • Step S303 may further include providing the generated sound source group to another user terminal.
  • In this way, the user's own sound source group can be shared by presenting or recommending it to other users.
  • In addition, the music providing method may further include charging a fee to another user terminal when the sound source group is provided.
  • The charging step may include comparing the list of sound sources owned by the other user terminal with the sound sources contained in the sound source group and charging only for the sound sources not yet owned.
  • The music providing method may include playing the sound sources contained in the sound source group (S304).
  • Specifically, the music providing method may include receiving from the user a selection input for a sound source group, including the sound sources contained in the selected group in a playlist, and playing the playlist.
  • Alternatively, the music providing method may include receiving an input of tag information from the user, including in a playlist the sound sources whose tag information is identical or related to the received tag information, and playing the playlist.
  • The sound sources may be played through this playing step.
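For orientation, the steps above can be condensed into one short, self-contained flow: receive a creation tag, store it against the sound source, group sources carrying that tag, and build the list index to play. Everything here — the function name, the plain-dict storage, and the step mapping — is an illustrative assumption, not the patent's implementation.

```python
def music_providing_method(tags_by_source: dict[str, set[str]],
                           source_id: str, entered_tag: str) -> list[str]:
    """Illustrative walk through the steps of FIG. 3 (all names assumed)."""
    # S301: receive arbitrary creation tag information from the user for a sound source
    tag = entered_tag.strip()
    # S302: match the creation tag with the sound source and store it
    tags_by_source.setdefault(source_id, set()).add(tag)
    # S303: generate a sound source group of sources carrying the same tag
    group = [sid for sid, tags in tags_by_source.items() if tag in tags]
    # Playing step: the group doubles as the playlist's list index; files are not moved
    return group
```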
  • The music providing method according to the embodiment described with reference to FIG. 3 may also be implemented in the form of a recording medium containing instructions executable by a computer, such as a program module executed by the computer.
  • Computer readable media can be any available media that can be accessed by a computer and include both volatile and nonvolatile media, and removable and non-removable media.
  • Computer readable media may include both computer storage media and communication media.
  • Computer storage media include both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data.
  • Communication media typically include computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transmission mechanism, and include any information delivery media.
  • The music providing method may also be implemented as a computer program (or computer program product) including instructions executable by a computer.
  • The computer program includes programmable machine instructions processed by a processor and may be implemented in a high-level programming language, an object-oriented programming language, an assembly language, or a machine language.
  • The computer program may also be recorded on a tangible computer readable medium (e.g., memory, a hard disk, magnetic or optical media, or a solid-state drive).
  • The music providing method may be implemented by a computing device executing such a computer program.
  • The computing device may include at least some of: a processor, a memory, a storage device, a high speed interface connected to the memory and a high speed expansion port, and a low speed interface connected to a low speed bus and the storage device.
  • Each of these components is connected to the others using various buses and may be mounted on a common motherboard or in another suitable manner.
  • The processor may process instructions within the computing device, such as instructions stored in the memory or the storage device, in order to display graphical information for providing a graphical user interface (GUI) on an external input/output device, such as a display connected to the high speed interface. In other embodiments, multiple processors and/or multiple buses may be used, as appropriate, together with multiple memories and memory types.
  • The processor may also be implemented as a chipset made up of chips that include a plurality of independent analog and/or digital processors.
  • The memory stores information within the computing device.
  • The memory may consist of a volatile memory unit or a collection of such units.
  • Alternatively, the memory may consist of a nonvolatile memory unit or a collection of such units.
  • The memory may also be another form of computer readable medium, such as a magnetic or optical disk.
  • The storage device can provide a large amount of storage space to the computing device.
  • The storage device may be a computer readable medium or a configuration including such a medium, and may include, for example, devices or other configurations within a storage area network (SAN), a floppy disk device, a hard disk device, an optical disk device, a tape device, flash memory, or another similar semiconductor memory device or device array.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Tourism & Hospitality (AREA)
  • Accounting & Taxation (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Finance (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates to a music providing method and a music providing system and, more specifically, to a method and a system that assign tag information to a sound source and generate a sound source group according to the assigned tag information so as to provide the sound source group to other users, and that can provide a function of receiving arbitrary creation tag information from a user for a sound source, matching the received creation tag information with the sound source, and storing it.

Description

Music providing method and music providing system
The present invention relates to a music providing method and a music providing system, and more particularly, to a method and system for assigning tag information to a sound source, generating a sound source group according to the assigned tag information, and providing the group to other users.
Recently, owing to the higher performance, miniaturization, and lower power consumption of electronic devices, much of everyday human life is carried out together with smart devices, and expectations are now rising for smart devices that go beyond functional convenience and help refine the emotional side of life. In other words, an emotional evolution of smart devices is being demanded.
In the field of music provision, technology has developed with a focus on conveniently delivering digital sound sources and allowing them to be played at any time (Korean Patent Publication No. 2003-0087791, "Method of receiving, storing and playing digital sound sources").
However, the development of technology capable of providing sound sources according to the listener's emotions has been insufficient.
Unlike watching videos, listening to music does not require looking at a screen, so people often listen for long stretches while moving around or exercising, typically using a random play function. Most people store hundreds of sound source files or more, yet they find it difficult to selectively listen to the songs they want according to their purpose, taste, emotion, and mood, because with the technology available so far they have had to select each file individually every time, or create a new folder and move the sound source files into it.
Therefore, a technique for solving the above problems has become necessary.
Meanwhile, the background art described above is technical information that the inventor possessed for, or acquired in the course of, deriving the present invention, and is not necessarily a publicly known technique disclosed to the general public before the filing of the present application.
One embodiment of the present invention aims to provide sound sources according to the purpose, taste, emotion, and mood of the listener.
In addition, an embodiment of the present invention aims to provide groups of sound sources according to the purpose, taste, emotion, and mood of the listener.
In addition, an embodiment of the present invention aims to provide a playlist composed of sound sources that fit the purpose, taste, emotion, and mood of the listener.
In addition, an embodiment of the present invention aims to allow tag information and sound source groups for sound sources, organized by the listener's purpose, taste, emotion, and mood, to be shared with others.
In addition, an embodiment of the present invention aims to create a new revenue structure in the music market by distributing the revenue earned from sharing such tag information and sound source groups with others.
As a technical means for achieving the above technical objects, according to a first aspect of the present invention, a music providing method performed by a music providing system is disclosed, the method comprising the steps of matching tag information corresponding to a sound source with the sound source and storing it, and playing the sound source.
According to a second aspect of the present invention, a music providing system that performs at least one of providing a sound source and playing a sound source is disclosed, the system comprising a storage unit that matches tag information corresponding to a sound source with the sound source and stores it, and a playback unit that plays the sound source.
According to a third aspect of the present invention, a computer program is disclosed that is executed by a music providing system and stored in a recording medium in order to perform the music providing method according to the first aspect.
According to a fourth aspect of the present invention, a computer-readable recording medium is disclosed on which a program for performing the music providing method according to the first aspect is recorded.
According to any one of the above-described means for solving the problem, an embodiment of the present invention can provide sound sources according to the purpose, taste, emotion, and mood of the listener.
In addition, according to any one of the means for solving the problem, groups of sound sources can be provided according to the purpose, taste, emotion, and mood of the listener.
In addition, according to any one of the means for solving the problem, a playlist composed of sound sources that fit the purpose, taste, emotion, and mood of the listener can be provided.
In addition, according to any one of the means for solving the problem, tag information and sound source groups for such sound sources can be shared with others.
In addition, according to any one of the means for solving the problem, a new revenue structure can be created in the music market by distributing the revenue earned from sharing such tag information and sound source groups with others.
The effects obtainable from the present invention are not limited to those mentioned above, and other effects not mentioned here will be clearly understood from the following description by those of ordinary skill in the art to which the present invention belongs.
FIG. 1 is a configuration diagram of a music providing system according to an embodiment of the present invention.
FIG. 2 is a block diagram showing a music providing system according to an embodiment of the present invention.
FIG. 3 is a flowchart illustrating a music providing method according to an embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those of ordinary skill in the art to which the present invention belongs can easily carry them out. The present invention may, however, be embodied in many different forms and is not limited to the embodiments described herein. In the drawings, parts irrelevant to the description have been omitted for clarity, and like reference numerals designate like parts throughout the specification.
Throughout the specification, when a part is said to be "connected" to another part, this includes not only the case where it is "directly connected" but also the case where it is "electrically connected" with another element in between. In addition, when a part is said to "include" a certain component, this means that it may further include other components rather than excluding them, unless specifically stated otherwise.
Hereinafter, the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a configuration diagram illustrating a music providing system 100 according to an embodiment of the present invention.
The music providing system 100 is an apparatus for performing a music providing method according to an embodiment of the present invention; for example, it may play sound sources by calling the list index of a newly created sound source group. Specifically, it may include a sound source providing server 10 that provides sound sources and a user terminal 20 that plays sound sources.
The sound source providing server 10 may store sound sources and various data related to them, and may exchange data for music playback with the user terminal 20 through the network N.
In this case, the network N may be implemented as any kind of wired or wireless network, such as a local area network (LAN), a wide area network (WAN), a value added network (VAN), a personal area network (PAN), a mobile radio communication network, WiBro (Wireless Broadband Internet), Mobile WiMAX, HSDPA (High Speed Downlink Packet Access), or a satellite communication network.
The music providing system 100 may also include a user terminal 20. The user terminal 20 may be implemented as a computer, a portable terminal, a television, a wearable device, or the like that can connect to the sound source providing server 10 or to other user terminals through the network N. Here, the computer includes, for example, a notebook, a desktop, or a laptop equipped with a web browser, and the portable terminal is, for example, a wireless communication device that ensures portability and mobility, and may include any kind of handheld wireless communication device such as a PCS (Personal Communication System), PDC (Personal Digital Cellular), PHS (Personal Handyphone System), PDA (Personal Digital Assistant), GSM (Global System for Mobile communications), IMT (International Mobile Telecommunication)-2000, CDMA (Code Division Multiple Access)-2000, or W-CDMA (W-Code Division Multiple Access) device, a WiBro (Wireless Broadband Internet) terminal, a smartphone, or a Mobile WiMAX (Mobile Worldwide Interoperability for Microwave Access) device. In addition, the television may include an IPTV (Internet Protocol Television), an Internet TV, a terrestrial TV, a cable TV, or the like. Further, the wearable device is a type of information processing device that can be worn directly on the human body, such as a watch, glasses, an accessory, clothing, or shoes, and may connect to the remote sound source providing server 10 via the network N, either directly or through another information processing device, or may be connected to another user terminal.
In addition, the music providing system 100 may carry out further embodiments of the music providing method by communicating with other user terminals that exist separately from the user terminal 20.
FIG. 2 is a block diagram illustrating a music providing system 100 according to an embodiment of the present invention. According to FIG. 2, the music providing system 100 may include a storage unit 101, a tag assigning unit 102, a group generating unit 103, a group providing unit 104, a charging unit 105, and a playback unit 106; it need not include all of these components at the same time, and an embodiment of the present invention can be implemented with only some of them. In addition, these components may be provided and operate in either or both of the sound source providing server 10 and the user terminal 20 to implement one of the embodiments of the present invention.
As a specific embodiment, the storage unit 101 may match tag information corresponding to a sound source with the sound source and store it. Here, the tag information is a kind of keyword information describing the sound source and may be stored as data separate from the sound source but matched with it. The user can anticipate the mood of the music or search for sound sources according to the tag information. Tag information may be set and stored by the provider of the sound source before the sound source is provided to the user; this is called self tag information. In contrast, arbitrary tag information that a user adds directly to a sound source is called creation tag information. The music providing system 100 may include a tag assigning unit 102 as a module that provides an interface for receiving creation tag information from the user.
The tag assigning unit 102 may receive arbitrary creation tag information from the user for a sound source. The type of information that can be received from the user as creation tag information is not limited; the user can enter any tag for the sound source. For example, the user may enter creation tag information according to an emotional state such as joy, excitement, or depression, a mood such as a rainy day, a sunny day, or calm, or a purpose such as exercising, eating, or needing to concentrate, and may also write a memorable place name or a person's name as creation tag information, assigning personal and unique tag information to the sound source. Furthermore, the tag assigning unit 102 may assign image tag information to a sound source by presenting images such as emoticons and having the user select one.
As another embodiment, the tag assigning unit 102 may recommend one or more pieces of recommended tag information related to the creation tag information entered by the user, and assign tag information by receiving the user's selection of one or more of the recommended tags. Recommended tag information is tag information related to the creation tag information entered by the user, and may include tag information with a similar spelling or pronunciation, tag information that contains the creation tag information, or tag information that shares characteristics with it. Alternatively, the tag information entered by many users may be analyzed, and tag information that is frequently entered in succession may be treated as related and offered as a recommendation; popular tag information or previously entered creation tag information may also be recommended. The tag assigning unit 102 may present one or more pieces of recommended tag information to the user so that the user can select from a list or enter the tag as text.
In addition, when the tag assigning unit 102 receives an input of tag information, it may convert the received tag information into the most similar tag information among the previously registered tag information and register the converted tag information as creation tag information.
The tag information received by the tag assigning unit 102 as described above may be stored by the storage unit 101 in a match with the sound source.
Meanwhile, the group generating unit 103 may generate a sound source group containing one or more sound sources according to the tag information. For example, a sound source group containing one or more sound sources may be generated according to the creation tag information; in this case, the group may be generated by searching for and grouping one or more sound sources whose tag information is identical or related to the creation tag information. Related tag information may include tag information with a similar spelling or pronunciation, tag information that contains the creation tag information, or tag information that shares characteristics with it. Alternatively, the tag information entered by many users may be analyzed, and tag information frequently entered in succession may be treated as related and grouped together. The playback unit 106, described later, may include the sound sources contained in the sound source group generated by the group generating unit 103 in a playlist and play the playlist.
As another embodiment of the group generating unit 103, a sound source group may be generated by classifying tag information, including creation tag information, using database management techniques. For example, operators such as and, or, if, incl., excl., not, and than can be used, and sound sources can be grouped according to conditions such as "Korean ballads released after 2000; of those, the dark songs; excluding boy groups" or "dance music; of the very fast songs, excluding Korean songs". Creation tag information makes personal conditions possible: for example, if the user has entered "Cheolsu" and "Haeundae" as creation tag information on one or more sound sources, a sound source group can be generated by entering a condition such as "of the music I listened to with Cheolsu, the music I listened to at Haeundae".
The generated sound source group may be created by moving the sound sources to a new folder and storing them there, but preferably it exists as defined by sound source group information. Sound source group information refers to the information that specifies a sound source group, including a list index of which sound sources are contained in the group generated by the group generating unit 103, the name of the group designated by the user, and the like.
With respect to sound source groups, when the music providing system 100 communicates with another user terminal separate from the user terminal 20, the music providing system 100 may further include a group providing unit 104 that provides the sound source group generated by the group generation unit 103 to the other user terminal. The sound source group may be provided by transmitting the sound source files or by granting real-time streaming rights to the sound sources, and the tag information of the sound sources in the group may be provided together with the group. Alternatively, the group providing unit 104 may provide the created tag information stored in association with the sound sources to the other user terminal.
For example, the group providing unit 104 may provide a sound source group to another user terminal according to the user's input. If there are multiple generated sound source groups and multiple other user terminals, the group providing unit may receive from the user a selection of one or more sound source groups and one or more other user terminals and provide the selected groups to the selected terminals. In other words, the user can directly gift or recommend a sound source group to another user.
As another embodiment, the group providing unit 104 may provide the sound source group information of the generated group to another user terminal, receive from that terminal an input selecting a sound source group based on the information, and then provide the selected group to the terminal. The sound source group information may be provided by uploading the group information of a sound source group generated on the user terminal 20 to the sound source providing server 10. By uploading only the group information instead of the group containing the actual sound sources, advantages are gained in terms of data volume, time, and speed.
Through the above embodiments of the group providing unit 104, a user's own sound source group, generated according to tag information including created tag information, can be shared with other users. This has the effect of sharing the user's own sensibility with other users.
In other words, using the embodiments of the group providing unit 104, sound source groups created by multiple users can be compared to find similar groups in which a predetermined number of sound sources overlap, and the other groups created by the users of those similar groups can be recommended to one another. In this way, sound source groups matching a user's emotional taste can be provided, or users with similar emotional tastes can be recommended to each other as friends.
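A minimal sketch of this overlap-based recommendation is shown below; the similarity threshold, user names, and track identifiers are assumptions chosen for the example.

```python
# Illustrative sketch: two groups are "similar" when at least `threshold`
# of their tracks overlap; users of similar groups can then be recommended
# each other's other groups.
def similar_group_pairs(groups, threshold=3):
    pairs = []
    for i, (user_a, tracks_a) in enumerate(groups):
        for user_b, tracks_b in groups[i + 1:]:
            if len(set(tracks_a) & set(tracks_b)) >= threshold:
                pairs.append((user_a, user_b))
    return pairs

groups = [
    ("alice", ["t1", "t2", "t3", "t4"]),
    ("bob",   ["t2", "t3", "t4", "t9"]),
    ("carol", ["t7", "t8"]),
]
print(similar_group_pairs(groups))  # [('alice', 'bob')]
```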
Meanwhile, the music providing system 100 may further include a charging unit 105 that charges the other user terminal a fee for the provision of a sound source group, and the charging unit 105 may compare the list of sound sources already held by the other user terminal with the sound sources in the group and charge only for the sound sources not yet held. As a result, when receiving a sound source group, the other user does not have to pay again for sound sources already in use that are also included in the group.
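The charging rule can be pictured as a simple set difference, as in the sketch below; the per-track price and the identifiers are illustrative assumptions.

```python
# Illustrative sketch: bill only for the tracks in the group that the
# receiving terminal does not already hold.
def charge_for_group(group_track_ids, owned_track_ids, price_per_track=0.99):
    unowned = sorted(set(group_track_ids) - set(owned_track_ids))
    return unowned, len(unowned) * price_per_track

to_bill, amount = charge_for_group(
    group_track_ids=["t1", "t2", "t3", "t4"],
    owned_track_ids=["t2", "t4"],
)
print(to_bill, amount)  # ['t1', 't3'] 1.98
```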
In addition, the charging unit 105 may distribute the revenue earned from providing a sound source group to other users with the user who provided the group. Since an individual piece of music gains new value when combined with other music, this can serve as a platform on which ordinary users publish sound source groups combining particular pieces of music, or on which celebrities or experts publish special-purpose omnibus albums. For example, sound source groups can be presented to other users under themes such as "prenatal music recommended by specialist Dr. So-and-so" or "party music that top star So-and-so enjoys." Distributing a share of the revenue to the user who provided the group rewards the creation of sound source groups and encourages further creation. Other users, in turn, use groups created to suit their purpose, taste, emotion, and mood and pay for them, creating a new revenue structure in the music market as a whole.
The playback unit 106 may play the sound sources stored by the storage unit 101. As described above, the storage unit 101 may be included in either or both of the user terminal 20 and the sound source providing server 10. In one embodiment of the present invention, when the storage unit 101 is included in the sound source providing server 10 and the playback unit 106 is included in the user terminal 20, the playback unit 106 may download a sound source file from the server, store it, and play it, or may play a sound source stored on the server by real-time streaming.
In one embodiment, the playback unit 106 may receive an input of tag information from the user, include in a playlist the sound sources having tag information identical or related to the received tag information, and play the playlist. Here, the tag information may include at least one of the self tag information and the created tag information stored in correspondence with the sound sources.
In another embodiment, the playback unit 106 may receive from the user a selection input for a sound source group generated by the group generation unit 103, include the sound sources of the selected group in a playlist, and play the playlist. The sound sources do not need to be moved to a separate folder to be included in the playlist; this can be implemented by generating a list index of the sound sources to be played.
Such a playlist is composed according to the tag information the user has attached to the sound sources, so the user can be provided with a playlist that matches his or her purpose, taste, emotion, and mood at any given moment. For example, if a user has entered "Cheolsu," the name of an ex-boyfriend, as created tag information on one or more sound sources, then whenever the ex-boyfriend comes to mind the user can enter the tag "Cheolsu" and receive a playlist, and can thus always listen to music that fits the emotion of the moment.
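A sketch of building such a tag-driven playlist is given below; the case-insensitive exact match used for "identical or related" tag information is an assumption made for the example, not the matching rule of the specification.

```python
# Illustrative sketch: include every sound source whose stored tag
# information matches the tag entered by the user.
library = [
    {"title": "Song A", "tags": {"ballad", "Cheolsu", "Haeundae"}},
    {"title": "Song B", "tags": {"dance", "workout"}},
    {"title": "Song C", "tags": {"Cheolsu", "rainy day"}},
]

def build_playlist(tracks, query_tag):
    query = query_tag.casefold()
    return [t["title"] for t in tracks
            if any(query == tag.casefold() for tag in t["tags"])]

print(build_playlist(library, "Cheolsu"))  # ['Song A', 'Song C']
```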
In an embodiment of the present invention, the playback unit 106 may play the sound sources in a playlist randomly, in an order sorted by a predetermined criterion, or in an order set by the user. In addition, if the user provides input for the setting values of the individual music files in a group (for example, volume, bass, treble, distortion, echo, and so on), the music files may be played according to the input setting values.
In addition, when playing one or more sound sources grouped by given tag information, if a sound source corresponds to a plurality of pieces of tag information, the playback unit 106 may assign a weight to each piece of tag information, sort the sound sources in the playlist accordingly, and play the sound sources in the group in the sorted order.
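One way to picture this weighting is sketched below; the weight table and sample data are assumptions for illustration.

```python
# Illustrative sketch: each matching tag contributes a weight, and the
# playlist is sorted by the total score per track.
tag_weights = {"Haeundae": 2.0, "Cheolsu": 1.0, "summer": 0.5}

tracks = [
    {"title": "Song A", "tags": {"Cheolsu", "Haeundae"}},
    {"title": "Song B", "tags": {"summer"}},
    {"title": "Song C", "tags": {"Cheolsu"}},
]

def score(track):
    return sum(tag_weights.get(tag, 0.0) for tag in track["tags"])

ordered = sorted(tracks, key=score, reverse=True)
print([t["title"] for t in ordered])  # ['Song A', 'Song C', 'Song B']
```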
In addition, the playback unit 106 may check the storage locations of the music files corresponding to given tag information, build a list of those storage locations, and, when playing a sound source matching the tag information, retrieve the sound source from its storage location and play it.
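A sketch of this location lookup is shown below; the index structure and file paths are illustrative assumptions.

```python
# Illustrative sketch: the storage locations for a tag are listed first,
# and each file is then loaded from its own location at play time.
tag_index = {
    "Haeundae": ["/music/local/song_a.mp3", "/music/sdcard/song_c.mp3"],
}

def playback_queue(tag):
    return list(tag_index.get(tag, []))

for path in playback_queue("Haeundae"):
    print("would play:", path)  # an actual player would open and decode the file
```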
The music providing method according to the embodiment shown in FIG. 3 includes steps processed in time series by the music providing system 100 shown in FIGS. 1 and 2. Therefore, even where omitted below, the description of the music providing system 100 shown in FIGS. 1 and 2 given above also applies to the music providing method according to the embodiment shown in FIG. 3.
According to an embodiment of the present invention, the music providing method may include storing tag information corresponding to a sound source in association with the sound source, and playing the sound source.
As one embodiment, referring to FIG. 3, the music providing method may include receiving arbitrary created tag information for a sound source from the user (S301) and storing the received created tag information in association with the sound source (S302).
Step S302 may include recommending one or more pieces of recommended tag information related to the created tag information received from the user, receiving from the user an input for one or more of the recommended tag information, and storing the received recommended tag information in association with the sound source.
Furthermore, the music providing method may further include generating a sound source group according to the created tag information (S303). A sound source group containing one or more sound sources may be generated according to the created tag information received from the user, and the one or more sound sources may include sound sources having tag information identical or related to the created tag information. The sound sources in the generated group may later be included in a playlist and played in a subsequent step.
As another embodiment, when the music providing system 100 communicates with another user terminal separate from the user terminal 20, step S303 may further include providing the generated sound source group to the other user terminal.
In this way, the user's own sound source group can be shared with other users as a gift or recommendation.
In this case, the music providing method may further include charging the other user terminal a fee for the provision of the sound source group. The charging step may include comparing the list of sound sources already held by the other user terminal with the sound sources in the group and charging only for the sound sources not yet held.
In addition, the music providing method may include playing the sound sources included in the sound source group (S304).
As one embodiment of step S304, the music providing method may include receiving a selection input for a sound source group from the user, including the sound sources of the selected group in a playlist, and playing the playlist.
As another embodiment of playing a sound source, the music providing method may play the sound source by receiving an input of tag information from the user, including in a playlist the sound sources having tag information identical or related to the received tag information, and playing the playlist.
The music providing method according to the embodiment described with reference to FIG. 3 may also be implemented in the form of a recording medium containing computer-executable instructions, such as program modules executed by a computer. Computer-readable media may be any available media accessible by a computer and include both volatile and nonvolatile media and removable and non-removable media. Computer-readable media may also include both computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media.
The music providing method according to an embodiment of the present invention may also be implemented as a computer program (or computer program product) containing computer-executable instructions. The computer program contains programmable machine instructions processed by a processor and may be written in a high-level programming language, an object-oriented programming language, an assembly language, a machine language, or the like. The computer program may be recorded on a tangible computer-readable recording medium (for example, a memory, a hard disk, a magnetic or optical medium, or a solid-state drive (SSD)).
Accordingly, the music providing method according to an embodiment of the present invention may be implemented by executing the computer program described above on a computing device. The computing device may include at least some of a processor, a memory, a storage device, a high-speed interface connected to the memory and to a high-speed expansion port, and a low-speed interface connected to a low-speed bus and to the storage device. Each of these components is connected to the others using various buses and may be mounted on a common motherboard or in another suitable manner.
The processor can process instructions within the computing device, for example instructions stored in the memory or the storage device for displaying graphic information for providing a graphical user interface (GUI) on an external input/output device, such as a display connected to the high-speed interface. In other embodiments, multiple processors and/or multiple buses may be used together with multiple memories and memory types as appropriate. The processor may also be implemented as a chipset of chips including a plurality of independent analog and/or digital processors.
The memory stores information within the computing device. In one example, the memory may consist of a volatile memory unit or a set of such units. In another example, the memory may consist of a nonvolatile memory unit or a set of such units. The memory may also be another form of computer-readable medium, such as a magnetic or optical disk.
The storage device can provide a large amount of storage space to the computing device. The storage device may be a computer-readable medium, or a configuration including such a medium, and may include, for example, devices in a storage area network (SAN) or other configurations; it may be a floppy disk device, a hard disk device, an optical disk device, a tape device, flash memory, or another similar semiconductor memory device or device array.
The foregoing description of the present invention is for illustration, and a person of ordinary skill in the art to which the present invention pertains will understand that it can easily be modified into other specific forms without changing the technical spirit or essential features of the present invention. The embodiments described above should therefore be understood as illustrative in all respects and not restrictive. For example, each component described as a single unit may be implemented in a distributed manner, and likewise components described as distributed may be implemented in a combined form.
The scope of the present invention is indicated by the claims below rather than by the detailed description above, and all changes or modifications derived from the meaning and scope of the claims and their equivalents should be construed as falling within the scope of the present invention.

Claims (18)

1. A music providing method performed by a music providing system, the method comprising:
    storing tag information corresponding to a sound source in association with the sound source; and
    playing the sound source.

2. The method of claim 1, wherein storing the tag information in association with the sound source comprises:
    receiving arbitrary created tag information for the sound source from a user; and
    storing the received created tag information in association with the sound source.

3. The method of claim 2, wherein storing the created tag information in association with the sound source comprises:
    recommending one or more pieces of recommended tag information related to the created tag information received from the user; and
    receiving from the user an input for one or more of the recommended tag information, and storing the received recommended tag information in association with the sound source.

4. The method of claim 2, wherein storing the created tag information in association with the sound source further comprises:
    generating a sound source group including one or more sound sources according to the created tag information.

5. The method of claim 4, wherein generating the sound source group further comprises:
    providing the generated sound source group to another user terminal.

6. The method of claim 5, wherein providing the generated sound source group to the other user terminal further comprises:
    charging the other user terminal a fee for the provision of the sound source group,
    wherein charging the fee comprises:
    comparing a list of sound sources already held by the other user terminal with the sound sources included in the sound source group, and charging only for the sound sources not held.

7. The method of claim 1, wherein playing the sound source comprises:
    receiving an input of tag information from a user;
    including, in a playlist, sound sources having tag information identical or related to the received tag information; and
    playing the playlist.

8. The method of claim 4, wherein playing the sound source comprises:
    receiving a selection input for the sound source group from a user;
    including the sound sources of the selected sound source group in a playlist; and
    playing the playlist.
9. A music providing system that performs at least one of providing a sound source and playing a sound source, the system comprising:
    a storage unit that stores tag information corresponding to the sound source in association with the sound source; and
    a playback unit that plays the sound source.

10. The music providing system of claim 9, further comprising:
    a tag assignment unit that receives arbitrary created tag information for the sound source from a user,
    wherein the storage unit stores the created tag information received by the tag assignment unit in association with the sound source.

11. The music providing system of claim 10, wherein the tag assignment unit recommends one or more pieces of recommended tag information related to the created tag information received from the user and receives from the user an input for one or more of the recommended tag information, and
    the storage unit stores the received recommended tag information in association with the sound source.

12. The music providing system of claim 10, further comprising:
    a group generation unit that generates a sound source group including one or more sound sources according to the created tag information.

13. The music providing system of claim 12, wherein the group generation unit further comprises:
    a group providing unit that provides the generated sound source group to another user terminal.

14. The music providing system of claim 13, further comprising:
    a charging unit that charges the other user terminal a fee for the provision of the sound source group,
    wherein the charging unit compares a list of sound sources already held by the other user terminal with the sound sources included in the sound source group and charges only for the sound sources not held.

15. The music providing system of claim 9, wherein the playback unit receives an input of tag information from a user, includes in a playlist sound sources having tag information identical or related to the received tag information, and plays the playlist.

16. The music providing system of claim 12, wherein the playback unit receives from a user a selection input for the sound source group, includes the sound sources of the selected sound source group in a playlist, and plays the playlist.
17. A computer program stored on a recording medium and executed by a music providing system to perform the method according to any one of claims 1 to 8.

18. A computer-readable recording medium on which a program for performing the method according to any one of claims 1 to 8 is recorded.
PCT/KR2016/002043 2015-03-06 2016-03-02 Music providing method and music providing system WO2016144032A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/554,710 US20180239577A1 (en) 2015-03-06 2016-03-02 Music providing method and music providing system
JP2017565031A JP2018507503A (en) 2015-03-06 2016-03-02 Music providing method and music providing system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2015-0031502 2015-03-06
KR20150031502 2015-03-06
KR1020160012649A KR101874441B1 (en) 2015-03-06 2016-02-02 Device and method for providing music
KR10-2016-0012649 2016-02-02

Publications (1)

Publication Number Publication Date
WO2016144032A1 true WO2016144032A1 (en) 2016-09-15

Family

ID=56880565

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2016/002043 WO2016144032A1 (en) 2015-03-06 2016-03-02 Music providing method and music providing system

Country Status (1)

Country Link
WO (1) WO2016144032A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060206582A1 (en) * 2003-11-17 2006-09-14 David Finn Portable music device with song tag capture
US20060242661A1 (en) * 2003-06-03 2006-10-26 Koninklijke Philips Electronics N.V. Method and device for generating a user profile on the basis of playlists
US20060293909A1 (en) * 2005-04-01 2006-12-28 Sony Corporation Content and playlist providing method
US20110264495A1 (en) * 2010-04-22 2011-10-27 Apple Inc. Aggregation of tagged media item information
US20110295843A1 (en) * 2010-05-26 2011-12-01 Apple Inc. Dynamic generation of contextually aware playlists


Similar Documents

Publication Publication Date Title
US11210338B2 (en) Systems, methods and apparatus for generating music recommendations based on combining song and user influencers with channel rule characterizations
WO2011059275A2 (en) Method and apparatus for managing data
KR100826959B1 (en) Method and system for making a picture image
US11079918B2 (en) Adaptive audio and video channels in a group exercise class
US20090063459A1 (en) System and Method for Recommending Songs
KR101963753B1 (en) Method and apparatus for playing videos for music segment
CN109241242A (en) A kind of direct broadcasting room topic recommended method, device, server and storage medium
WO2015163552A1 (en) Device for providing image related to replayed music and method using same
CN106331822A (en) Method and device for playing multiple videos and electronic equipment
KR101924205B1 (en) Karaoke system and management method thereof
JP2014085644A (en) Karaoke system
KR101874441B1 (en) Device and method for providing music
KR20150090306A (en) Method for sharing contents among plural terminal, system and apparatus thereof
WO2021045473A1 (en) Loudness normalization method and system
WO2016144032A1 (en) Music providing method and music providing system
WO2012154792A2 (en) Cross-platform portable personal video compositing and media content distribution system
Thomas Library-podcast intersections
Spalding Turning point: The origins of Canadian content requirements for commercial radio
Turner et al. Is Binaural Spatialization the Future of Hip-Hop?
KR20150059219A (en) Method for providing music contents and music contents providing system performing thereof
Hurst Getting started with podcasting
WO2011059276A2 (en) Method and apparatus for managing content
WO2013089310A1 (en) Method, terminal, and recording medium for providing user interface for content service
Geary et al. Using design dimensions to develop a multi-device audio experience through workshops and prototyping
Kemack Goot et al. A Spectrum of Online Rehearsal Applications: A Potential Means for Cultural Connection

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 16761919; Country of ref document: EP; Kind code of ref document: A1)

WWE Wipo information: entry into national phase (Ref document number: 15554710; Country of ref document: US)

ENP Entry into the national phase (Ref document number: 2017565031; Country of ref document: JP; Kind code of ref document: A)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: pct application non-entry in european phase (Ref document number: 16761919; Country of ref document: EP; Kind code of ref document: A1)