CN106775567B - Sound effect matching method and system - Google Patents


Info

Publication number: CN106775567B
Application number: CN201710008897.6A
Authority: CN (China)
Prior art keywords: information, type, audio file, genre, sound effect
Legal status: Expired - Fee Related (the status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN106775567A
Inventor: 钟华
Current assignee: Whaley Technology Co Ltd
Original assignee: Whaley Technology Co Ltd
Application filed by Whaley Technology Co Ltd; published as CN106775567A, granted and published as CN106775567B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16: Sound input; Sound output
    • G06F3/162: Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60: Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/686: Retrieval characterised by using metadata generated manually, e.g. tags, keywords, comments, title or artist information, time, location or usage information, user ratings
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/951: Indexing; Web crawling techniques

Abstract

The embodiments of the invention provide a sound effect matching method and system, belonging to the technical field of audio processing. In the method, the sound effect mode matching an audio file is determined from the attribute information of the audio file, and the audio file is then played in that sound effect mode. The sound effect mode is thus matched and set automatically, without manual operation by the user, improving the user experience.

Description

Sound effect matching method and system
Technical Field
The invention relates to the technical field of audio processing, in particular to a sound effect matching method and system.
Background
With the continuous development of electronic technology, more and more intelligent terminals with an audio playing function, such as smart televisions, smart phones, and personal computers, are entering people's daily lives. When playing an audio file, an intelligent terminal needs to adopt a suitable sound effect mode for the user to truly enjoy the listening experience. For example, when a user plays a song through the terminal, the user can manually select the corresponding sound effect mode according to the genre of the song. However, adjusting the sound effect mode this way is inefficient and gives a poor user experience. Moreover, most ordinary users know little about audio genres and find it difficult to select an appropriate sound effect mode, which results in a poor listening experience.
Disclosure of Invention
In view of the above, the present invention provides a sound effect matching method and system to improve the above problem.
The preferred embodiment of the present invention provides a sound effect matching method, which comprises:
acquiring attribute information of an audio file, wherein the attribute information comprises singer information corresponding to the audio file;
determining a sound effect mode matched with the audio file according to the attribute information;
and playing the audio file according to the sound effect mode.
Another preferred embodiment of the present invention provides a sound effect matching system, which includes:
the attribute acquisition module is used for acquiring attribute information of the audio file, wherein the attribute information comprises singer information corresponding to the audio file;
the sound effect matching module is used for determining a sound effect mode matched with the audio file according to the attribute information;
and the audio playing module is used for playing the audio file according to the sound effect mode.
According to the sound effect matching method and system provided by the embodiments of the invention, the sound effect mode corresponding to an audio file is determined from the audio file's attribute information, and the audio file is played in that sound effect mode. The sound effect mode is thus matched and set automatically, without manual operation by the user, improving the user experience.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the invention and therefore should not be considered as limiting its scope. Those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic diagram of information interaction between a server and a media playing device according to an embodiment of the present invention;
fig. 2 is a schematic block diagram of a media playing device according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart illustrating a sound effect matching method according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a data list stored in a database according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating the sub-steps included in step S103 of the sound effect matching method shown in FIG. 3 according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating sub-steps included in step S103 of the sound effect matching method shown in FIG. 3 according to another embodiment of the present invention;
FIG. 7 is a flowchart illustrating sub-steps included in step S103 of the sound effect matching method shown in FIG. 3 according to another embodiment of the present invention;
fig. 8 is a functional block diagram of a sound effect matching system according to an embodiment of the present invention.
Reference numerals: 100-media playing device; 200-server; 110-memory; 120-processor; 130-audio player; 140-sound effect matching system; 1402-attribute obtaining module; 1404-sound effect matching module; 1406-audio playing module.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are some, but not all, embodiments of the present invention. Thus, the following detailed description of the embodiments, as presented in the figures, is not intended to limit the scope of the claimed invention but merely represents selected embodiments. All other embodiments obtained by a person skilled in the art from the given embodiments without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is an interactive schematic diagram of at least one media playing device 100 communicating with a server 200 through a network according to a preferred embodiment of the present invention. The server 200 may be, but is not limited to, a web server, a database server, and the like. The media playing device 100 may be, but is not limited to, a smart television, a smart phone, a personal computer, a tablet computer, and the like.
Fig. 2 is a block diagram of the media playing device 100 according to an embodiment of the present invention. The media playback device 100 includes a memory 110, a processor 120, an audio player 130, and a sound effect matching system 140. The memory 110, the processor 120 and the audio player 130 are electrically connected directly or indirectly to realize data transmission or interaction. The audio player 130 may be, but is not limited to, a local audio player or a network audio player.
The sound effect matching system 140 includes at least one software functional module, which can be stored in the memory 110 in the form of software or firmware or solidified in the operating system of the media playing device 100. The processor 120 may be an integrated circuit chip with signal processing capability, such as a general-purpose processor or a digital signal processor, and executes the executable modules stored in the memory 110, such as the software functional modules and computer programs included in the sound effect matching system 140. After receiving an execution instruction, the processor 120 executes the executable module; the method executed by the media playing device 100, defined by the flow disclosed in any embodiment of the invention described below, can be applied to, or implemented by, the processor 120.
Fig. 3 is a flowchart of a sound effect matching method applied to the media playing device 100 shown in fig. 1 according to an embodiment of the present invention. It should be noted that the method provided by the present invention is not limited by the specific sequence shown in fig. 3 and described below. The steps shown in fig. 3 are explained in detail below.
Step S101, acquiring attribute information of the audio file.
The audio files described in this embodiment include music files, such as music files in the MP3, WMA, or WAV format. Unless otherwise noted, the audio files mentioned below are music files.
An audio file is usually composed of three parts: a header, audio data, and a trailer. Currently, the attribute information of an audio file is generally stored in the file header. For example, for an audio file in the MP3 format, most of the attribute information is stored in the ID3v2 tag located in the header.
The attribute information of the audio file includes one or more pieces of tag information, such as singer information, year of production, album information, genre type, and the genre type of the album. Each genre type corresponds to one of a plurality of sound effect modes suitable for playing the audio file. The sound effect mode may be, but is not limited to, pop mode (Pop), jazz mode (Jazz), rock mode (Rock), classical mode (Classic), blues mode (Blues), hip-hop mode (Hip-Hop), disco mode (Disco), normal mode (Normal), and so on.
Generally, the attribute information of every audio file is labeled with singer information, but whether the remaining tag information is present varies from file to file. That is, in this embodiment, the attribute information obtained from the header of the audio file includes at least the singer information corresponding to the audio file.
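As a concrete illustration, the tag layout described above can be read with a short stand-alone parser. The following is a minimal sketch for ID3v2.3 text frames only; the frame IDs, the two-encoding handling, and the returned dictionary keys are simplifications, and real files may also use ID3v2.4, compression, or unsynchronisation, which this sketch ignores:

```python
import struct

def read_id3v2_text_frames(data: bytes) -> dict:
    """Extract artist/album/genre/year text frames from an ID3v2.3 tag.

    Minimal sketch: ignores frame flags, compression, and v2.2/v2.4 quirks.
    """
    if data[:3] != b"ID3":
        return {}
    # The tag size is a 28-bit "syncsafe" integer (7 bits per byte).
    size = (data[6] << 21) | (data[7] << 14) | (data[8] << 7) | data[9]
    wanted = {b"TPE1": "artist", b"TALB": "album",
              b"TCON": "genre", b"TYER": "year"}
    frames, pos, end = {}, 10, min(10 + size, len(data))
    while pos + 10 <= end:
        fid = data[pos:pos + 4]
        fsize = struct.unpack(">I", data[pos + 4:pos + 8])[0]
        if fsize == 0 or not fid.strip(b"\x00"):
            break  # reached zero-padding after the last frame
        body = data[pos + 10:pos + 10 + fsize]
        if fid in wanted and body:
            # The first body byte is the text encoding (0 = Latin-1).
            enc = "latin-1" if body[0] == 0 else "utf-16"
            frames[wanted[fid]] = body[1:].decode(enc).rstrip("\x00")
        pos += 10 + fsize
    return frames
```

A lookup based on such output would then use `frames["artist"]` as the singer information that, per the text above, is assumed always to be present.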
In addition, although some existing audio files have their genre type, or the genre type of their album, labeled in the attribute information, this labeling is not reliable. For example, on a network music platform (e.g., Kugou Music), the genre tag information of the music files stored in the background database is added to the attribute information in batches and is not checked file by file. Therefore, determining the sound effect mode of an audio file solely from the genre type indicated in its attribute information is not feasible.
And step S103, determining a sound effect mode matched with the audio file according to the attribute information.
In this embodiment, after the attribute information of the audio file is acquired, a pre-stored database is searched according to the acquired attribute information to determine the sound effect mode matching the audio file. The database may be a terminal database stored in the media playing device 100 or a network database stored in the server 200. When the database is a network database, the media playing device 100 establishes a communication connection with the server 200 through a wireless or wired network. After acquiring the attribute information of the audio file, the media playing device 100 sends the attribute information to the server 200; the server 200 searches the network database according to the received attribute information to determine the sound effect mode of the audio file and returns the found sound effect mode to the media playing device 100.
The database may be updated in real time or periodically. When the database is a terminal database, the media playing device 100 may access the server 200 according to a predetermined period, and download the updated data to the terminal database, or the server 200 pushes the updated data to the media playing device 100 in real time.
It will of course be appreciated that the database is generally preferably a network database. Because the database is stored in the cloud, it does not occupy the local storage space of the terminal, the user can always use the latest data without upgrading a client, and the terminal avoids the hidden system problems an ever-growing local database would bring.
The database stores a data list as shown in fig. 4. Of course, it is understood that the data list is merely exemplary, and in other embodiments, the data stored in the database may be stored in other data structures.
The data list may be obtained through extensive data collection and analysis. Specifically, the published musical works of each singer, including albums and single songs, are collected from the network as completely as possible. Data analysis is then performed on all of a singer's collected works to determine the singer's main genre, that is, the genre to which most or all of the works belong. The works are then divided by year of production to determine the singer's year genre types, and the album genre types are determined from the album information. The year genre type of a given period is the genre to which most or all of the works the singer produced in that period belong.
For example, suppose singer a produced eight albums and two single songs from 2001 to date. Wherein, the genre types of six albums and two single songs are popular music, and the other two albums are classical music. Thus, the mainstream genre of the singer a can be determined as the pop mode. Then, all musical pieces are divided by the year of production. Five pop music albums and one pop music single song are produced in the 00 s, and the genre type of the 00 s is a pop music mode. Three albums and a single song are produced in the 10 s, wherein one album and the single song are popular music, and the other two albums are classical music, so that the genre type of the 10 s is a classical music mode.
In this embodiment, since different audio files contain different amounts of tag information in their attribute information, the way the database is searched according to the attribute information to determine the sound effect mode also differs.
As shown in fig. 5, if the attribute information obtained as described above only includes singer information, step S103 includes the following sub-steps:
Sub-step S131: searching the database according to the singer information to obtain the main genre type corresponding to the singer information.
Sub-step S133: determining the sound effect mode matching the audio file according to the main genre type.
When only singer information is written in the attribute information of the audio file, the corresponding main genre type can be looked up in the database according to the singer information, and the sound effect mode matching the audio file is then determined from it. Although this way of determining the sound effect mode is not absolutely accurate, statistics show that its accuracy is very high; it is also simple to implement, requires no large computational overhead, and is highly practical.
As shown in fig. 6, if the attribute information obtained as described above includes attached tag information in addition to the singer information, step S103 includes sub-steps S125 to S129, described below, before sub-step S131 shown in fig. 5. The attached tag information includes at least one of the production year information and the album information of the audio file. In this embodiment, the album information may be, but is not limited to, an album name.
Sub-step S125: searching the pre-stored database according to the singer information and the attached tag information in the attribute information for the sub-genre type corresponding to the singer information and the attached tag information.
In this embodiment, the sub-genre type is a year genre type or an album genre type. Specifically, when the attached tag information includes only the production year information, the database is searched according to the singer information and the production year information to obtain the singer's year genre type for the corresponding period. When the attached tag information includes only album information, the database is searched according to the singer information and the album information to obtain the corresponding album genre type. When the attached tag information includes both the production year information and the album information, the year genre type and the album genre type are both obtained as above; if the two differ, the album genre type takes precedence.
Of course, in some cases no sub-genre type will be found. Thus, the following sub-steps may be performed.
Sub-step S127: judging whether a sub-genre type is found. If it is found, the following sub-step S129 is performed; if not, the flow proceeds to sub-step S131.
Sub-step S129: determining the sound effect mode matching the audio file directly from the sub-genre type.
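The lookup flow of sub-steps S125 through S133 can be sketched as a tiered fallback: album genre type first, then year genre type, then the singer's main genre type. The nested-dictionary database layout and the "Normal" default for unknown singers below are assumptions for illustration, not part of the patent:

```python
def match_sound_effect(db, singer, year=None, album=None):
    """Resolve the sound effect mode for a track given its tag information.

    db maps singer -> {"main": genre, "decades": {decade: genre},
    "albums": {album: genre}}; this layout is an assumption.
    """
    entry = db.get(singer)
    if entry is None:
        return "Normal"                        # assumed default mode
    if album is not None and album in entry.get("albums", {}):
        return entry["albums"][album]          # album genre type (takes precedence)
    if year is not None:
        decade = year // 10 * 10
        if decade in entry.get("decades", {}):
            return entry["decades"][decade]    # year genre type
    return entry["main"]                       # fall back to main genre type
```

A track from a known album resolves through the album genre type even when the year genre type differs, matching the precedence rule stated above.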
As shown in fig. 7, if the obtained attribute information is labeled, in addition to singer information, with at least one of the genre type information corresponding to the audio file and the genre type information corresponding to the album containing the audio file, step S103 further includes sub-step S141 before sub-step S133 shown in fig. 5 or fig. 6.
Sub-step S141: judging whether the main genre type is consistent with the genre type information labeled in the attribute information. If they are consistent, sub-step S133 is executed; if not, sub-step S143 is executed.
In this embodiment, if the genre type information labeled in the attribute information includes only the genre type of the audio file, it is only necessary to judge whether the main genre type is consistent with that genre type. If it includes only the genre type of the album containing the audio file, it is only necessary to judge whether the main genre type is consistent with that album genre type. If it includes both, it can be judged whether the main genre type is consistent with both pieces of genre type information; alternatively, in another embodiment, it may be judged whether the main genre type is consistent with either of the two.
Sub-step S143: performing a weighted calculation according to a predetermined weighting ratio to determine a final genre type, and determining the sound effect mode matching the audio file according to the final genre type.
As mentioned above, the genre type information labeled in the attribute information of an audio file has low accuracy, so determining the sound effect mode of the audio file directly from that genre type information is not feasible.
When the main genre type determined from the singer information is consistent with the genre type information labeled in the attribute information, the accuracy of the sound effect mode determined from the main genre type is further confirmed. When the two are inconsistent, in this embodiment the final genre type may be determined by a weighted calculation using a predetermined weighting ratio.
In the weighted calculation, the main genre type and the genre type information labeled in the attribute information are each assigned a weight according to the predetermined ratio, and the genre type with the largest total weight is taken as the final genre type. For example, in one embodiment the main genre type has a weight of 60%, the genre type information of the audio file has a weight of 20%, and the genre type information of the album containing the audio file has a weight of 20%; the final genre type is then the main genre type.
It will be appreciated that different weighting ratios may be set for different singers or for music on different platforms. As mentioned above, the genre type information labeled in the attribute information is usually inaccurate, but for the audio files of some singers it may well be accurate. A specific weighting ratio can therefore be set for such singers, for example, 30% for the main genre type, 40% for the genre type information of the audio file, and 30% for the genre type information of the album containing the audio file; or 45% for the main genre type and 55% for the genre type information of the audio file.
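The weighted conversion can be sketched as a vote in which each available source contributes its weight to the genre it proposes, and the genre with the largest total wins. The default 60/20/20 weights below follow the example above; the genre labels are illustrative:

```python
def final_genre(main_genre, track_genre=None, album_genre=None,
                weights=(0.6, 0.2, 0.2)):
    """Weighted calculation of sub-step S143.

    Each source (main genre type, track genre tag, album genre tag) votes
    with its weight; the genre with the largest total weight is returned.
    Per-singer or per-platform ratios can be passed via `weights`.
    """
    totals = {}
    for genre, weight in zip((main_genre, track_genre, album_genre), weights):
        if genre is not None:
            totals[genre] = totals.get(genre, 0.0) + weight
    return max(totals, key=totals.get)
```

With the default ratio, even when the track and album tags agree on a different genre, their combined 40% cannot outvote the 60% main genre type; with the 30/40/30 ratio described for trusted singers, the tag information can.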
Step S105, playing the audio file according to the sound effect mode.
In this embodiment, after the media playing device 100 determines the sound effect mode matched with the audio file, the audio player 130 sets audio playing parameters according to the sound effect mode, and plays the audio file.
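How a sound effect mode translates into concrete audio playing parameters is device-specific and not specified by the patent. As a purely hypothetical illustration, a player such as the audio player 130 might map each mode to a set of equalizer band gains; the preset values below are invented for the sketch:

```python
# Hypothetical 5-band equalizer gains (dB) per sound effect mode.
# These presets are illustrative only, not taken from the patent.
EQ_PRESETS = {
    "Pop":     [2, 1, 0, 1, 2],
    "Classic": [0, 0, 0, -1, -2],
    "Rock":    [4, 2, -1, 2, 4],
    "Normal":  [0, 0, 0, 0, 0],
}

def playing_parameters(mode: str) -> list:
    """Return the equalizer gains for a mode, falling back to a flat curve."""
    return EQ_PRESETS.get(mode, EQ_PRESETS["Normal"])
```

The fallback to the flat "Normal" curve keeps playback well defined even when the matched mode has no preset on the device.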
It should be noted here that the media playing device 100 may also be a combination of devices, such as a PC (personal computer) together with a smart speaker. The PC acquires the attribute information of the audio file and determines the matching sound effect mode from it. The PC then sends the sound effect mode to the smart speaker, and the smart speaker sets its audio playing parameters according to the received sound effect mode and plays the audio file.
In addition, it should be noted that, in other embodiments, the database may also perform data update according to feedback information of the user, so as to further improve the accuracy of the sound effect mode determined by the sound effect matching method.
Fig. 8 is a functional block diagram of a sound effect matching system 140 according to an embodiment of the present invention. The sound effects matching system 140 includes an attribute obtaining module 1402, a sound effects matching module 1404, and an audio playing module 1406. The functional blocks shown in fig. 8 will be described in detail below.
The attribute obtaining module 1402 is configured to obtain attribute information of an audio file, where the attribute information includes singer information corresponding to the audio file.
The sound effect matching module 1404 is configured to determine a sound effect mode matched with the audio file according to the attribute information.
The audio playing module 1406 is configured to play the audio file according to the sound effect mode.
The specific operation method of each functional module described in this embodiment may refer to the description of the corresponding step in the above method embodiment, and is not described in detail here.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
It should be noted that: like reference numbers and letters refer to like items in the above figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (6)

1. A sound effect matching method is characterized by comprising the following steps:
acquiring attribute information of an audio file, wherein the attribute information comprises singer information and attached tag information corresponding to the audio file, and the attached tag information comprises at least one of production age information and album information of the audio file;
determining a sound effect mode matched with the audio file according to the attribute information;
playing the audio file according to the sound effect mode;
the step of determining the sound effect mode matched with the audio file according to the attribute information comprises the following steps:
searching a pre-stored database, according to the singer information and the attached tag information in the attribute information, for a sub-genre type corresponding to the singer information and the attached tag information, wherein the sub-genre type is an era genre type or an album genre type, and the era genre type is the genre type to which most or all of the singer's musical works in that era belong;
judging whether the sub-genre type is found;
if found, determining the sound effect mode matched with the audio file directly through the sub-genre type;
if not found, searching the pre-stored database according to the singer information in the attribute information to obtain a main genre type corresponding to the singer information, wherein the database pre-stores the main genre type of each singer obtained by data analysis of the musical works published by that singer, the musical works being divided according to production era to determine the singer's era genre type, and the album genre type being determined according to the album information, and wherein the main genre type refers to the genre type to which most or all of the singer's musical works belong;
and determining the sound effect mode matched with the audio file according to the main genre type.
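The two-stage lookup of claim 1 can be sketched in Python. This is an illustrative sketch only: all names and data below (`SUB_GENRE_DB`, `MAIN_GENRE_DB`, `EFFECT_MODES`, the sample singer, tags, and genres) are hypothetical and not taken from the patent.

```python
# Hypothetical pre-stored database: sub-genre types keyed by
# (singer, attached tag), main genre type keyed by singer alone.
SUB_GENRE_DB = {
    ("Singer A", "1985"): "classic-rock",   # era genre type
    ("Singer A", "Album X"): "ballad",      # album genre type
}
MAIN_GENRE_DB = {
    "Singer A": "rock",  # genre of most/all of the singer's published works
}
EFFECT_MODES = {
    "classic-rock": "rock-eq",
    "ballad": "soft-eq",
    "rock": "rock-eq",
}

def match_effect_mode(singer: str, tag: str) -> str:
    """Return the sound effect mode matched to an audio file's attributes."""
    # Step 1: search for a sub-genre type by singer + attached tag.
    sub_genre = SUB_GENRE_DB.get((singer, tag))
    if sub_genre is not None:
        # Found: determine the sound effect mode directly from it.
        return EFFECT_MODES[sub_genre]
    # Not found: fall back to the singer's main genre type.
    return EFFECT_MODES[MAIN_GENRE_DB[singer]]

print(match_effect_mode("Singer A", "Album X"))  # sub-genre hit -> soft-eq
print(match_effect_mode("Singer A", "unknown"))  # fallback to main genre
```

A real implementation would key the lookups on structured tag fields (era vs. album) rather than a single string, but the control flow is the same: sub-genre first, main genre as fallback.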
2. The method according to claim 1, wherein the attribute information further includes genre type information, the genre type information being the genre type information corresponding to the audio file itself or to the album in which the audio file is included;
before the step of determining the sound effect mode matched with the audio file through the main genre type, the method further comprises:
judging whether the main genre type is consistent with the genre type information in the attribute information;
if consistent, determining the sound effect mode matched with the audio file directly through the main genre type;
and if inconsistent, converting according to a preset weighting proportion to determine a final genre type, and then determining the sound effect mode matched with the audio file according to the final genre type.
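The patent does not specify how the "preset weighting proportion" conversion works. One plausible reading, sketched below with hypothetical names and values (`GENRE_EQ`, `TAG_WEIGHT`, the example genres and gains), blends the equalizer settings of the two disagreeing genres by a fixed weight:

```python
# Hypothetical per-genre equalizer gains in dB; not from the patent.
GENRE_EQ = {
    "rock": {"bass": 4.0, "mid": 1.0, "treble": 3.0},
    "jazz": {"bass": 2.0, "mid": 3.0, "treble": 1.0},
}
TAG_WEIGHT = 0.7  # assumed preset weighting proportion for the file's own tag

def final_eq(main_genre: str, tag_genre: str) -> dict:
    """Resolve the final EQ when the main genre and the file's genre tag differ."""
    if main_genre == tag_genre:
        # Consistent: use the main genre type directly.
        return GENRE_EQ[main_genre]
    # Inconsistent: blend the two genres' settings by the preset weight.
    main_eq, tag_eq = GENRE_EQ[main_genre], GENRE_EQ[tag_genre]
    return {band: TAG_WEIGHT * tag_eq[band] + (1 - TAG_WEIGHT) * main_eq[band]
            for band in main_eq}

print(final_eq("rock", "jazz"))
```

Other readings are possible (e.g. picking whichever genre carries the larger weight outright); the claim language only requires that a final genre type result from a preset weighting.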
3. The method of claim 1, wherein the database comprises a terminal database or a network database.
4. A sound effect matching system, the system comprising:
the attribute acquisition module is used for acquiring attribute information of an audio file, wherein the attribute information comprises singer information and attached tag information corresponding to the audio file, and the attached tag information comprises at least one of production age information and album information of the audio file;
the sound effect matching module is used for determining a sound effect mode matched with the audio file according to the attribute information, and is specifically used for: searching a pre-stored database, according to the singer information and the attached tag information in the attribute information, for a sub-genre type corresponding to the singer information and the attached tag information, wherein the sub-genre type is an era genre type or an album genre type; judging whether the sub-genre type is found; if found, determining the sound effect mode matched with the audio file directly through the sub-genre type; if not found, searching the pre-stored database according to the singer information in the attribute information to obtain a main genre type corresponding to the singer information; and determining the sound effect mode matched with the audio file according to the main genre type; wherein the database pre-stores the main genre type of each singer obtained by data analysis of the musical works published by that singer, the main genre type being the genre type to which most or all of the singer's musical works belong;
and the audio playing module is used for playing the audio file according to the sound effect mode.
5. The system according to claim 4, wherein the attribute information further includes genre type information, the genre type information being the genre type information corresponding to the audio file itself or to the album in which the audio file is included;
the sound effect matching module is further used for, before determining the sound effect mode matched with the audio file through the main genre type:
judging whether the main genre type is consistent with the genre type information in the attribute information;
if consistent, determining the sound effect mode matched with the audio file directly through the main genre type;
and if inconsistent, converting according to a preset weighting proportion to determine a final genre type, and then determining the sound effect mode matched with the audio file according to the final genre type.
6. The system of claim 4, wherein the database comprises a terminal database or a network database.
CN201710008897.6A 2017-01-05 2017-01-05 Sound effect matching method and system Expired - Fee Related CN106775567B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710008897.6A CN106775567B (en) 2017-01-05 2017-01-05 Sound effect matching method and system

Publications (2)

Publication Number Publication Date
CN106775567A CN106775567A (en) 2017-05-31
CN106775567B true CN106775567B (en) 2020-10-02

Family

ID=58949716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710008897.6A Expired - Fee Related CN106775567B (en) 2017-01-05 2017-01-05 Sound effect matching method and system

Country Status (1)

Country Link
CN (1) CN106775567B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107507634A (en) * 2017-08-22 2017-12-22 维沃移动通信有限公司 A kind of method for playing music and electronic equipment
CN112002296B (en) * 2020-08-24 2023-08-25 广州小鹏汽车科技有限公司 Music playing method, vehicle, server and storage medium
CN112684998A (en) * 2020-12-21 2021-04-20 北京四维智联科技有限公司 Sound effect mode switching method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101197929A (en) * 2006-12-08 2008-06-11 索尼株式会社 Information processing apparatus, display control processing method and display control processing program
CN103117074A (en) * 2013-01-05 2013-05-22 广东欧珀移动通信有限公司 Method and system for automatic adjustment of audio playing parameters
CN103927146A (en) * 2014-04-30 2014-07-16 深圳市中兴移动通信有限公司 Sound effect self-adapting method and device
CN104934048A (en) * 2015-06-24 2015-09-23 小米科技有限责任公司 Sound effect regulation method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2096626A1 (en) * 2008-02-29 2009-09-02 Sony Corporation Method for visualizing audio data
CN104485121A (en) * 2014-11-24 2015-04-01 惠州Tcl移动通信有限公司 Method and system for automatically setting sound effect parameters

Similar Documents

Publication Publication Date Title
CN101256811B (en) Apparatus and method for producing play list
CN101996627B (en) Speech processing apparatus, speech processing method and program
US7613736B2 (en) Sharing music essence in a recommendation system
US20100063975A1 (en) Scalable system and method for predicting hit music preferences for an individual
US20220035858A1 (en) Generating playlists using calendar, location and event data
US9576050B1 (en) Generating a playlist based on input acoustic information
CN101520808A (en) Method for visualizing audio data
CN100380367C (en) Electronic appliance having music database and method of forming such music database
KR101942459B1 (en) Method and system for generating playlist using sound source content and meta information
CN106775567B (en) Sound effect matching method and system
CN102165527B (en) Initialising of a system for automatically selecting content based on a user's physiological response
US20190294690A1 (en) Media content item recommendation system
WO2020015411A1 (en) Method and device for training adaptation level evaluation model, and method and device for evaluating adaptation level
KR101336846B1 (en) Contents Search Service Providing Method, Search Server and Search System Including that
CN111723289B (en) Information recommendation method and device
EP2096558A1 (en) Method for generating an ordered list of content items
Jun et al. Social mix: automatic music recommendation and mixing scheme based on social network analysis
CN101925897A (en) Method of suggesting accompaniment tracks for synchronised rendering with content data item
CN110532419B (en) Audio processing method and device
CN113032616A (en) Audio recommendation method and device, computer equipment and storage medium
CN109710797B (en) Audio file pushing method and device, electronic device and storage medium
EP3798865A1 (en) Methods and systems for organizing music tracks
Uno et al. MALL: A life log based music recommendation system and portable music player
KR20190009821A (en) Method and system for generating playlist using sound source content and meta information
WO2015176116A1 (en) System and method for dynamic entertainment playlist generation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201002

Termination date: 20210105