CN111008298A - Method, device, system, equipment and storage medium for searching song accompaniment

Info

Publication number
CN111008298A
CN111008298A (application CN201911236421.3A)
Authority
CN
China
Prior art keywords
audio
accompaniment
song
name
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911236421.3A
Other languages
Chinese (zh)
Inventor
阮陈贵
张超论
李文
潘学基
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd filed Critical Guangzhou Kugou Computer Technology Co Ltd
Priority to CN201911236421.3A
Publication of CN111008298A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 - Information retrieval of audio data
    • G06F16/63 - Querying
    • G06F16/635 - Filtering based on additional data, e.g. user or group profiles
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 - Information retrieval of audio data
    • G06F16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/686 - Retrieval using information manually generated, e.g. tags, keywords, comments, title or artist information, time, location or usage information, user ratings

Abstract

The application discloses a method, a device, a system, equipment and a storage medium for searching for song accompaniment, and belongs to the technical field of computers. The method comprises the following steps: receiving a singing instruction; acquiring a song name and a version identification of a target song audio; sending an accompaniment acquisition request carrying the song name and the version identification of the target song audio to a server, wherein the accompaniment acquisition request is used for instructing the server to search for the accompaniment audio corresponding to the target song audio; and receiving the accompaniment audio corresponding to the target song audio sent by the server. With the method and the device, the version of the accompaniment audio provided to the terminal can be matched with the version of the song audio.

Description

Method, device, system, equipment and storage medium for searching song accompaniment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a system, a device, and a storage medium for searching for song accompaniment.
Background
At present, the rise of music platforms such as live-streaming platforms and karaoke platforms has actively stimulated users' enthusiasm for singing.
In the related art, a client supporting both listening to and singing song audio is installed and run on a terminal device. A user logs in to the client, selects a song audio on the client's homepage to listen to, and can click an accompaniment button if the user wants to sing that song. When the terminal detects the user's click operation, it sends an accompaniment acquisition request carrying the song name to the server. The server returns the accompaniment audio of the song to the terminal according to the song name, and the client plays the accompaniment audio. While playing the accompaniment audio, the terminal records the user's voice audio and synthesizes the accompaniment audio and the voice audio into singing audio.
In the course of implementing the present application, the inventors found that the related art has at least the following problems:
one song name may correspond to multiple versions of song audio and accompaniment audio, but the server feeds back only one accompaniment audio to the terminal when searching by song name alone. The accompaniment audio provided to the terminal may therefore not match the version of the song audio the user listened to; for example, the user listens to the recording-studio version of a song, but the accompaniment found is the concert version.
Disclosure of Invention
In order to solve the technical problems in the related art, the embodiments of the present application provide a method, an apparatus, a system, a device, and a storage medium for searching for song accompaniment. The technical solutions are as follows:
in a first aspect, a method for searching for song accompaniment is provided, the method comprising:
receiving a singing instruction;
acquiring a song name and a version identification of a target song audio;
sending an accompaniment acquisition request carrying the song name and the version identification of the target song audio to a server, wherein the accompaniment acquisition request is used for instructing the server to search for the accompaniment audio corresponding to the target song audio;
and receiving the accompaniment audio corresponding to the target song audio sent by the server.
Optionally, the accompaniment acquisition request further carries the name of the singer of the target song audio;
the sending of the accompaniment acquisition request carrying the song name and the version identifier of the target song audio to the server includes:
and sending an accompaniment acquisition request carrying the song name, the singer name and the version identification of the target song audio to a server.
In a second aspect, a method for searching song accompaniment is provided, and applied to a server, the method includes:
receiving an accompaniment acquisition request which is sent by a terminal and carries a song name and a version identification of a target song audio;
acquiring the accompaniment audio corresponding to the song name and the version identification of the target song audio based on the pre-stored corresponding relationship among the song name, the version identification and the accompaniment audio;
and sending the accompaniment audio to the terminal.
Optionally, the accompaniment acquisition request further carries the name of the singer of the target song audio;
the acquiring the accompaniment audio corresponding to the song name and the version identification of the target song audio based on the pre-stored corresponding relationship among the song name, the version identification and the accompaniment audio comprises the following steps:
and acquiring the accompaniment audio corresponding to the song name, the singer name and the version identification of the target song audio based on the pre-stored corresponding relationship among the song name, the singer name, the version identification and the accompaniment audio.
Optionally, the method further includes:
acquiring a plurality of song audios and a plurality of accompaniment audios of the same song name;
determining at least one accompaniment audio corresponding to each song audio in the plurality of song audio and the plurality of accompaniment audio based on an audio alignment algorithm;
selecting accompaniment audio bound with the song audio from at least one accompaniment audio corresponding to each song audio;
and adding the song name, the version identification and the bound accompaniment audio of each song audio into the corresponding relation among the song name, the version identification and the accompaniment audio.
Optionally, the selecting the accompaniment audio bound to the song audio from the at least one accompaniment audio corresponding to each song audio includes:
and selecting the accompaniment audio bound with the song audio in at least one accompaniment audio corresponding to each song audio according to reference information of each accompaniment audio, wherein the reference information comprises one or more information of attribute information integrity, playing times and storage duration.
Optionally, in the at least one accompaniment audio corresponding to each song audio, selecting an accompaniment audio bound to the song audio according to reference information of each accompaniment audio, including:
for each song audio, determining the attribute information integrity of each accompaniment audio in at least one accompaniment audio corresponding to the song audio;
if the determined attribute information integrity has the unique highest attribute information integrity, determining the accompaniment audio corresponding to the unique highest attribute information integrity as the accompaniment audio bound with the song audio;
if a plurality of highest attribute information integrity degrees exist in the determined attribute information integrity degrees, acquiring the playing times of the accompaniment audio corresponding to the highest attribute information integrity degrees;
if the unique highest playing times exist in the playing times of the accompaniment audios corresponding to the acquired multiple highest attribute information integrity degrees, determining the accompaniment audio corresponding to the unique highest playing times as the accompaniment audio bound with the song audio;
if multiple highest playing times exist in the playing times of the accompaniment audio corresponding to the acquired multiple highest attribute information integrity degrees, the storage duration of the accompaniment audio corresponding to the multiple highest playing times is acquired, and the accompaniment audio with the longest storage duration is determined as the accompaniment audio bound with the song audio.
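The tie-break cascade above (highest attribute-information integrity, then highest playing times, then longest storage duration) can be sketched as follows. This is a hypothetical Python sketch; the field names `completeness`, `play_count` and `storage_days` are illustrative and not specified by the application.

```python
def select_bound_accompaniment(candidates):
    """candidates: list of dicts describing accompaniment audios.
    Returns the one to bind to the song audio."""
    # Keep only the candidates with the highest attribute-information
    # integrity; a unique highest wins outright.
    best = max(c["completeness"] for c in candidates)
    pool = [c for c in candidates if c["completeness"] == best]
    if len(pool) == 1:
        return pool[0]
    # Tie: fall back to the highest playing times.
    best = max(c["play_count"] for c in pool)
    pool = [c for c in pool if c["play_count"] == best]
    if len(pool) == 1:
        return pool[0]
    # Still tied: bind the accompaniment with the longest storage duration.
    return max(pool, key=lambda c: c["storage_days"])
```

Each stage narrows the candidate pool and returns early as soon as a unique winner emerges, mirroring the order of the conditions above.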
In a third aspect, an apparatus for searching song accompaniment is provided, the apparatus comprising:
the receiving module is used for receiving a singing instruction;
the acquisition module is used for acquiring the song name and the version identification of the target song audio;
the sending module is used for sending an accompaniment acquisition request carrying the song name and the version identification of the target song audio to a server, wherein the accompaniment acquisition request is used for instructing the server to search for the accompaniment audio corresponding to the target song audio;
and the accompaniment audio receiving module is used for receiving the accompaniment audio corresponding to the target song audio sent by the server.
Optionally, the accompaniment acquisition request further carries the name of the singer of the target song audio;
and the sending module is used for sending an accompaniment acquisition request carrying the song name, the singer name and the version identification of the target song audio to a server.
In a fourth aspect, there is provided an apparatus for searching for song accompaniment, the apparatus comprising:
the system comprises a receiving module, a processing module and a processing module, wherein the receiving module is used for receiving an accompaniment acquisition request which is sent by a terminal and carries a song name and a version identification of a target song audio;
the acquisition module is used for acquiring the accompaniment audio corresponding to the song name and the version identification of the target song audio based on the pre-stored corresponding relationship among the song name, the version identification and the accompaniment audio;
and the sending module is used for sending the accompaniment audio to the terminal.
Optionally, the accompaniment acquisition request further carries the name of the singer of the target song audio;
the acquisition module is used for acquiring the accompaniment audio corresponding to the song name, the singer name and the version identification of the target song audio based on the pre-stored corresponding relationship among the song name, the singer name, the version identification and the accompaniment audio.
Optionally, the apparatus for finding song accompaniment further includes an adding module, configured to:
acquiring a plurality of song audios and a plurality of accompaniment audios of the same song name;
determining at least one accompaniment audio corresponding to each song audio in the plurality of song audio and the plurality of accompaniment audio based on an audio alignment algorithm;
selecting accompaniment audio bound with the song audio from at least one accompaniment audio corresponding to each song audio;
and adding the song name, the version identification and the bound accompaniment audio of each song audio into the corresponding relation among the song name, the version identification and the accompaniment audio.
Optionally, the adding module is configured to select, in at least one accompaniment audio corresponding to each song audio, an accompaniment audio bound to the song audio according to reference information of each accompaniment audio, where the reference information includes one or more information of integrity of attribute information, playing times, and storage duration.
Optionally, the adding module is configured to:
for each song audio, determining the attribute information integrity of each accompaniment audio in at least one accompaniment audio corresponding to the song audio;
if the determined attribute information integrity has the unique highest attribute information integrity, determining the accompaniment audio corresponding to the unique highest attribute information integrity as the accompaniment audio bound with the song audio;
if a plurality of highest attribute information integrity degrees exist in the determined attribute information integrity degrees, acquiring the playing times of the accompaniment audio corresponding to the highest attribute information integrity degrees;
if the unique highest playing times exist in the playing times of the accompaniment audios corresponding to the acquired multiple highest attribute information integrity degrees, determining the accompaniment audio corresponding to the unique highest playing times as the accompaniment audio bound with the song audio;
if multiple highest playing times exist in the playing times of the accompaniment audio corresponding to the acquired multiple highest attribute information integrity degrees, the storage duration of the accompaniment audio corresponding to the multiple highest playing times is acquired, and the accompaniment audio with the longest storage duration is determined as the accompaniment audio bound with the song audio.
In a fifth aspect, a system for searching song accompaniment is provided, the system comprising a terminal and a server, wherein:
the terminal is used for receiving a singing instruction; acquiring a song name and a version identification of a target song audio; sending an accompaniment acquisition request carrying the song name and the version identification of the target song audio to the server; receiving accompaniment audio corresponding to the target song audio sent by the server;
the server is used for receiving an accompaniment acquisition request which is sent by the terminal and carries the song name and the version identification of the target song audio; acquiring the accompaniment audio corresponding to the song name and the version identification of the target song audio based on the pre-stored corresponding relationship among the song name, the version identification and the accompaniment audio; and sending the accompaniment audio to the terminal.
In a sixth aspect, a computer device is provided, the computer device comprising a processor and a memory, the memory having stored therein at least one instruction, the at least one instruction being loaded and executed by the processor to implement the method of searching for song accompaniment.
In a seventh aspect, a computer-readable storage medium having at least one instruction stored therein is provided, the at least one instruction being loaded and executed by a processor to implement a method of searching for song accompaniment.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
according to the method provided by the embodiment of the application, after the terminal receives the singing instruction, the terminal sends the request for acquiring the accompaniment audio to the server. And the server searches the accompaniment audio corresponding to the target song audio according to the pre-stored corresponding relation among the song name, the version identification and the accompaniment audio, and sends the accompaniment audio to the terminal. And after receiving the accompaniment audio, the terminal plays the accompaniment audio. The application can enable the version of the accompaniment audio provided for the terminal to be matched with the version of the song audio.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic illustration of an implementation environment provided by an embodiment of the present application;
FIG. 2 is a flowchart of a method for searching for song accompaniment according to an embodiment of the present application;
FIG. 3 is a flowchart of a method for searching for song accompaniment according to an embodiment of the present application;
FIG. 4 is a flowchart of a method for searching for song accompaniment according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a search song accompaniment according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a search song accompaniment according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a search song accompaniment according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a search song accompaniment according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an apparatus for searching song accompaniment according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an apparatus for searching song accompaniment according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The method for searching for song accompaniment provided by the embodiments of the present application can be implemented by computer equipment, which may be a terminal or a server. The terminal can be a mobile terminal such as a mobile phone, a tablet computer or a notebook computer, or a fixed terminal such as a desktop computer. The server may be a single server or a server group. If it is a single server, that server is responsible for all processing in the following scheme; if it is a server group, different servers in the group may be responsible for different parts of the processing, and the specific allocation can be set arbitrarily by technical personnel according to actual needs, which is not described again here.
As shown in fig. 1, in the method provided by the embodiment of the present application, when a user opens a music platform, the user can search for any song audio the user wants to listen to. After finding it, the user can click to play the song audio, so that the current page jumps to the playing page and the terminal plays the song audio. Lyrics can be displayed on the playing page, and a karaoke button is provided on it. When the user wants to sing, the user can click the karaoke button; at this moment, the terminal receives a karaoke instruction, stops playing the song audio and jumps to a karaoke page. Then, after the terminal receives the accompaniment audio sent by the server, the terminal plays the accompaniment audio. The technician can also set a play button on the karaoke page, and the user can click this button to play the accompaniment audio.
Fig. 2 is a flowchart of a terminal side in a method for searching for song accompaniment according to an embodiment of the present application.
Referring to fig. 2, the embodiment includes:
step 201, receiving a singing instruction;
step 202, acquiring a song name and a version identification of a target song audio;
step 203, sending an accompaniment acquisition request carrying the song name and the version identification of the target song audio to the server, wherein the accompaniment acquisition request is used for instructing the server to search for the accompaniment audio corresponding to the target song audio;
and step 204, receiving accompaniment audio corresponding to the target song audio sent by the server.
Optionally, the embodiment of the present application further includes: step 205, playing the accompaniment audio and recording the singing audio.
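The request body assembled by the terminal in steps 202-203 might look as follows. This is a hypothetical Python sketch assuming a JSON payload; the application does not fix the transport or the field names, and the singer name is only optionally carried.

```python
import json

def build_accompaniment_request(song_name, version_id, singer_name=None):
    """Build the body of an accompaniment acquisition request carrying the
    song name and version identification (and, optionally, the singer name)."""
    payload = {"song_name": song_name, "version_id": version_id}
    if singer_name is not None:
        # Optional field, per the variant that also carries the singer name.
        payload["singer_name"] = singer_name
    return json.dumps(payload)
```

The terminal would send this body to the server and then receive the matching accompaniment audio in reply (step 204).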
Fig. 3 is a flowchart of a server side in a method for searching for song accompaniment according to an embodiment of the present application.
Referring to fig. 3, the embodiment includes:
step 301, receiving an accompaniment acquisition request which is sent by a terminal and carries a song name and a version identification of a target song audio;
step 302, acquiring the accompaniment audio corresponding to the song name and the version identification of the target song audio based on the pre-stored corresponding relationship among the song name, the version identification and the accompaniment audio;
step 303, sending the accompaniment audio to the terminal.
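The server-side lookup of steps 301-303 can be sketched as a mapping keyed on both the song name and the version identification. This is an illustrative Python sketch; the application does not prescribe how the pre-stored corresponding relationship is represented, and the table contents below are made up.

```python
# Pre-stored corresponding relationship among song name, version
# identification and accompaniment audio (illustrative contents).
ACCOMPANIMENT_TABLE = {
    ("Wont Cry", "studio"): "wont_cry_studio_acc.mp3",
    ("Wont Cry", "concert"): "wont_cry_concert_acc.mp3",
}

def find_accompaniment(song_name, version_id):
    # Step 302: key on both fields, so the returned accompaniment
    # matches the version of the song audio the user listened to.
    return ACCOMPANIMENT_TABLE.get((song_name, version_id))
```

Because the version identification is part of the key, two versions of the same song name resolve to different accompaniment audios, which is the core of the scheme.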
According to the method provided by the embodiment of the application, after the terminal receives the singing instruction, the terminal sends the request for acquiring the accompaniment audio to the server. And the server searches the accompaniment audio corresponding to the target song audio according to the pre-stored corresponding relation among the song name, the version identification and the accompaniment audio, and sends the accompaniment audio to the terminal. And after receiving the accompaniment audio, the terminal plays the accompaniment audio. The application can enable the version of the accompaniment audio provided for the terminal to be matched with the version of the song audio.
Fig. 4 is a flowchart illustrating interaction between a terminal and a server in a method for searching for song accompaniment according to an embodiment of the present application. Referring to fig. 4, the embodiment includes:
step 401, the terminal receives a singing instruction.
The singing instruction indicates that the user wants to sing the currently played song audio; for example, it may be a karaoke instruction.
In implementation, the user clicks the karaoke button while the target song audio is playing, or clicks the karaoke button directly when selecting the target song audio, and the terminal receives the singing instruction.
Specifically, the user opens the music platform and searches for the desired song audio on it. After finding the song audio to be played, the user can play it, so that the current page jumps to the playing page and the terminal plays the song audio. A karaoke button is provided on the playing page. When the user clicks the karaoke button, the terminal receives the singing instruction corresponding to the played song audio, stops playing the song audio and jumps to the karaoke page corresponding to that song audio.
In implementation, the user can also long-press the lyrics of the target song audio, so that the terminal receives the singing instruction corresponding to the played song audio.
In step 402, the terminal obtains the song name and the version identification of the target song audio.
The version identification is used to indicate the singing version of the song audio. For example, the version identification may distinguish a Hong Kong version from a mainland version, or a Mandarin version from a Cantonese version. The same song name may correspond to multiple versions of song audio.
In implementation, after the terminal receives the singing instruction, the terminal reads the information of the played target song audio and acquires the song name and the version identification of the target song audio.
Optionally, after the terminal receives the singing instruction, the terminal reads the information of the played target song audio and obtains the song name, the singer name and the version identifier of the target song audio.
Step 403, the terminal sends an accompaniment acquisition request carrying the song name and the version identification of the target song audio to the server.
The accompaniment acquisition request is used for instructing the server to search for the accompaniment audio corresponding to the target song audio and to send the accompaniment audio to the terminal.
In implementation, after the terminal obtains the song name and the version identifier of the target song audio, the terminal sends an accompaniment acquisition request carrying the song name and the version identifier of the target song audio to the server, and requests the server to send the accompaniment audio corresponding to the target song audio to the terminal.
Optionally, the accompaniment acquisition request may also carry the name of the singer of the target song audio. In implementation, after the terminal obtains the song name, the singer name and the version identification of the target song audio, the terminal sends an accompaniment acquisition request carrying the song name, the singer name and the version identification of the target song audio to the server, and the server is requested to send the accompaniment audio corresponding to the target song audio to the terminal.
In step 404, the server receives an accompaniment acquisition request which is sent by the terminal and carries the song name and the version identification of the target song audio.
In implementation, the server receives an accompaniment acquisition request which is sent by the terminal and carries a song name and a version identification of the target song audio, and acquires the song name and the version identification of the target song audio.
Optionally, the accompaniment acquisition request may also carry the name of the singer of the target song audio. In implementation, the server receives an accompaniment acquisition request which is sent by the terminal and carries the song name, the singer name and the version identification of the target song audio, and acquires the song name, the singer name and the version identification of the target song audio.
Step 405, the server acquires the accompaniment audio corresponding to the song name and the version identification of the target song audio based on the pre-stored corresponding relationship among the song name, the version identification and the accompaniment audio.
In implementation, after acquiring the song name and the version identifier of the target song audio, the server acquires the pre-stored corresponding relationship between the song name, the version identifier and the accompaniment audio, and searches for the accompaniment audio corresponding to the song name and the version identifier of the target song audio according to the pre-stored corresponding relationship between the song name, the version identifier and the accompaniment audio.
Optionally, after obtaining the song name, the singer name and the version identification of the target song audio, the server obtains the pre-stored corresponding relationship among the song name, the singer name, the version identification and the accompaniment audio, and searches for the accompaniment audio corresponding to the song name, the singer name and the version identification of the target song audio according to this corresponding relationship.
It should be noted that the same song may be sung by many singers, so the song audio found by the server according to the song name alone may be sung by a different singer than the singer of the target song audio. Carrying the singer name as well allows the song audio to be located accurately from the song name and the singer name together.
The server treats the song audio and accompaniment audio sharing the same song name and singer name as one classification and stores each classification in an entry; that is, each entry stores the song audio and accompaniment audio of one song-name-and-singer-name pair, and the identification of the entry is formed by the song name and the singer name. As shown in fig. 5, the server contains multiple entries, such as Jay Chou - Won't Cry, Jay Chou - Confession Balloon, or Jacky Cheung - Kiss Goodbye. As shown in fig. 6, the Jay Chou - Won't Cry entry contains multiple groups, such as the Jay Chou - Won't Cry - concert version of the song, the Jay Chou - Won't Cry - original accompaniment, and so on. Specifically, when the server receives an accompaniment acquisition request which is sent by the terminal and carries the song name, the singer name and the version identification of the target song audio, the server searches for the entry corresponding to the song name and the singer name. After finding that entry, the server determines the song audio corresponding to the song name, the singer name and the version identification according to the version identification of the target song audio.
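The entry layout described above, with one entry per song-name-and-singer-name pair and version-identified groups inside each entry, can be sketched as nested mappings. This is a hypothetical Python sketch; the entry contents and file names are illustrative.

```python
# One entry per (song name, singer name); inside each entry, groups
# keyed by version identification (illustrative contents).
ENTRIES = {
    ("Wont Cry", "Jay Chou"): {
        "concert": {"song": "wc_concert.mp3", "accompaniment": "wc_concert_acc.mp3"},
        "studio": {"song": "wc_studio.mp3", "accompaniment": "wc_studio_acc.mp3"},
    },
}

def lookup_accompaniment(song_name, singer_name, version_id):
    # First locate the entry by song name and singer name, then pick
    # the version-identified group inside it.
    entry = ENTRIES.get((song_name, singer_name), {})
    group = entry.get(version_id)
    return group["accompaniment"] if group else None
```

The two-level lookup mirrors the two-step search: entry first, then version within the entry.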
Optionally, the server obtains a plurality of song audios and a plurality of accompaniment audios of the same song name, and determines at least one accompaniment audio corresponding to each song audio in the plurality of song audios and the plurality of accompaniment audios based on an audio alignment algorithm. The server selects accompaniment audio bound with the song audio from at least one accompaniment audio corresponding to each song audio, and adds the song name, the version identification and the bound accompaniment audio of each song audio to the corresponding relation among the song name, the version identification and the accompaniment audio.
The plurality of song audios refers to song audios with a plurality of different version identifiers, and the plurality of accompaniment audios refers to accompaniment audios with a plurality of different version identifiers.
Song audios with different version identifiers and accompaniment audios with different version identifiers differ somewhat in duration. When a song audio with a certain version identifier and an accompaniment audio with a certain version identifier are aligned in time, the song audio and the accompaniment audio can be regarded as corresponding to the same version identifier.
The audio alignment algorithm may take one song audio and a plurality of accompaniment audios as input, and output the accompaniment audios that are aligned with the song audio.
In an implementation, after acquiring a plurality of song audios and a plurality of accompaniment audios with the same song name, the server inputs each song audio together with the plurality of accompaniment audios into the audio alignment algorithm, thereby determining at least one accompaniment audio corresponding to each song audio. The server then selects the accompaniment audio to bind to the song audio from the at least one accompaniment audio corresponding to each song audio, and adds the song name, the version identifier, and the bound accompaniment audio of each song audio to the correspondence among song name, version identifier, and accompaniment audio.
Specifically, as shown in fig. 7, after inputting the song audio of the Zhou Jielun - Won't Cry - concert version and the plurality of accompaniment audios into the audio alignment algorithm, the server determines three accompaniment audios whose durations equal that of the song audio, namely concert accompaniment 1, concert accompaniment 2, and concert accompaniment 3.
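As a minimal sketch of the alignment step, assuming alignment is decided by comparing durations within a small tolerance (the patent does not specify the actual algorithm), with illustrative durations in seconds:

```python
# Hypothetical stand-in for the audio alignment algorithm: an accompaniment
# is considered aligned with the song when their durations match within a
# tolerance. All durations below are made up for illustration.
def aligned_accompaniments(song_duration, accompaniments, tolerance=0.5):
    """Return the names of accompaniment audios whose duration matches
    the song audio's duration within the given tolerance (seconds)."""
    return [name for name, duration in accompaniments.items()
            if abs(duration - song_duration) <= tolerance]

# One song audio against several candidate accompaniments, as in fig. 7.
candidates = aligned_accompaniments(
    272.0,
    {"concert accompaniment 1": 272.0,
     "concert accompaniment 2": 272.3,
     "concert accompaniment 3": 271.8,
     "studio accompaniment": 215.0})
```

With these illustrative durations, the three concert accompaniments are retained and the studio accompaniment is rejected, mirroring the fig. 7 result.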
Optionally, among the at least one accompaniment audio corresponding to each song audio, the server selects the accompaniment audio to bind to the song audio according to reference information of each accompaniment audio, where the reference information includes one or more of attribute information integrity, play count, and storage duration.
The attribute information may include, besides the currently bound song audio, one or more of: lyrics of the accompaniment audio, scoring of the user's singing when the user sings along, other song audios bound to the accompaniment audio, a silent version of the accompaniment audio, a sung version of the accompaniment audio, and the like. The attribute information integrity is determined by how many of these attribute items are present. For example, if only the original version of the accompaniment audio exists, the attribute information integrity is 1.
The storage duration refers to how long an accompaniment audio has been stored in the server; the longer the storage duration, the earlier the accompaniment audio was stored.
It should be noted that, because the durations of several accompaniment audios may equal that of a song audio, the audio alignment algorithm can yield at least one accompaniment audio for each song audio. For example, according to the result of the audio alignment algorithm, song A corresponds to 2 accompaniments, a piano version and a violin version.
In implementation, among the at least one accompaniment audio corresponding to each song audio, the server selects the accompaniment audio to bind to the song audio according to the attribute information integrity, play count, and storage duration of each accompaniment audio.
Optionally, for each song audio, the server determines the attribute information integrity of each accompaniment audio among the at least one accompaniment audio corresponding to the song audio. If a unique highest attribute information integrity exists, the accompaniment audio with that integrity is determined as the accompaniment audio bound to the song audio. If multiple accompaniment audios share the highest attribute information integrity, the server obtains their play counts; if a unique highest play count exists among them, the accompaniment audio with that play count is determined as the accompaniment audio bound to the song audio. If multiple accompaniment audios also share the highest play count, the server obtains their storage durations and determines the accompaniment audio with the longest storage duration as the accompaniment audio bound to the song audio.
It should be noted that the higher the attribute information integrity of an accompaniment audio, the more complete its attributes, and its quality can be regarded as better than that of the other accompaniment audios. When multiple accompaniment audios have equal attribute information integrity, the one with the highest play count is selected, on the assumption that the most-played accompaniment audio has the best audio quality. When multiple accompaniment audios also have equal play counts, the one with the longest storage duration is regarded as having the best quality.
Specifically, in fig. 7, the Zhou Jielun - Won't Cry - concert version song corresponds to three concert accompaniments: concert accompaniment 1, with attribute information integrity 5, play count 200, and storage duration 1 year; concert accompaniment 2, with attribute information integrity 5, play count 200, and storage duration 2 years; and concert accompaniment 3, with attribute information integrity 4, play count 300, and storage duration 5 years. According to the above rules, concert accompaniment 2 is optimal. As shown in fig. 8, the Zhou Jielun - Won't Cry - concert version song is bound to concert accompaniment 2 and stored in the server.
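The tie-breaking rule just described (highest attribute information integrity first, then highest play count, then longest storage duration) amounts to a lexicographic maximum over those three values. A sketch using the figures from the fig. 7 example; the tuple layout is an assumption:

```python
# Hypothetical implementation of the binding rule: compare candidates by
# (integrity, play count, storage duration) in that priority order, which
# Python's tuple comparison does lexicographically.
def select_bound_accompaniment(candidates):
    """candidates: list of (name, integrity, play_count, storage_years).
    Returns the name of the accompaniment audio to bind to the song audio."""
    best = max(candidates, key=lambda c: (c[1], c[2], c[3]))
    return best[0]

chosen = select_bound_accompaniment([
    ("concert accompaniment 1", 5, 200, 1),
    ("concert accompaniment 2", 5, 200, 2),
    ("concert accompaniment 3", 4, 300, 5),
])
# chosen is "concert accompaniment 2", matching the result in fig. 7:
# integrity ties at 5, play count ties at 200, storage duration decides.
```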
In this way, the optimal accompaniment audio is selected from the multiple accompaniment audios corresponding to a song audio according to attribute information integrity, play count, and storage duration. Using the optimal accompaniment audio as the one played makes the user more likely to be satisfied with it, improving the user experience. Meanwhile, because the accompaniment audio bound to the song audio is the one of optimal quality, the goal of accurately finding the song's accompaniment audio is achieved.
In step 406, the server sends the accompaniment audio to the terminal.
In implementation, after the server finds the corresponding accompaniment audio, the server sends the accompaniment audio to the terminal.
Step 407, the terminal receives the accompaniment audio corresponding to the target song audio sent by the server.
In implementation, the terminal receives accompaniment audio corresponding to the target song audio sent by the server.
Optionally, the embodiment of the present application further includes:
and step 408, the terminal plays the accompaniment audio and records the singing audio.
After the terminal receives the accompaniment audio corresponding to the target song audio, the terminal may directly play the accompaniment audio. Alternatively, a play button may be provided on the karaoke page; after the user clicks the play button, the terminal receives a play instruction and starts playing the accompaniment audio. It may also be that when the user triggers any position on the karaoke page, the terminal receives a play instruction and starts playing the accompaniment audio.
In implementation, after the terminal receives the accompaniment audio corresponding to the target song audio, the terminal receives a play instruction, starts playing the accompaniment audio, and records the user's singing audio.
According to the method provided by the embodiment of the present application, when the terminal receives a singing instruction, it sends an accompaniment acquisition request to the server. The server obtains the song name and the version identifier from the accompaniment acquisition request, searches the pre-stored correspondence among song name, version identifier, and accompaniment audio for the accompaniment audio corresponding to that song name and version identifier, and sends the accompaniment audio to the terminal, which plays it. The present application thus enables the version of the accompaniment audio provided to the terminal to match the version of the song audio.
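The server-side part of this exchange reduces to a lookup in the pre-stored correspondence keyed by song name and version identifier. A hypothetical sketch; the request format and values are illustrative only:

```python
# Hypothetical server-side handler: given an accompaniment acquisition
# request carrying a song name and version identifier, look up the bound
# accompaniment audio in the pre-stored correspondence.
def handle_accompaniment_request(request, correspondence):
    """request: dict with 'song_name' and 'version_id' fields (assumed).
    correspondence: mapping (song name, version id) -> accompaniment audio."""
    key = (request["song_name"], request["version_id"])
    return correspondence.get(key)  # None means no accompaniment was found

correspondence = {("Won't Cry", "concert"): "concert accompaniment 2"}
request = {"song_name": "Won't Cry", "version_id": "concert"}
accompaniment = handle_accompaniment_request(request, correspondence)
```

A request for a version identifier absent from the correspondence simply yields no accompaniment, which the server could report back to the terminal.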
Based on the same technical concept, an embodiment of the present application further provides an apparatus, which is used for a terminal, and as shown in fig. 9, the apparatus includes:
a receiving module 910, configured to receive a singing instruction;
an obtaining module 920, configured to obtain a song name and a version identifier of a target song audio;
a sending module 930, configured to send an accompaniment acquisition request carrying a song name and a version identifier of the target song audio to a server, where the accompaniment acquisition request is used to instruct the server to search for an accompaniment audio corresponding to the target song audio;
and an accompaniment audio receiving module 940, configured to receive the accompaniment audio corresponding to the target song audio sent by the server.
Optionally, the apparatus further comprises a playing module 950, configured to play the accompaniment audio and record the singing audio.
Optionally, the accompaniment acquisition request further carries the name of the singer of the target song audio;
the sending module 930 is configured to send an accompaniment acquisition request carrying the song name, the singer name, and the version identifier of the target song audio to the server.
Based on the same technical concept, the embodiment of the present application further provides an apparatus for a server, as shown in fig. 10, the apparatus including:
a receiving module 1010, configured to receive an accompaniment acquisition request sent by a terminal and carrying a song name and a version identifier of a target song audio;
an obtaining module 1020, configured to obtain accompaniment audio corresponding to the song name and the version identifier of the target song audio based on a pre-stored correspondence relationship between the song name, the version identifier, and the accompaniment audio;
a sending module 1030, configured to send the accompaniment audio to the terminal.
Optionally, the accompaniment acquisition request further carries the name of the singer of the target song audio;
the obtaining module 1020 is configured to obtain accompaniment audio corresponding to the song name, the singer name, and the version identifier of the target song audio based on a pre-stored correspondence between the song name, the singer name, the version identifier, and the accompaniment audio.
Optionally, the apparatus for finding song accompaniment further includes an adding module, configured to:
acquiring a plurality of song audios and a plurality of accompaniment audios of the same song name;
determining at least one accompaniment audio corresponding to each song audio in the plurality of song audio and the plurality of accompaniment audio based on an audio alignment algorithm;
selecting accompaniment audio bound with the song audio from at least one accompaniment audio corresponding to each song audio;
and adding the song name, the version identification and the bound accompaniment audio of each song audio into the corresponding relation among the song name, the version identification and the accompaniment audio.
Optionally, the adding module is configured to select, in at least one accompaniment audio corresponding to each song audio, an accompaniment audio bound to the song audio according to reference information of each accompaniment audio, where the reference information includes one or more information of integrity of attribute information, playing times, and storage duration.
Optionally, the adding module is configured to:
for each song audio, determining the attribute information integrity of each accompaniment audio in at least one accompaniment audio corresponding to the song audio;
if the determined attribute information integrity has the unique highest attribute information integrity, determining the accompaniment audio corresponding to the unique highest attribute information integrity as the accompaniment audio bound with the song audio;
if a plurality of highest attribute information integrity degrees exist in the determined attribute information integrity degrees, acquiring the playing times of the accompaniment audio corresponding to the highest attribute information integrity degrees;
if the unique highest playing times exist in the playing times of the accompaniment audios corresponding to the acquired multiple highest attribute information integrity degrees, determining the accompaniment audio corresponding to the unique highest playing times as the accompaniment audio bound with the song audio;
if multiple highest playing times exist in the playing times of the accompaniment audio corresponding to the acquired multiple highest attribute information integrity degrees, the storage duration of the accompaniment audio corresponding to the multiple highest playing times is acquired, and the accompaniment audio with the longest storage duration is determined as the accompaniment audio bound with the song audio.
It should be noted that, in the apparatus for searching for song accompaniment provided by the above embodiment, the division into the above functional modules is merely illustrative; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus embodiment and the method embodiment for searching for song accompaniment belong to the same concept; for the specific implementation process, refer to the method embodiment, which is not repeated here.
The embodiment of the application also provides a system for searching song accompaniment, which comprises a terminal and a server, wherein:
the terminal is used for receiving a singing instruction; acquiring a song name and a version identification of a target song audio; sending an accompaniment acquisition request carrying the song name and the version identification of the target song audio to the server; receiving accompaniment audio corresponding to the target song audio sent by the server;
the server is used for receiving an accompaniment acquisition request which is sent by the terminal and carries the song name and the version identification of the target song audio; acquiring the accompaniment audio corresponding to the song name and the version identification of the target song audio based on the pre-stored corresponding relation among the song name, the version identification and the accompaniment audio; and sending the accompaniment audio to the terminal.
Fig. 11 is a schematic structural diagram of a terminal according to an embodiment of the present application, where the terminal may be the terminal in the foregoing embodiments. The terminal 1100 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. Terminal 1100 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and so forth.
In general, terminal 1100 includes: one or more processors 1101 and one or more memories 1102.
Processor 1101 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1101 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1101 may also include a main processor and a coprocessor; the main processor is a processor for processing data in a wake-up state, also called a Central Processing Unit (CPU), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1101 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1101 may further include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
Memory 1102 may include one or more computer-readable storage media, which may be non-transitory. Memory 1102 can also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1102 is used to store at least one instruction for execution by the processor 1101 to implement the method of finding a song accompaniment provided by the method embodiments of the present application.
In some embodiments, the terminal 1100 may further include: a peripheral interface 1103 and at least one peripheral. The processor 1101, the memory 1102, and the peripheral interface 1103 may be connected by buses or signal lines. Each peripheral may be connected to the peripheral interface 1103 by a bus, a signal line, or a circuit board. Specifically, the peripherals include: at least one of a radio frequency circuit 1104, a display screen 1105, a camera assembly 1106, an audio circuit 1107, a positioning component 1108, and a power supply 1109.
The peripheral interface 1103 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1101 and the memory 1102. In some embodiments, the processor 1101, memory 1102, and peripheral interface 1103 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1101, the memory 1102 and the peripheral device interface 1103 can be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 1104 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1104 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1104 converts an electric signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 1104 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1104 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1104 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1105 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1105 is a touch display screen, the display screen 1105 also has the ability to capture touch signals on or over the surface of the display screen 1105. The touch signal may be input to the processor 1101 as a control signal for processing. At this point, the display screen 1105 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1105 may be one, providing the front panel of the terminal 1100; in other embodiments, the display screens 1105 can be at least two, respectively disposed on different surfaces of the terminal 1100 or in a folded design; in still other embodiments, display 1105 can be a flexible display disposed on a curved surface or on a folded surface of terminal 1100. Even further, the display screen 1105 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The Display screen 1105 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and the like.
Camera assembly 1106 is used to capture images or video. Optionally, camera assembly 1106 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each of the rear cameras is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, the main camera and the wide-angle camera are fused to realize panoramic shooting and a VR (Virtual Reality) shooting function or other fusion shooting functions. In some embodiments, camera assembly 1106 may also include a flash. The flash lamp can be a monochrome temperature flash lamp and can also be a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1107 may include a microphone and a speaker. The microphone is used for collecting sound waves of the user and the environment, converting the sound waves into electric signals, and inputting them to the processor 1101 for processing or to the radio frequency circuit 1104 to achieve voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of the terminal 1100. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electric signals from the processor 1101 or the radio frequency circuit 1104 into sound waves. The speaker may be a traditional membrane speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electric signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuitry 1107 may also include a headphone jack.
The positioning component 1108 is used to locate the current geographic position of the terminal 1100 for navigation or LBS (Location Based Service). The positioning component 1108 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 1109 is configured to provide power to various components within terminal 1100. The power supply 1109 may be ac, dc, disposable or rechargeable. When the power supply 1109 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery can also be used to support fast charging techniques.
In some embodiments, terminal 1100 can also include one or more sensors 1110. The one or more sensors 1110 include, but are not limited to: acceleration sensor 1111, gyro sensor 1112, pressure sensor 1113, fingerprint sensor 1114, optical sensor 1115, and proximity sensor 1116.
Acceleration sensor 1111 may detect acceleration levels in three coordinate axes of a coordinate system established with terminal 1100. For example, the acceleration sensor 1111 may be configured to detect components of the gravitational acceleration in three coordinate axes. The processor 1101 may control the display screen 1105 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1111. The acceleration sensor 1111 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1112 may detect a body direction and a rotation angle of the terminal 1100, and the gyro sensor 1112 may cooperate with the acceleration sensor 1111 to acquire a 3D motion of the user with respect to the terminal 1100. From the data collected by gyroscope sensor 1112, processor 1101 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1113 may be disposed on a side bezel of terminal 1100 and/or underlying display screen 1105. When the pressure sensor 1113 is disposed on the side frame of the terminal 1100, the holding signal of the terminal 1100 from the user can be detected, and the processor 1101 performs left-right hand identification or shortcut operation according to the holding signal collected by the pressure sensor 1113. When the pressure sensor 1113 is disposed at the lower layer of the display screen 1105, the processor 1101 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 1105. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1114 is configured to collect the user's fingerprint, and the processor 1101 identifies the user according to the fingerprint collected by the fingerprint sensor 1114, or the fingerprint sensor 1114 itself identifies the user according to the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 1101 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1114 may be disposed on the front, back, or side of the terminal 1100. When a physical button or a vendor logo is provided on the terminal 1100, the fingerprint sensor 1114 may be integrated with the physical button or the vendor logo.
Optical sensor 1115 is used to collect ambient light intensity. In one embodiment, the processor 1101 may control the display brightness of the display screen 1105 according to the ambient light intensity collected by the optical sensor 1115. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1105 is increased; when the ambient light intensity is low, the display brightness of the display screen 1105 is reduced. In another embodiment, processor 1101 may also dynamically adjust the shooting parameters of camera head assembly 1106 based on the ambient light intensity collected by optical sensor 1115.
The proximity sensor 1116, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 1100. The proximity sensor 1116 is used to capture the distance between the user and the front face of the terminal 1100. In one embodiment, when the proximity sensor 1116 detects that the distance between the user and the front face of the terminal 1100 gradually decreases, the processor 1101 controls the display screen 1105 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 1116 detects that the distance gradually increases, the processor 1101 controls the display screen 1105 to switch from the dark-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 11 does not constitute a limitation of terminal 1100, and may include more or fewer components than those shown, or may combine certain components, or may employ a different arrangement of components.
Fig. 12 is a schematic structural diagram of a server according to an embodiment of the present application, where the server 1200 may include one or more processors (CPUs) 1201 and one or more memories 1202, where at least one program code is stored in the one or more memories 1202, and is loaded and executed by the one or more processors 1201 to implement the methods according to the foregoing method embodiments. Certainly, the server 1200 may further have components such as a wired or wireless network interface, a keyboard, and an input/output interface, so as to perform input and output, and the server 1200 may further include other components for implementing the functions of the device, which is not described herein again.
In an exemplary embodiment, a computer-readable storage medium, such as a memory, including instructions executable by a processor to perform the method of finding song accompaniment in the above-described embodiments is also provided. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (12)

1. A method for searching song accompaniment is applied to a terminal, and comprises the following steps:
receiving a singing instruction;
acquiring a song name and a version identification of a target song audio;
sending an accompaniment acquisition request carrying the song name and the version identification of the target song audio to a server, wherein the accompaniment acquisition request is used for indicating the server to search the accompaniment audio corresponding to the target song audio;
and receiving the accompaniment audio corresponding to the target song audio sent by the server.
2. The method as claimed in claim 1, wherein the accompaniment acquisition request further carries a singer name of the target song audio;
the sending of the accompaniment acquisition request carrying the song name and the version identification of the target song audio to the server includes:
and sending an accompaniment acquisition request carrying the song name, the singer name and the version identification of the target song audio to a server.
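Outside the formal claim language, the request of claims 1–2 can be illustrated with a minimal Python sketch; the function name, endpoint payload shape, and field names (`song_name`, `version_id`, `singer_name`) are illustrative assumptions, not part of the patent:

```python
import json

def build_accompaniment_request(song_name, version_id, singer_name=None):
    """Build the accompaniment-acquisition request body described in
    claims 1-2: the song name and version identification are mandatory,
    while the singer name (claim 2) is an optional extra field."""
    payload = {"song_name": song_name, "version_id": version_id}
    if singer_name is not None:
        payload["singer_name"] = singer_name
    # Serialized as UTF-8 JSON; the patent does not specify a wire format.
    return json.dumps(payload).encode("utf-8")
```

The terminal would send this body to the server over whatever transport the implementation uses; the patent itself leaves the encoding and protocol unspecified.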
3. A method for searching song accompaniment, applied to a server, the method comprising:
receiving an accompaniment acquisition request which is sent by a terminal and carries a song name and a version identification of a target song audio;
acquiring the accompaniment audio corresponding to the song name and the version identification of the target song audio based on a pre-stored corresponding relationship among the song name, the version identification and the accompaniment audio;
and sending the accompaniment audio to the terminal.
4. The method as claimed in claim 3, wherein the accompaniment acquisition request further carries a singer name of the target song audio;
the acquiring the accompaniment audio corresponding to the song name and the version identification of the target song audio based on the pre-stored corresponding relationship among the song name, the version identification and the accompaniment audio comprises the following steps:
and acquiring the accompaniment audio corresponding to the song name, the singer name and the version identification of the target song audio based on the pre-stored corresponding relationship among the song name, the singer name, the version identification and the accompaniment audio.
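The server-side lookup of claims 3–4 amounts to a keyed mapping from (song name, singer name, version identification) to accompaniment audio. A minimal sketch, with purely illustrative keys and file names:

```python
# Pre-stored corresponding relationship among song name, singer name,
# version identification and accompaniment audio (claim 4).  The
# entries below are illustrative placeholders.
correspondence = {
    ("Song A", "Singer X", "studio"): "accompaniment_a_studio.ogg",
    ("Song A", "Singer X", "live"): "accompaniment_a_live.ogg",
}

def find_accompaniment(song_name, singer_name, version_id):
    """Return the accompaniment audio bound to the requested song name,
    singer name and version identification, or None if no entry exists."""
    return correspondence.get((song_name, singer_name, version_id))
```

Because the version identification is part of the key, two versions of the same song (e.g. a studio recording and a live recording) resolve to different accompaniments, which is the point of carrying the version identification in the request.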
5. The method of claim 3, further comprising:
acquiring a plurality of song audios and a plurality of accompaniment audios of the same song name;
determining, based on an audio alignment algorithm, at least one accompaniment audio corresponding to each song audio among the plurality of song audios and the plurality of accompaniment audios;
selecting, from the at least one accompaniment audio corresponding to each song audio, the accompaniment audio bound with the song audio;
and adding the song name, the version identification and the bound accompaniment audio of each song audio into the corresponding relation among the song name, the version identification and the accompaniment audio.
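The correspondence-building step of claim 5 can be sketched as follows. The `align` callable stands in for the audio alignment algorithm (which the patent does not specify), and binding is simplified here to "first match"; the full selection rules of claims 6–7 would replace that choice:

```python
def build_correspondence(song_audios, accompaniment_audios, align):
    """Build the (song name, version identification) -> accompaniment
    mapping of claim 5.  `align(song, acc)` is assumed to return True
    when the audio alignment algorithm matches the pair; the dict field
    names ('name', 'version') are illustrative."""
    correspondence = {}
    for song in song_audios:
        # Candidates whose audio aligns with this song audio.
        matches = [a for a in accompaniment_audios if align(song, a)]
        if matches:
            # Simplified binding: take the first aligned candidate.
            correspondence[(song["name"], song["version"])] = matches[0]
    return correspondence
```

In the claimed method, the selection among multiple aligned candidates is not "first match" but the reference-information cascade of claims 6–7.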
6. The method of claim 5, wherein the selecting, from the at least one accompaniment audio corresponding to each song audio, the accompaniment audio bound with the song audio comprises:
and selecting, according to reference information of each accompaniment audio, the accompaniment audio bound with the song audio from the at least one accompaniment audio corresponding to each song audio, wherein the reference information comprises one or more of: attribute information integrity, playing times, and storage duration.
7. The method of claim 6, wherein the selecting, according to the reference information of each accompaniment audio, the accompaniment audio bound with the song audio from the at least one accompaniment audio corresponding to each song audio comprises:
for each song audio, determining the attribute information integrity of each accompaniment audio in the at least one accompaniment audio corresponding to the song audio;
if a unique highest value exists among the determined attribute information integrities, determining the accompaniment audio corresponding to the unique highest value as the accompaniment audio bound with the song audio;
if a plurality of equal highest values exist among the determined attribute information integrities, acquiring the playing times of the accompaniment audios corresponding to the plurality of highest values;
if a unique highest value exists among the acquired playing times, determining the accompaniment audio corresponding to the unique highest playing times as the accompaniment audio bound with the song audio;
and if a plurality of equal highest values exist among the acquired playing times, acquiring the storage durations of the accompaniment audios corresponding to the plurality of highest playing times, and determining the accompaniment audio with the longest storage duration as the accompaniment audio bound with the song audio.
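Claim 7's tie-breaking cascade (attribute completeness, then play count, then storage duration) can be sketched directly. The candidate field names `completeness`, `play_count`, and `stored_days` are illustrative, not terms from the patent:

```python
def select_bound_accompaniment(candidates):
    """Apply the claim-7 cascade to the candidate accompaniments of one
    song audio and return the one to bind."""
    # Step 1: keep only candidates with the highest attribute
    # information integrity.
    best = max(c["completeness"] for c in candidates)
    pool = [c for c in candidates if c["completeness"] == best]
    if len(pool) == 1:
        return pool[0]
    # Step 2: among the remaining ties, keep the most-played candidates.
    best = max(c["play_count"] for c in pool)
    pool = [c for c in pool if c["play_count"] == best]
    if len(pool) == 1:
        return pool[0]
    # Step 3: finally prefer the accompaniment stored the longest.
    return max(pool, key=lambda c: c["stored_days"])
```

Each step only examines the survivors of the previous one, so a low-completeness accompaniment can never win on play count alone, matching the order of the claim.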
8. An apparatus for searching for song accompaniment, the apparatus comprising:
the receiving module is used for receiving a singing instruction;
the acquisition module is used for acquiring the song name and the version identification of the target song audio;
the sending module is used for sending an accompaniment acquisition request carrying the song name and the version identification of the target song audio to a server, wherein the accompaniment acquisition request is used for instructing the server to search for the accompaniment audio corresponding to the target song audio;
and the accompaniment audio receiving module is used for receiving the accompaniment audio corresponding to the target song audio sent by the server.
9. An apparatus for searching for song accompaniment, the apparatus comprising:
the receiving module is used for receiving an accompaniment acquisition request which is sent by a terminal and carries a song name and a version identification of a target song audio;
the acquisition module is used for acquiring the accompaniment audio corresponding to the song name and the version identification of the target song audio based on the pre-stored corresponding relationship among the song name, the version identification and the accompaniment audio;
and the sending module is used for sending the accompaniment audio to the terminal.
10. A system for searching for song accompaniment, the system comprising a terminal and a server, wherein:
the terminal is used for receiving a singing instruction; acquiring a song name and a version identification of a target song audio; sending an accompaniment acquisition request carrying the song name and the version identification of the target song audio to the server, wherein the accompaniment acquisition request is used for instructing the server to search for the accompaniment audio corresponding to the target song audio; and receiving the accompaniment audio corresponding to the target song audio sent by the server;
the server is used for receiving the accompaniment acquisition request which is sent by the terminal and carries the song name and the version identification of the target song audio; acquiring the accompaniment audio corresponding to the song name and the version identification of the target song audio based on a pre-stored corresponding relationship among the song name, the version identification and the accompaniment audio; and sending the accompaniment audio to the terminal.
11. A computer device comprising a processor and a memory, said memory having stored therein at least one instruction that is loaded and executed by said processor to implement a method of searching for song accompaniment according to any one of claims 1 to 7.
12. A computer-readable storage medium having stored therein at least one instruction, which is loaded and executed by a processor, to implement the method of searching for song accompaniment according to any one of claims 1 to 7.
CN201911236421.3A 2019-12-05 2019-12-05 Method, device, system, equipment and storage medium for searching song accompaniment Pending CN111008298A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911236421.3A CN111008298A (en) 2019-12-05 2019-12-05 Method, device, system, equipment and storage medium for searching song accompaniment

Publications (1)

Publication Number Publication Date
CN111008298A 2020-04-14

Family

ID=70115595

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911236421.3A Pending CN111008298A (en) 2019-12-05 2019-12-05 Method, device, system, equipment and storage medium for searching song accompaniment

Country Status (1)

Country Link
CN (1) CN111008298A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201417581A (en) * 2012-10-18 2014-05-01 Leo Systems Inc Online video karaoke system and karaoke video file searching method thereof
CN104657438A (en) * 2015-02-02 2015-05-27 联想(北京)有限公司 Information processing method and electronic equipment
CN105047185A (en) * 2015-05-26 2015-11-11 广州酷狗计算机科技有限公司 Method, device and system for obtaining audio frequency of accompaniment
CN106469557A (en) * 2015-08-18 2017-03-01 阿里巴巴集团控股有限公司 The offer method and apparatus of accompaniment music
US20180174559A1 (en) * 2016-12-15 2018-06-21 Michael John Elson Network musical instrument
CN110390925A (en) * 2019-08-02 2019-10-29 湖南国声声学科技股份有限公司深圳分公司 Voice and accompaniment synchronous method, terminal, bluetooth equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108538302B (en) Method and apparatus for synthesizing audio
CN109168073B (en) Method and device for displaying cover of live broadcast room
CN109144346B (en) Song sharing method and device and storage medium
CN110688082B (en) Method, device, equipment and storage medium for determining adjustment proportion information of volume
CN111061405B (en) Method, device and equipment for recording song audio and storage medium
CN110931053B (en) Method, device, terminal and storage medium for detecting recording time delay and recording audio
CN108831513B (en) Method, terminal, server and system for recording audio data
CN110266982B (en) Method and system for providing songs while recording video
CN110996167A (en) Method and device for adding subtitles in video
CN111081277B (en) Audio evaluation method, device, equipment and storage medium
CN111402844B (en) Song chorus method, device and system
CN112667844A (en) Method, device, equipment and storage medium for retrieving audio
CN111092991B (en) Lyric display method and device and computer storage medium
CN109961802B (en) Sound quality comparison method, device, electronic equipment and storage medium
CN113596516B (en) Method, system, equipment and storage medium for chorus of microphone and microphone
CN109547847B (en) Method and device for adding video information and computer readable storage medium
CN112086102B (en) Method, apparatus, device and storage medium for expanding audio frequency band
CN112118482A (en) Audio file playing method and device, terminal and storage medium
CN109003627B (en) Method, device, terminal and storage medium for determining audio score
CN108763521B (en) Method and device for storing lyric phonetic notation
WO2022227589A1 (en) Audio processing method and apparatus
CN111063364A (en) Method, apparatus, computer device and storage medium for generating audio
CN111294626A (en) Lyric display method and device
CN111063372A (en) Method, device and equipment for determining pitch characteristics and storage medium
CN111008298A (en) Method, device, system, equipment and storage medium for searching song accompaniment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200414)