CN104572882B - Audio data management method, server and client - Google Patents


Info

Publication number
CN104572882B
CN104572882B (application CN201410808946.0A)
Authority
CN
China
Prior art keywords
audio data
audio
data
server
frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410808946.0A
Other languages
Chinese (zh)
Other versions
CN104572882A
Inventor
敖绍青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd filed Critical Guangzhou Kugou Computer Technology Co Ltd
Priority to CN201410808946.0A priority Critical patent/CN104572882B/en
Publication of CN104572882A publication Critical patent/CN104572882A/en
Application granted granted Critical
Publication of CN104572882B publication Critical patent/CN104572882B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/683Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content

Landscapes

  • Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Reverberation, Karaoke And Other Acoustics (AREA)

Abstract

The present invention discloses an audio data management method, a server, and a client, belonging to the field of Internet technology. The method includes: receiving an audio search request sent by a first client; obtaining first audio data and its first identifier; sending the first audio data and the first identifier to the first client; and, upon receiving second audio data generated by the first client from third audio data recorded by a user, the first audio data, and the first identifier, storing the second audio data in correspondence with the first identifier. By returning the first audio data and the first identifier to the first client, the server can, after the first client submits second audio data, store that second audio data in correspondence with the first identifier. Because audio data is managed with the first identifier as a reference, the management of audio data is more orderly. Moreover, when a user searches for second audio data that other users generated from the same audio data, the search can proceed by the first identifier, which makes the search faster.

Description

Audio data management method, server and client
Technical field
The present invention relates to the field of Internet technology, and in particular to an audio data management method, a server, and a client.
Background technique
With the rapid development of Internet technology, a variety of karaoke (K-song) applications have appeared. With such software, a user can sing karaoke anytime and anywhere: the software provides an accompaniment, the user sings along with it, and the software records the user's vocals and synthesizes the vocals and the accompaniment into one piece of audio data.
When a user selects the accompaniment of a song to sing, the karaoke software generates one piece of audio data from the accompaniment and the recorded vocals. Therefore, when multiple users select the accompaniment of the same song, the software generates multiple pieces of audio data for that one accompaniment.
In this case, if each piece of audio data is stored without a fixed management rule, the storage becomes disordered, which makes the audio data hard for the karaoke software to manage. In addition, if a user wants to search for the audio data that other users generated from the same accompaniment, for example to compare karaoke performances, the disordered storage makes the desired audio data hard to find and the search time-consuming. In summary, how audio data is managed is of great importance.
Summary of the invention
To solve the problems in the related art, embodiments of the present invention provide an audio data management method, a server, and a client. The technical solutions are as follows:
In a first aspect, an audio data management method is provided, the method comprising:
receiving an audio search request sent by a first client, the audio search request carrying an audio keyword;
obtaining, according to the audio keyword, first audio data and a first identifier that uniquely identifies the first audio data, the audio information of the first audio data including the audio keyword;
sending the first audio data and the first identifier to the first client; and
upon receiving second audio data sent by the first client, storing the second audio data in correspondence with the first identifier, the second audio data being generated by the first client from third audio data recorded by a user, the first audio data, and the first identifier.
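For illustration only (the patent does not prescribe code), the first-aspect flow above can be sketched as a minimal in-memory server. The names `AudioServer`, `handle_search`, and the dictionary-based store are assumptions, not part of the disclosure.

```python
import uuid

class AudioServer:
    def __init__(self):
        self.accompaniments = {}   # first identifier -> first audio data
        self.covers = {}           # first identifier -> list of second audio data

    def register(self, first_audio):
        # Generate a first identifier that uniquely identifies the first audio data.
        first_id = uuid.uuid4().hex
        self.accompaniments[first_id] = first_audio
        return first_id

    def handle_search(self, keyword):
        # Return every (identifier, audio) pair whose audio information
        # contains the audio keyword carried by the search request.
        return [(fid, a) for fid, a in self.accompaniments.items()
                if keyword in a["info"]]

    def store_cover(self, first_id, second_audio):
        # Store the second audio data in correspondence with the first identifier.
        self.covers.setdefault(first_id, []).append(second_audio)
```

Keying the cover store on the first identifier is what makes later "covers of the same accompaniment" lookups a single dictionary access rather than a scan.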
With reference to the first aspect, in a first possible implementation of the first aspect, before the sending of the first audio data and the first identifier to the first client, the method further includes:
obtaining audio data;
judging, according to the audio information of the audio data, whether the audio data is already stored locally;
if the audio data is not stored locally, storing the audio data, the audio data including the first audio data; and
generating, for the audio data, an audio identifier that uniquely identifies the audio data.
With reference to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, the obtaining of audio data includes:
obtaining, every first preset duration, first-class audio data from a first audio data server, the first-class audio data being audio data whose sound quality is higher than a specified quality standard;
obtaining, every second preset duration, second-class audio data from a second audio data server, the second-class audio data being audio data whose sound quality is lower than the specified quality standard;
obtaining, every third preset duration, third-class audio data from a third audio data server, the third-class audio data being audio data obtained by performing noise reduction on original audio, the sound quality of the third-class audio data being lower than that of the second-class audio data; and
receiving fourth-class audio data sent by a second client, the fourth-class audio data having been obtained by the second client through a search of a third-party server.
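The first three acquisition channels above each run on their own preset duration. A hedged sketch of such interval-based polling follows; the `make_poller` helper, the fetch callables, and the tick-driven design are illustrative assumptions, not the disclosed implementation.

```python
def make_poller(sources):
    """sources: list of (interval, fetch_fn) pairs, one per audio data server.
    Returns a tick(now) function that calls each fetch_fn whenever its
    preset duration has elapsed and collects whatever it returns."""
    last_run = {i: float("-inf") for i in range(len(sources))}

    def tick(now):
        fetched = []
        for i, (interval, fetch) in enumerate(sources):
            if now - last_run[i] >= interval:
                last_run[i] = now
                fetched.extend(fetch())
        return fetched

    return tick
```

A real server would drive `tick` from a scheduler; here the current time is passed in explicitly so the behavior is easy to verify.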
With reference to the second possible implementation of the first aspect, in a third possible implementation of the first aspect, after the obtaining, every first preset duration, of first-class audio data from the first audio data server, the method further includes:
after determining, according to the audio information of the first audio data, that the first-class audio data includes the first audio data, replacing the locally stored first audio data with the first audio data in the first-class audio data;
after the obtaining, every second preset duration, of second-class audio data from the second audio data server, the method further includes:
after determining, according to the audio information of the first audio data, that the second-class audio data includes the first audio data, judging whether the locally stored first audio data originates from the third audio data server;
if the locally stored first audio data originates from the third audio data server, replacing the locally stored first audio data with the first audio data in the second-class audio data;
if the locally stored first audio data does not originate from the third audio data server, ignoring the first audio data in the second-class audio data;
after the obtaining, every third preset duration, of third-class audio data from the third audio data server, the method further includes:
after determining, according to the audio information of the first audio data, that the first audio data is already stored locally, ignoring the first audio data in the obtained third-class audio data;
after the receiving of the fourth-class audio data sent by the second client, the method further includes:
after determining, according to the audio information of the first audio data, that the first audio data is already stored locally, and after determining that the sound quality of the first audio data in the fourth-class audio data is better than that of the locally stored first audio data, updating the locally stored first audio data; and
after determining that the audio information of the first audio data in the fourth-class audio data contains more content than the audio information of the locally stored first audio data, updating the audio information of the locally stored first audio data.
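The replacement and update rules above amount to a per-class merge policy: first-class data always replaces, second-class data replaces only noise-reduced copies, third-class data is ignored when a copy exists, and fourth-class data replaces or enriches only when better. The sketch below is an illustration under assumed field names (`source`, `quality`, `info`), not the claimed implementation.

```python
def merge(stored, incoming, incoming_class):
    """Return the entry to keep when `incoming` duplicates `stored`."""
    if incoming_class == 1:                       # high quality: always replace
        return incoming
    if incoming_class == 2:                       # replace only noise-reduced copies
        return incoming if stored["source"] == "third" else stored
    if incoming_class == 3:                       # already stored locally: ignore
        return stored
    if incoming_class == 4:                       # keep the better of the two
        if incoming["quality"] > stored["quality"]:
            return incoming
        if len(incoming["info"]) > len(stored["info"]):
            # Richer audio information only: update the info, keep the audio.
            stored = dict(stored, info=incoming["info"])
        return stored
    return stored
```

The policy is pure and side-effect free, so each rule in the claim text can be checked in isolation.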
With reference to the first aspect or any one of the first to third possible implementations of the first aspect, in a fourth possible implementation of the first aspect, the first audio data is accompaniment data.
Second aspect provides a kind of audio data management method, which comprises
Audio search requests are sent, the audio search requests carry audio keyword;
Obtain the first audio data and the first identifier for the first audio data described in unique identification, first audio The audio-frequency information of data includes the audio keyword;
According to third audio data, first audio data and the first identifier that user records, the second sound is generated Frequency evidence;
The second audio data is sent to server, makes the server according to the first identifier, corresponding storage The second audio data.
With reference to the second aspect, in a first possible implementation of the second aspect, the sending of the audio search request includes:
sending audio search requests to the server and to a third-party server respectively; and
the obtaining of the first audio data and the first identifier that uniquely identifies the first audio data includes:
receiving the first audio data returned by the server and the first identifier generated by the server for the first audio data;
or, receiving at least one piece of fourth audio data returned by the third-party server; after detecting that the user has selected first audio data from the at least one piece of fourth audio data, submitting the selected first audio data to the server so that the server generates a first identifier for it; and receiving the first identifier generated by the server for the first audio data;
or, receiving the first audio data returned by the server and the first identifier generated by the server for the first audio data, and receiving at least one piece of fourth audio data returned by the third-party server; when it is detected that the user has selected the first audio data returned by the server, taking the first identifier returned together with that first audio data as the obtained first identifier; and when it is detected that the user has selected, as the first audio data, a piece of audio data from the at least one piece of fourth audio data, submitting the selected first audio data to the server so that the server generates a first identifier for it, receiving that first identifier, and taking the received first identifier as the obtained first identifier.
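The acquisition modes just described differ only in whether the user's selection already carries a server-generated first identifier. A minimal illustrative sketch, with `resolve_first_id` and the `first_id` field as assumed names:

```python
def resolve_first_id(selection, submit_fn):
    """selection: the audio data the user picked, as a dict.
    If it came from the managing server it already carries a 'first_id';
    a third-party result does not, so it is submitted via submit_fn and
    the server-generated identifier is returned instead."""
    if "first_id" in selection:
        return selection["first_id"]        # picked from the managing server
    return submit_fn(selection)             # third-party pick: server assigns one
```

Either way the client ends up holding exactly one first identifier before it generates and uploads the second audio data.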
With reference to the first possible implementation of the second aspect, in a second possible implementation of the second aspect, after the receiving of the at least one piece of fourth audio data returned by the third-party server, the method further includes:
determining, according to the audio information of the first audio data returned by the server and the audio information of the at least one piece of fourth audio data, whether the first audio data returned by the server is present in the at least one piece of fourth audio data;
if the first audio data returned by the server is present in the at least one piece of fourth audio data, deleting it from the at least one piece of fourth audio data to obtain at least one piece of updated fourth audio data; and
displaying the first audio data and the at least one piece of updated fourth audio data.
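The deduplication step above can be sketched as follows, assuming that two results refer to the same audio when their audio information matches; the field name `info` and the function name `dedup` are illustrative.

```python
def dedup(server_result, third_party_results):
    """Drop any third-party result whose audio information matches the
    first audio data returned by the server, then return both for display."""
    updated = [r for r in third_party_results
               if r["info"] != server_result["info"]]
    return server_result, updated
```

This prevents the same song from being shown twice when both the server and the third-party server return it.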
With reference to the second possible implementation of the second aspect, in a third possible implementation of the second aspect, the displaying of the first audio data and the at least one piece of updated fourth audio data includes:
displaying the first audio data and the at least one piece of updated fourth audio data in different regions of a display screen according to their different categories.
With reference to the second aspect or any one of the first to third possible implementations of the second aspect, in a fourth possible implementation of the second aspect, the first audio data is accompaniment data.
In a third aspect, a server is provided, the server comprising:
a receiving module, configured to receive an audio search request sent by a first client, the audio search request carrying an audio keyword;
a first obtaining module, configured to obtain, according to the audio keyword, first audio data and a first identifier that uniquely identifies the first audio data, the audio information of the first audio data including the audio keyword;
a sending module, configured to send the first audio data and the first identifier to the first client; and
a first storage module, configured to, upon receiving second audio data sent by the first client, store the second audio data in correspondence with the first identifier, the second audio data being generated by the first client from third audio data recorded by a user, the first audio data, and the first identifier.
With reference to the third aspect, in a first possible implementation of the third aspect, the server further includes:
a second obtaining module, configured to obtain audio data;
a judging module, configured to judge, according to the audio information of the audio data, whether the audio data is already stored locally;
a second storage module, configured to store the audio data when the audio data is not stored locally, the audio data including the first audio data; and
a generating module, configured to generate, for the audio data, an audio identifier that uniquely identifies the audio data.
With reference to the first possible implementation of the third aspect, in a second possible implementation of the third aspect, the second obtaining module includes:
a first obtaining unit, configured to obtain, every first preset duration, first-class audio data from a first audio data server, the first-class audio data being audio data whose sound quality is higher than a specified quality standard;
a second obtaining unit, configured to obtain, every second preset duration, second-class audio data from a second audio data server, the second-class audio data being audio data whose sound quality is lower than the specified quality standard;
a third obtaining unit, configured to obtain, every third preset duration, third-class audio data from a third audio data server, the third-class audio data being audio data obtained by performing noise reduction on original audio, the sound quality of the third-class audio data being lower than that of the second-class audio data; and
a receiving unit, configured to receive fourth-class audio data sent by a second client, the fourth-class audio data having been obtained by the second client through a search of a third-party server.
With reference to the second possible implementation of the third aspect, in a third possible implementation of the third aspect, the second obtaining module further includes:
a first replacing unit, configured to, after it is determined according to the audio information of the first audio data that the first-class audio data includes the first audio data, replace the locally stored first audio data with the first audio data in the first-class audio data;
a judging unit, configured to, after it is determined according to the audio information of the first audio data that the second-class audio data includes the first audio data, judge whether the locally stored first audio data originates from the third audio data server;
a second replacing unit, configured to, when the locally stored first audio data originates from the third audio data server, replace the locally stored first audio data with the first audio data in the second-class audio data;
a first ignoring unit, configured to, when the locally stored first audio data does not originate from the third audio data server, ignore the first audio data in the second-class audio data;
a second ignoring unit, configured to, after it is determined according to the audio information of the first audio data that the first audio data is already stored locally, ignore the first audio data in the obtained third-class audio data;
a first updating unit, configured to, after it is determined according to the audio information of the first audio data that the first audio data is already stored locally and that the sound quality of the first audio data in the fourth-class audio data is better than that of the locally stored first audio data, update the locally stored first audio data; and
a second updating unit, configured to, after it is determined that the audio information of the first audio data in the fourth-class audio data contains more content than the audio information of the locally stored first audio data, update the audio information of the locally stored first audio data.
With reference to the third aspect or any one of the first to third possible implementations of the third aspect, in a fourth possible implementation of the third aspect, the first audio data is accompaniment data.
In a fourth aspect, a client is provided, the client comprising:
a first sending module, configured to send an audio search request, the audio search request carrying an audio keyword;
an obtaining module, configured to obtain first audio data and a first identifier that uniquely identifies the first audio data, the audio information of the first audio data including the audio keyword;
a generating module, configured to generate second audio data according to third audio data recorded by a user, the first audio data, and the first identifier; and
a second sending module, configured to send the second audio data to a server, so that the server stores the second audio data in correspondence with the first identifier.
With reference to the fourth aspect, in a first possible implementation of the fourth aspect, the first sending module is configured to send audio search requests to the server and to a third-party server respectively; and
the obtaining module is configured to:
receive the first audio data returned by the server and the first identifier generated by the server for the first audio data;
or, receive at least one piece of fourth audio data returned by the third-party server; after detecting that the user has selected first audio data from the at least one piece of fourth audio data, submit the selected first audio data to the server so that the server generates a first identifier for it; and receive the first identifier generated by the server for the first audio data;
or, receive the first audio data returned by the server and the first identifier generated by the server for the first audio data, and receive at least one piece of fourth audio data returned by the third-party server; when it is detected that the user has selected the first audio data returned by the server, take the first identifier returned together with that first audio data as the obtained first identifier; and when it is detected that the user has selected, as the first audio data, a piece of audio data from the at least one piece of fourth audio data, submit the selected first audio data to the server so that the server generates a first identifier for it, receive that first identifier, and take the received first identifier as the obtained first identifier.
With reference to the first possible implementation of the fourth aspect, in a second possible implementation of the fourth aspect, the client further includes:
a determining module, configured to determine, according to the audio information of the first audio data returned by the server and the audio information of the at least one piece of fourth audio data, whether the first audio data returned by the server is present in the at least one piece of fourth audio data;
a deleting module, configured to, when the first audio data returned by the server is present in the at least one piece of fourth audio data, delete it from the at least one piece of fourth audio data to obtain at least one piece of updated fourth audio data; and
a display module, configured to display the first audio data and the at least one piece of updated fourth audio data.
With reference to the second possible implementation of the fourth aspect, in a third possible implementation of the fourth aspect, the display module is configured to display the first audio data and the at least one piece of updated fourth audio data in different regions of a display screen according to their different categories.
With reference to the fourth aspect or any one of the first to third possible implementations of the fourth aspect, in a fourth possible implementation of the fourth aspect, the first audio data is accompaniment data.
The technical solutions provided by the embodiments of the present invention bring the following benefits:
By returning the first audio data and its first identifier to the first client, it is ensured that after the first client submits second audio data, the server can store the second audio data in correspondence with the first identifier. Because audio data is managed with the first identifier as a reference, the management of audio data is more orderly. In addition, when a user wants to search for second audio data that other users generated from the same first audio data, for example to compare karaoke performances, the search can proceed by the first identifier, which makes the search faster.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed for describing the embodiments are briefly introduced below. Apparently, the drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative efforts.
Fig. 1 is a schematic diagram of an implementation environment involved in an audio data management method according to an embodiment of the present invention;
Fig. 2 is a flowchart of an audio data management method according to another embodiment of the present invention;
Fig. 3 is a flowchart of an audio data management method according to another embodiment of the present invention;
Fig. 4 is a flowchart of an audio data management method according to another embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a server according to another embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a server according to another embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a client according to another embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a terminal according to another embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the implementations of the present invention are described below in further detail with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an implementation environment involved in an audio data management method provided by an embodiment of the present invention. As shown in Fig. 1, the implementation environment includes a server cluster, at least one client, and at least one third-party server. The server cluster includes a server 101 and several audio data servers, namely a first audio data server 102, a second audio data server 103, and a third audio data server 104, which produce first-class audio data, second-class audio data, and third-class audio data respectively. The server 101 provides an accompanied-singing service, i.e. a karaoke service: after a client accesses the server 101, its user can sing along with the audio data found through search.
Specifically, the server 101 obtains audio data from each audio data server at preset intervals and integrates the audio data processed by the audio data servers, ensuring that it can provide good audio data to the users of the clients. The at least one client includes a first client 105, which can search the server 101 for audio data and let the user sing along with the audio data returned by the server 101. Optionally, the first client 105 can also search a third-party server 106 for audio data and let the user sing along with the audio data returned by the third-party server 106; the audio data that the first client 105 finds on the third-party server 106 is fourth-class audio data. The at least one client further includes at least one second client 107: after the second client 107 finds fourth-class audio data on the third-party server 106, it submits the data to the server 101, and the server stores it, ensuring that when the first client 105 later searches for that audio data, it can be obtained from the server 101.
As shown in Fig. 1, the server 101 is connected to each audio data server through a network; the first client 105 and the second client 107 are each connected to the server 101 through a network, and each connected to the third-party server 106 through a network. The network may be a wired network or a wireless network.
The terminals running the first client 105 and the second client 107 may be smartphones, smart bands, wearable devices, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers, and the like.
The audio data management method is described in detail in the following embodiments.
With reference to the implementation environment shown in Fig. 1, Fig. 2 is a flowchart of an audio data management method according to an exemplary embodiment. Taking the server as the entity that performs the method provided by this embodiment of the present invention, and referring to Fig. 2, the method flow includes:
201: receiving an audio search request sent by a first client, the audio search request carrying an audio keyword;
202: obtaining, according to the audio keyword, first audio data and a first identifier that uniquely identifies the first audio data, the audio information of the first audio data including the audio keyword;
203: sending the first audio data and the first identifier to the first client;
204: upon receiving second audio data sent by the first client, storing the second audio data in correspondence with the first identifier, the second audio data being generated by the first client from third audio data recorded by a user, the first audio data, and the first identifier.
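As an illustration of step 204's client side only, the second audio data could be assembled roughly as below; `make_second_audio` and the byte-wise clipped mixing are stand-ins for real audio synthesis, not the disclosed method.

```python
def make_second_audio(third_audio, first_audio, first_id):
    """Mix the user's recording (third audio data) with the accompaniment
    (first audio data) and tag the result with the first identifier, so the
    server can store it in correspondence with that identifier."""
    # Toy mixing: per-byte sum clipped to the 8-bit range.
    mixed = bytes(min(a + b, 255) for a, b in zip(third_audio, first_audio))
    return {"first_id": first_id, "pcm": mixed}
```

Carrying the first identifier inside the upload is what lets the server in step 204 store the result without re-identifying the accompaniment.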
In the method provided by this embodiment of the present invention, by returning the first audio data and its first identifier to the first client, it is ensured that after the first client submits second audio data, the server can store the second audio data in correspondence with the first identifier. Because audio data is managed with the first identifier as a reference, the management of audio data is more orderly. In addition, when a user wants to search for second audio data that other users generated from the same first audio data, for example to compare karaoke performances, the search can proceed by the first identifier, which makes the search faster.
In another embodiment, before sending the first audio data and first identifier to the first client, further includes:
Obtain audio data;
According to the audio-frequency information of audio data, judge locally whether be stored with audio data;
If locally not stored have audio data, audio data is stored, wherein audio data includes the first audio number According to;
The audio identification for being used for unique identification audio data is generated for audio data.
In another embodiment, acquiring the audio data includes:
acquiring, at intervals of a first preset duration, first-class audio data from a first audio data server, wherein the first-class audio data is audio data whose sound quality is higher than a specified sound quality standard;
acquiring, at intervals of a second preset duration, second-class audio data from a second audio data server, wherein the second-class audio data is audio data whose sound quality is lower than the specified sound quality standard;
acquiring, at intervals of a third preset duration, third-class audio data from a third audio data server, wherein the third-class audio data is audio data obtained by performing noise reduction on original audio, and the sound quality of the third-class audio data is lower than that of the second-class audio data; and
receiving fourth-class audio data sent by a second client, wherein the fourth-class audio data is obtained by the second client through searching a third-party server.
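The interval-based acquisition of the three polled classes can be sketched in Python. This is a minimal illustration, not part of the claimed embodiment: the day-based scheduling, the interval values, and the class names are all assumptions, since the embodiment deliberately leaves the first, second, and third preset durations unspecified.

```python
# Hypothetical per-class acquisition intervals, in days (the embodiment
# leaves the first, second, and third preset durations unspecified).
INTERVALS = {"first_class": 7, "second_class": 7, "third_class": 14}

def due_classes(day, intervals=INTERVALS):
    """Return which polled audio-data classes are due on a given day,
    assuming each class is fetched when the day count is a multiple of
    its preset duration. Fourth-class data is pushed by the second
    client rather than polled, so it does not appear here."""
    return sorted(c for c, d in intervals.items() if day % d == 0)

assert due_classes(7) == ["first_class", "second_class"]
assert due_classes(14) == ["first_class", "second_class", "third_class"]
```

In a real deployment the same idea would be driven by a scheduler (e.g. a weekly cron job per audio data server) rather than a day counter.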
In another embodiment, after the first-class audio data is acquired from the first audio data server at intervals of the first preset duration, the method further includes:
after it is determined, according to audio information of the first audio data, that the first-class audio data includes the first audio data, replacing the locally stored first audio data with the first audio data in the first-class audio data;
after the second-class audio data is acquired from the second audio data server at intervals of the second preset duration, the method further includes:
after it is determined, according to the audio information of the first audio data, that the second-class audio data includes the first audio data, judging whether the locally stored first audio data originates from the third audio data server;
if the locally stored first audio data originates from the third audio data server, replacing the locally stored first audio data with the first audio data in the second-class audio data;
if the locally stored first audio data does not originate from the third audio data server, ignoring the first audio data in the second-class audio data;
after the third-class audio data is acquired from the third audio data server at intervals of the third preset duration, the method further includes:
after it is determined, according to the audio information of the first audio data, that the first audio data is already stored locally, ignoring the first audio data in the acquired third-class audio data;
after the fourth-class audio data sent by the second client is received, the method further includes:
after it is determined, according to the audio information of the first audio data, that the first audio data is already stored locally, and it is determined that the sound quality of the first audio data in the fourth-class audio data is better than that of the locally stored first audio data, updating the locally stored first audio data; and
after it is determined that the content of the audio information of the first audio data in the fourth-class audio data exceeds the content of the audio information of the locally stored first audio data, updating the audio information of the locally stored first audio data.
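The replacement and update rules for a locally stored copy can be condensed into a small decision function. The following is a hedged sketch only: the dictionary fields (`source`, `quality`) and the class labels are illustrative assumptions, not data structures prescribed by the embodiment.

```python
def update_action(local, incoming_class, incoming):
    """Decide how the server handles an incoming copy of audio data that
    is already stored locally. `local` describes the stored copy; its
    'source' field names the audio data server it came from."""
    if incoming_class == "first":      # top quality: always replace
        return "replace"
    if incoming_class == "second":     # replace only noise-reduced copies
        return "replace" if local["source"] == "third" else "ignore"
    if incoming_class == "third":      # never downgrade a stored copy
        return "ignore"
    if incoming_class == "fourth":     # client-made: compare sound quality
        return "replace" if incoming["quality"] > local["quality"] else "ignore"
    raise ValueError(incoming_class)

stored = {"source": "third", "quality": 2}
assert update_action(stored, "first", {}) == "replace"
assert update_action(stored, "second", {}) == "replace"  # local copy is noise-reduced
assert update_action({"source": "first", "quality": 5}, "second", {}) == "ignore"
assert update_action(stored, "third", {}) == "ignore"
assert update_action(stored, "fourth", {"quality": 3}) == "replace"
```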
In another embodiment, the first audio data is accompaniment data.
All the above optional technical solutions may be combined in any manner to form optional embodiments of the present invention, which will not be described one by one here.
With reference to the implementation environment shown in Fig. 1 and the content of the embodiment corresponding to Fig. 2, Fig. 3 is a flowchart of an audio data management method according to an exemplary embodiment. Taking the first client executing the method provided in this embodiment of the present invention as an example, referring to Fig. 3, the method flow provided in this embodiment of the present invention includes:
301, sending an audio search request, wherein the audio search request carries an audio keyword.
302, acquiring first audio data and a first identifier for uniquely identifying the first audio data, wherein audio information of the first audio data includes the audio keyword.
303, generating second audio data according to third audio data recorded by a user, the first audio data, and the first identifier.
304, sending the second audio data to a server, so that the server stores the second audio data in correspondence with the first identifier.
In the method provided by this embodiment of the present invention, by receiving the first audio data and the first identifier of the first audio data returned by the server, it is ensured that after the user sings along with the first audio data and the second audio data is obtained, the second audio data is submitted to the server, so that the server can store the first identifier in correspondence with the second audio data. Because the server takes the first identifier as the reference when managing the audio data, the server's management of the audio data is more orderly. In addition, when a user wants to search for the second audio data that other users generated from the same first audio data, so as to compare karaoke performances, the server can search by the first identifier, making the search process more time-saving.
In another embodiment, sending the audio search request includes:
sending the audio search request to the server and to a third-party server respectively; and
acquiring the first audio data and the first identifier for uniquely identifying the first audio data includes:
receiving the first audio data returned by the server and the first identifier generated by the server for the first audio data;
or, receiving at least one fourth audio data item returned by the third-party server; after it is detected that the user selects the first audio data from the at least one fourth audio data item, submitting the selected first audio data to the server, so that the server generates a first identifier for the first audio data; and receiving the first identifier generated by the server for the first audio data;
or, receiving the first audio data returned by the server and the first identifier generated by the server for the first audio data, and receiving at least one fourth audio data item returned by the third-party server; after it is detected that the user selects the first audio data returned by the server, taking the first identifier returned by the server together with the first audio data as the acquired first identifier; after it is detected that the user selects one of the at least one fourth audio data item as the first audio data, submitting the selected first audio data to the server, and after the server generates a first identifier for the selected first audio data, receiving the first identifier generated by the server for the selected first audio data and taking the received first identifier as the acquired first identifier.
In another embodiment, after the at least one fourth audio data item returned by the third-party server is received, the method further includes:
determining, according to the audio information of the first audio data returned by the server and the audio information of the at least one fourth audio data item, whether the first audio data returned by the server exists among the at least one fourth audio data item;
if the first audio data returned by the server exists among the at least one fourth audio data item, deleting the first audio data returned by the server from the at least one fourth audio data item, to obtain at least one updated fourth audio data item; and
displaying the first audio data and the at least one updated fourth audio data item.
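A minimal sketch of the client-side deduplication step, under the assumption that audio information is compared by a title field; a hash value of the audio data would serve equally well, and the field names here are illustrative rather than specified by the embodiment.

```python
def dedup_fourth(first_audio, fourth_items, key="title"):
    """Remove from the third-party results any item whose audio
    information matches the first audio data returned by the server,
    yielding the updated fourth audio data to display."""
    return [item for item in fourth_items if item[key] != first_audio[key]]

first = {"title": "Song B", "singer": "Singer A"}
fourth = [{"title": "Song B", "singer": "Singer A"},
          {"title": "Song C", "singer": "Singer D"}]
updated = dedup_fourth(first, fourth)
assert updated == [{"title": "Song C", "singer": "Singer D"}]
# The client then displays `first` and `updated`, e.g. in different
# regions of the screen according to their categories.
```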
In another embodiment, displaying the first audio data and the at least one updated fourth audio data item includes:
displaying, according to the different categories of the first audio data and the at least one updated fourth audio data item, the first audio data and the at least one updated fourth audio data item in different regions of a display screen.
In another embodiment, the first audio data is accompaniment data.
All the above optional technical solutions may be combined in any manner to form optional embodiments of the present invention, which will not be described one by one here.
With reference to the implementation environment shown in Fig. 1 and the content of the embodiments corresponding to Fig. 2 or Fig. 3, Fig. 4 is a flowchart of an audio data management method according to an exemplary embodiment. Taking the server and the first client interacting to execute the method provided in this embodiment of the present invention as an example, referring to Fig. 4, the method flow provided in this embodiment of the present invention includes:
401, the first client sends an audio search request to the server, wherein the audio search request carries an audio keyword.
In this embodiment of the present invention, the server provides a karaoke service and provides the audio data needed for karaoke. When the user of the first client sings karaoke, the first client can request audio data from the server, and the user can sing along with the requested audio data. When the first client detects the user's karaoke operation and detects that the user performs an operation of searching for the audio data of a certain song, the first client is triggered to send an audio search request to the server. For example, after it is detected that the user has entered an audio keyword in the audio service interface provided by the first client, and it is further detected that the search option in the audio service interface is selected, sending the audio search request to the server is triggered.
The audio search request carries the audio keyword, which is a word or phrase that identifies a feature of the audio data requested by the first client. For example, the audio keyword may be a song title and/or a singer's name. By carrying the audio keyword in the audio search request, the server is able to learn which audio data the first client needs to search for.
402, after receiving the audio search request sent by the first client, the server acquires, according to the audio keyword, first audio data and a first identifier for uniquely identifying the first audio data, wherein audio information of the first audio data includes the audio keyword.
After receiving the audio search request sent by the first client, in order to return to the first client the audio data it needs, the server acquires the first audio data according to the audio keyword, wherein the first audio data is the audio data, corresponding to the audio keyword, acquired by the server.
In addition, in this embodiment of the present invention, to facilitate management of the actual performances synthesized from accompaniment audio data, the server may generate a globally unique identifier for each audio data item stored locally on the server. In this step, to facilitate management of the second audio data subsequently generated by the first client according to the first audio data, the server also needs to acquire the first identifier of the first audio data.
The manner in which the server acquires the first audio data includes, but is not limited to: querying, according to the audio keyword, all the audio data pre-stored locally, obtaining the audio data matching the audio keyword, and taking the audio data matching the audio keyword as the first audio data.
For example, if the audio keyword is singer A and song B, the server may first query, among the locally stored audio data, all the audio data corresponding to singer A, then query song B among all the audio data corresponding to singer A, and thereby take the audio data matching both singer A and song B as the first audio data.
Further, before acquiring the first identifier, the server needs to generate in advance a globally unique identifier for each locally stored audio data item. When generating the identifier for each audio data item, the server may do so according to the audio information of the audio data item. For example, if the audio information of a locally stored audio data item includes the song title, the singer's name, and the source of the audio data, the server may generate the identifier of each audio data item according to its song title, singer's name, and source.
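As a hedged sketch, the identifier might be derived by hashing the audio information fields named above. SHA-256, the field separator, and the function name are assumptions for illustration only; the embodiment does not prescribe a particular identifier scheme.

```python
import hashlib

def make_identifier(title, singer, source):
    """Derive a globally unique identifier from the audio information
    (song title, singer's name, source of the audio data) by hashing
    the three fields together; any collision-resistant digest would do."""
    payload = "|".join((title, singer, source)).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

id_a = make_identifier("Song B", "Singer A", "first_audio_data_server")
id_b = make_identifier("Song B", "Singer A", "second_audio_data_server")
assert id_a != id_b  # same song from a different source -> different identifier
assert id_a == make_identifier("Song B", "Singer A", "first_audio_data_server")
```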
The embodiment of the present invention does not specifically limit the type of the first audio data. In specific implementation, since a user generally needs a certain accompaniment when singing karaoke, the first audio data includes, but is not limited to, accompaniment data.
403, the server sends the first audio data and the first identifier to the first client.
To enable the user of the first client to sing along with the first audio data, the server sends the first audio data to the first client. To facilitate subsequent management of the second audio data produced by the first client according to the first audio data, the server also sends the first identifier to the first client.
Before the server sends the first audio data and the first identifier to the first client, in order to ensure that the first client's demand for various audio data can be met, the server should acquire various audio data in advance and generate a unique identifier for each acquired audio data item, ensuring that the acquired audio data includes the first audio data requested by the first client. The manners in which the server acquires audio data and generates unique identifiers for the acquired audio data include, but are not limited to, the following two:
First manner: the server acquires audio data from the audio data servers in the server cluster, and generates unique identifiers for the audio data acquired from the audio data servers.
With reference to Fig. 1, the audio data servers here are the first audio data server, the second audio data server, and the third audio data server in the server cluster. The specific implementation of this manner includes, but is not limited to, the following steps 4031 to 4034:
4031, the server acquires audio data from the audio data servers at intervals of a preset duration.
The embodiment of the present invention does not specifically limit the specific value of the preset duration; it may be set as needed in specific implementation. For example, the preset duration may be one week, two weeks, and so on. With reference to Fig. 1, in this embodiment of the present invention, the server cluster includes different audio data servers, and each audio data server can process and obtain a different type of audio data. Therefore, to facilitate management of the audio data acquired from each audio data server, the server may use different time intervals when acquiring audio data from the different audio data servers.
In addition, to distinguish the audio data acquired from different audio data servers, in this embodiment of the present invention the audio data acquired by the server from the first audio data server is defined as first-class audio data, that is, the first audio data server can process and obtain first-class audio data; the audio data acquired by the server from the second audio data server is defined as second-class audio data, that is, the second audio data server can process and obtain second-class audio data; and the audio data acquired by the server from the third audio data server is defined as third-class audio data, that is, the third audio data server can process and obtain third-class audio data.
Since different audio data servers process and obtain audio data of different sound quality, the sound quality of the first-class audio data, the second-class audio data, and the third-class audio data differs. Specifically, in this embodiment of the present invention, the sound quality of the first-class audio data is higher than a specified sound quality standard; the sound quality of the second-class audio data is lower than the specified sound quality standard; and the third-class audio data is audio data obtained by performing noise reduction on original audio, the sound quality of the third-class audio data being lower than that of the second-class audio data. The specified sound quality standard may be a numerical value commonly used in the industry to distinguish the sound quality of different audio data.
In light of the above description of the sound quality of the first-class, second-class, and third-class audio data, the first-class audio data may be called high-quality audio data, the second-class audio data may be called ordinary audio data, and the third-class audio data may be called noise-reduced audio data.
In addition, to more easily distinguish the first-class, second-class, and third-class audio data, additional attributes may also be set for each of them. For example, the first-class audio data is produced by professionals, has undergone quality inspection, and each first-class audio data item is bound with lyrics; the second-class audio data is the audio data of songs whose search volume is greater than a first specified threshold; and the third-class audio data is the audio data of songs whose search volume is greater than a second specified threshold, wherein the first specified threshold is less than the second specified threshold, for example, the first specified threshold is 1,000,000 and the second specified threshold is 2,000,000.
When acquiring audio data from the audio data servers, the server may execute the following process: at intervals of the first preset duration, acquiring the first-class audio data from the first audio data server; at intervals of the second preset duration, acquiring the second-class audio data from the second audio data server; and at intervals of the third preset duration, acquiring the third-class audio data from the third audio data server. The embodiment of the present invention likewise does not specifically limit the specific values of the first preset duration, the second preset duration, and the third preset duration.
For example, the server may acquire the first-class audio data from the first audio data server at one o'clock every Monday morning, acquire the second-class audio data from the second audio data server at two o'clock every Monday morning, and acquire the third-class audio data from the third audio data server at three o'clock every Monday morning.
The manner in which the server acquires audio data from the audio data servers includes, but is not limited to, doing so through interfaces connected to the audio data servers. For example, the server may acquire the first-class audio data from the first audio data server through the interface connected to the first audio data server, acquire the second-class audio data from the second audio data server through the interface connected to the second audio data server, and acquire the third-class audio data from the third audio data server through the interface connected to the third audio data server.
4032, the server judges, according to the audio information of the acquired audio data, whether the audio data is already stored locally; if the audio data is not stored locally, steps 4033 and 4034 are performed; if the acquired audio data is already stored locally, steps 4035 to 4037 are performed.
The audio information is information that identifies the characteristics of the audio data. For example, when producing audio data, the industry generally generates a hash value for each audio data item, and different audio data of the same type can be distinguished by their hash values; therefore, the audio information may be the hash value of the audio data. In addition, the audio information may also be the song title, singer's name, data size, playing duration, and so on corresponding to the audio data.
To prevent the server from occupying unnecessary local storage space by storing duplicate audio data, after acquiring audio data from an audio data server, the server judges, according to the audio information of the acquired audio data, whether the audio data is already stored locally. The manner of judgment includes, but is not limited to: querying, according to the hash value of the acquired audio data, the hash values of the locally stored audio data; if the hash value of the acquired audio data exists among the hash values of the locally stored audio data, determining that the audio data is already stored locally; otherwise, determining that the audio data is not stored locally.
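The hash-based duplicate check might look like the following sketch; MD5 is an illustrative choice of hash (collision resistance, not security, is what matters here), and the embodiment does not mandate a particular hash function.

```python
import hashlib

def is_stored_locally(audio_bytes, stored_hashes):
    """Check whether acquired audio data is already stored locally by
    looking its hash value up among the hashes of the stored items."""
    return hashlib.md5(audio_bytes).hexdigest() in stored_hashes

local_hashes = {hashlib.md5(b"song-a-pcm").hexdigest()}
assert is_stored_locally(b"song-a-pcm", local_hashes)       # duplicate: go to 4035-4037
assert not is_stored_locally(b"song-b-pcm", local_hashes)   # new: go to 4033 and 4034
```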
4033, the audio data is stored.
When the acquired audio data is not stored locally, the server stores the acquired audio data. The server may store acquired audio data in the form of a list, that is, each audio data item corresponds to one storage entry in the list. Therefore, when storing the audio data, the server may do so by adding a new entry at the end of the list.
In addition, since in this embodiment of the present invention the audio data acquired by the server from different audio data servers is of different types, to facilitate distinguishing the different types of audio data the server may, when storing the acquired audio data, allocate a different storage space to each type of audio data. For example, a first storage space is allocated to the first-class audio data, a second storage space is allocated to the second-class audio data, and a third storage space is allocated to the third-class audio data. In this case, after acquiring audio data, the server may store the audio data according to its type. For example, if the acquired audio data is first-class audio data, the server stores the audio data in the pre-allocated first storage space.
When storing audio data according to its type, since different types of audio data originate from different audio data servers, the server may first determine the type of the audio data according to the source of the audio data. For example, if the audio data originates from the first audio data server, it is determined that the audio data is first-class audio data.
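A sketch of storing by type, with the type determined from the source server as described above. The source names, the mapping, and the use of plain lists as the per-type storage spaces are assumptions made for illustration.

```python
# One storage space (here, a list) per audio-data type; the type is
# inferred from the source server the audio data came from.
SOURCE_TO_TYPE = {"server1": "first", "server2": "second", "server3": "third"}
storage = {"first": [], "second": [], "third": []}

def store(audio, source):
    """Append a new entry at the end of the list for the type implied
    by the audio data's source server, and return that type."""
    kind = SOURCE_TO_TYPE[source]
    storage[kind].append(audio)
    return kind

assert store({"title": "Song B"}, "server1") == "first"
assert store({"title": "Song C"}, "server3") == "third"
assert len(storage["first"]) == 1 and len(storage["third"]) == 1
```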
4034, the server generates, for the audio data, an audio identifier for uniquely identifying the audio data.
The embodiment of the present invention does not specifically limit the manner in which the server generates the audio identifier for the audio data, as long as it is ensured that the identifier uniquely distinguishes the audio data from other audio data.
The above steps 4031 to 4034 describe the case where the audio data acquired by the server is not stored locally on the server. Further, when any audio data item among the acquired audio data is already stored locally on the server, the server processes it as described below. For ease of description, the following takes as an example the case where the audio data acquired by the server includes the first audio data and the first audio data is already stored locally on the server.
Specifically, since different types of audio data differ in sound quality, when the audio data acquired by the server is of different types, the server's processing differs, as in steps 4035 to 4037 below.
4035, when the audio data acquired by the server is first-class audio data, if the server determines, according to the audio information of the first audio data, that the acquired first-class audio data includes the first audio data, the server replaces the locally stored first audio data with the first audio data in the acquired first-class audio data.
Since in this embodiment of the present invention the first audio data server processes and obtains first-class audio data whose sound quality is higher than the specified sound quality standard, that is, the first-class audio data processed by the first audio data server is of the highest quality, the server, in order to continuously optimize the sound quality of the locally stored audio data, replaces the locally stored first audio data with the first audio data in the acquired first-class audio data.
4036, when the audio data acquired by the server is second-class audio data, if the server determines, according to the audio information of the first audio data, that the acquired second-class audio data includes the first audio data, the server further judges whether the locally stored first audio data originates from the third audio data server; if the locally stored first audio data originates from the third audio data server, the server replaces the locally stored first audio data with the first audio data in the acquired second-class audio data; if the locally stored first audio data does not originate from the third audio data server, the server ignores the first audio data in the acquired second-class audio data.
Specifically, since the sound quality of the second-class audio data is higher than that of the third-class audio data, and the third-class audio data is processed and obtained by the third audio data server, that is, the third-class audio data originates from the third audio data server, if the first audio data stored locally on the server originates from the third audio data server, then, in order to optimize the sound quality of the stored first audio data, the server replaces the locally stored first audio data with the first audio data in the acquired second-class audio data. If the first audio data stored locally on the server does not originate from the third audio data server, then, to avoid storing the first audio data repeatedly, the server ignores the first audio data in the acquired second-class audio data.
For example, if the first audio data acquired by the server is the audio data of song A acquired from the second audio data server, and the audio data of song A stored locally on the server originates from the third audio data server, then, since the audio data of song A acquired from the second audio data server is second-class audio data whose sound quality is higher than that of the audio data acquired from the third audio data server, the server replaces the locally stored audio data of song A with the acquired audio data of song A.
When replacing the locally stored first audio data with the first audio data in the acquired second-class audio data, the server may overwrite the locally stored first audio data with the first audio data in the acquired second-class audio data. When ignoring the first audio data in the acquired second-class audio data, the server may directly delete the first audio data in the acquired second-class audio data.
If the first audio data is already stored locally, then by ignoring the first audio data in the acquired second-class audio data, the server avoids occupying unnecessary storage space by storing the first audio data repeatedly, thereby optimizing the server's storage space.
4037, when the audio data acquired by the server is third-class audio data, if the server determines, according to the audio information of the first audio data, that the acquired third-class audio data includes the first audio data, the server ignores the first audio data in the acquired third-class audio data.
If the first audio data is already stored locally, then by ignoring the first audio data in the acquired third-class audio data, the server avoids occupying unnecessary storage space by storing the first audio data repeatedly, thereby optimizing the server's storage space. When ignoring the first audio data in the acquired third-class audio data, the server may also directly delete the first audio data in the acquired third-class audio data.
It should be noted that the numbers of the above steps 4035 to 4037 are only used to distinguish the server's different handling of the acquired first audio data, when the first audio data is already stored locally on the server, depending on which audio data server the acquired audio data comes from; the step numbers are not used to limit the order of the steps.
In addition, after acquiring the audio data from the audio data servers, the server may also periodically update the audio information of each audio data item. For example, if the audio information includes a lyrics flag, the server may, at intervals of a fourth preset duration, update the lyrics flag of each audio data item, so as to identify whether the locally stored audio data is bound with lyrics. By identifying whether the locally stored audio data is bound with lyrics, the server can, when returning the first audio data to the first client, carry information identifying whether the first audio data is bound with lyrics. In this case, if the user of the first client needs first audio data bound with lyrics, and the lyrics flag of the first audio data returned by the server indicates that the first audio data is not bound with lyrics, the first client may decline to select the first audio data, so that the first client does not waste data traffic by downloading first audio data that does not meet expectations, thereby saving the first client's traffic and giving the user a good usage experience.
Second manner: the server receives fourth-class audio data sent by the second client and takes the received fourth-class audio data as the acquired audio data, wherein the fourth-class audio data is obtained by the second client through searching a third-party server.
Specifically, the fourth-class audio data is produced by the users of other clients and then uploaded to the third-party server. After searching out the fourth-class audio data from the third-party server, the second client may submit the fourth-class audio data to the server.
Since the fourth-class audio data is produced by the users of the various clients, its sound quality is unstable: it may be very good, or it may be very bad. In addition, when producing fourth-class audio data, the users of the clients may also add personal elements to it according to their preferences, for example adding DJ (Disc Jockey) elements to the fourth-class audio data. Further, since the fourth-class audio data may be produced by many different users on the corresponding clients, the fourth-class audio data may cover a rich variety of songs. For example, if the first-class audio data, the second-class audio data, and the third-class audio data do not include the audio data of song C, a certain client may produce an audio data item for song C, and the audio data of song C produced by that client belongs to the fourth-class audio data.
In conjunction with above-mentioned server from audio data server get audio data after processing mode, server receives the After four class audio frequency data, it is also desirable to according to the audio-frequency information for each audio data that the 4th class audio frequency data include, judge local Whether audio data that fourth class audio frequency data include has been stored with;If server local is not stored the 4th class audio frequency Any one audio data for including in data, then store all audio datas that the 4th class audio frequency data include, and for this All audio datas that four class audio frequency data include generate unique audio identification.The principle of the process and step 4031 to step 4034 principle is consistent, and for details, reference can be made to the contents of above-mentioned steps 4031 to step 4034, and details are not described herein.
Further, suppose the server determines that an audio data item included in the received fourth-class audio data is already stored locally, for example, suppose the server judges according to the audio information of the first audio data that the first audio data is already stored locally. Because different clients producing audio data for the same song may produce audio data whose sound quality and/or audio information differ, the server further compares the sound quality and the audio-information content of the first audio data in the received fourth-class audio data against those of the locally stored first audio data. When the server determines that the sound quality of the first audio data in the received fourth-class audio data is better than that of the locally stored first audio data, it updates the locally stored first audio data with the first audio data in the received fourth-class audio data.
For example, for song C, if the sound quality of the locally stored audio data of song C is lower than that of the audio data of song C in the received fourth-class audio data, the server updates the locally stored audio data of song C with the audio data of song C in the received fourth-class audio data.
In addition, if the server determines that the audio information of the first audio data in the received fourth-class audio data contains more content than the audio information of the locally stored first audio data, the server can update the audio information of the locally stored first audio data with the audio information of the first audio data in the received fourth-class audio data.
For example, if the first audio data in the fourth-class audio data is the audio data of song C, the received audio information of song C includes the lyrics of song C, and the locally stored audio information of song C does not, the server can add the lyrics of song C to the locally stored audio information of song C, thereby updating the locally stored audio information of song C.
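The two update rules above — replace the audio when the received copy sounds better, and fill in audio-information fields (such as lyrics) the stored copy lacks — can be sketched together. The record layout (`quality`, `data`, `info` fields) is an assumption for illustration; the patent does not specify how quality is scored.

```python
def merge_item(stored, received):
    """Merge a received fourth-class item into the locally stored one:
    keep the better-quality audio, and copy over any audio-information
    fields (e.g. lyrics) that the stored copy is missing."""
    if received["quality"] > stored["quality"]:
        stored["data"] = received["data"]
        stored["quality"] = received["quality"]
    for field, value in received["info"].items():
        if field not in stored["info"]:  # e.g. lyrics absent locally
            stored["info"][field] = value
    return stored
```

Note the merge never discards locally stored information; it only adds missing fields and upgrades audio, matching the "continuously optimize" behavior the text describes.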
In summary, by receiving the fourth-class audio data from the second client and using the first audio data in the fourth-class audio data to update the locally stored first audio data, or its audio information, or both at once, the server bases its updates on the sound quality and audio information of the audio data submitted by the second client. The locally stored first audio data can thus be continuously optimized, ensuring that when the first client requests the first audio data from the server, the server can provide it with the best available audio data.
404. After the first client receives the first audio data and the first identifier returned by the server, it generates second audio data from the third audio data recorded from the user, the first audio data, and the first identifier, and sends the second audio data to the server.
After receiving the first audio data and the first identifier returned by the server, the first client can display the first audio data and the first identifier. When displaying the first audio data, the first client may present it to the user in the Protobuf data transmission format.
Specifically, the user of the first client can sing the song corresponding to the first audio data according to the first audio data; the first client records the third audio data generated by the user according to the first audio data, and synthesizes the third audio data, the first audio data, and the first identifier into the second audio data. Further, to make it easier for the server to manage the generated second audio data, the first client sends the second audio data to the server.
For example, if the first audio data is the accompaniment data of song A, the user of the first client can sing song A according to the accompaniment data. Because the accompaniment data corresponds to the first identifier, second audio data comprising the accompaniment data, the third audio data, and the first identifier can be obtained.
405. After the server receives the second audio data sent by the first client, it stores the second audio data in correspondence with the first identifier.
Many clients may select the first audio data provided by the server for singing. When users of different clients select the first audio data to sing, each client produces one second audio data item for that first audio data. To make it easier to manage the second audio data produced by the various clients according to the first audio data, the server stores the second audio data in correspondence with the first identifier, i.e., all second audio data produced by clients according to the first audio data is stored under the first identifier.
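The indexing scheme described above — every recording filed under the identifier of the first audio data it was sung to — can be sketched as a one-to-many map. Function and field names here are illustrative assumptions, not the patent's API.

```python
from collections import defaultdict

# Assumed in-memory index: first identifier -> list of second audio data items.
recordings = defaultdict(list)

def store_second_audio(first_id, second_audio):
    """File a client-submitted recording under the identifier of the
    first audio data (e.g. the accompaniment) it was produced from."""
    recordings[first_id].append(second_audio)

def find_covers(first_id):
    """Return every recording that users produced from the same first
    audio data, enabling the karaoke-contest lookup described below."""
    return recordings[first_id]
```

Looking up contest candidates is then a single key access by the first identifier, which is why the text calls identifier-based search more time-saving than scanning all stored recordings.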
On this basis, if a user can obtain the second audio data of other users, the user may be prompted to hold a karaoke contest with another user, increasing both users' enthusiasm for karaoke. Therefore, by storing under the first identifier the second audio data generated by different clients according to the first audio data, each client can later conveniently look up the second audio data generated by other users, triggering karaoke contests between users and further increasing the user base of the karaoke software. Moreover, karaoke contests increase interaction between users: if different users favor similar types of audio data, a user can get to know other users with the same tastes, achieving the goal of making friends through singing and thereby improving the user experience.
Steps 401 to 405 above explain in detail the audio data management method provided by the embodiment of the present invention, taking as an example the case in which the first client obtains the audio data returned by the server as the first audio data. Optionally, in step 401, when sending the audio search request to the server, the first client can also simultaneously send an audio search request to a third-party server, so as to obtain the needed audio data from the third-party server. When the first client sends the audio search request to the server and simultaneously sends an audio search request to the third-party server in step 401, the audio data management method proceeds as follows:
The first client sends an audio search request to the third-party server. After receiving the audio search request, the third-party server searches its audio database according to the audio keyword in the request and obtains audio data satisfying the request. Because the third-party server may obtain multiple audio data items satisfying the search conditions according to the audio search request, in the embodiment of the present invention the audio data found by the third-party server that satisfies the audio search request is defined as at least one fourth audio data. The way the first client sends the audio search request to the third-party server and the way the third-party server obtains the at least one fourth audio data may refer to the content of step 401 above and are not repeated here.
After the first client sends audio search requests to the server and the third-party server simultaneously, both can respond to the first client's audio search request: the server returns the first audio data to the first client, and the third-party server returns the at least one fourth audio data to the first client. At this point, the first client may select the first audio data returned by the server, or may select one audio data item from the at least one fourth audio data returned by the third-party server as the first audio data. On this basis, the first client can obtain the first audio data and the first identifier in several ways:
First case: when the first client detects that the user selects the first audio data returned by the server as the obtained first audio data, the first identifier returned by the server together with the first audio data serves as the obtained first identifier.
Second case: after detecting that the user selects one audio data item from the at least one fourth audio data, the first client takes the selected audio data as the obtained first audio data. Further, to obtain the first identifier, the first client submits the selected first audio data to the server; after the server generates a first identifier for the selected first audio data, it returns the first identifier to the first client; the first client receives the first identifier generated by the server for the selected first audio data and takes it as the obtained first identifier.
When the first client detects that the user has selected one audio data item from the at least one fourth audio data as the first audio data, submitting the selected audio data to the server not only helps the server expand the content of its audio database, but also ensures that the server returns a first identifier for uniquely identifying the first audio data.
When returning the first identifier, if the server has already stored the first audio data locally, it can return the first identifier directly; if the server has not stored the first audio data, it first generates a first identifier for the first audio data and then returns the first identifier.
Third case: the first client receives both the first audio data returned by the server and the at least one fourth audio data returned by the third-party server. In this case, when the first client detects that the user selects the first audio data returned by the server, the first identifier returned together with the first audio data serves as the obtained first identifier; when the first client detects that the user selects one audio data item from the at least one fourth audio data as the first audio data, the first client submits the selected first audio data to the server, the server generates a first identifier for the selected first audio data, and the first client receives that first identifier and takes the received first identifier as the obtained first identifier.
Optionally, when the first client receives both the first audio data returned by the server and the at least one fourth audio data returned by the third-party server, the two sets may contain identical audio data. To avoid displaying duplicate audio data, the first client can further determine, according to the audio information of the first audio data and the audio information of the at least one fourth audio data, whether the first audio data is present in the at least one fourth audio data. If the first audio data is present in the at least one fourth audio data, the first client can delete the server-returned first audio data from the at least one fourth audio data to obtain updated fourth audio data, and display the server-returned first audio data together with the updated at least one fourth audio data.
When determining, according to the audio information of the first audio data and the audio information of the at least one fourth audio data, whether the first audio data is present in the at least one fourth audio data, the first client can compare the hash values of the first audio data and of the at least one fourth audio data. If the hash value of some item in the at least one fourth audio data equals the hash value of the first audio data, the first client determines that the first audio data is present in the at least one fourth audio data; if no hash value in the at least one fourth audio data equals the hash value of the first audio data, the first client determines that the first audio data is not present in the at least one fourth audio data.
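The hash-based deduplication just described can be sketched in a few lines. The patent does not fix a particular hash function, so the use of SHA-256 over raw audio bytes here is an assumption.

```python
import hashlib

def audio_hash(audio_bytes):
    """Hash the raw audio bytes to get a comparable fingerprint."""
    return hashlib.sha256(audio_bytes).hexdigest()

def drop_duplicate(first_audio, fourth_audios):
    """Remove from the third-party results any item whose hash matches
    the server-returned first audio data, so duplicates are not shown."""
    target = audio_hash(first_audio)
    return [a for a in fourth_audios if audio_hash(a) != target]
```

The filtered list corresponds to the "updated at least one fourth audio data" that the first client then displays alongside the server's result.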
Further, when displaying the first audio data and the updated at least one fourth audio data, the first client notes that they are audio data obtained from different servers, and that the category of the first audio data returned by the server may differ from that of the updated at least one fourth audio data; e.g., the first audio data returned by the server is second-class audio data while the updated at least one fourth audio data is fourth-class audio data. Therefore, the first client can display the server-returned first audio data and the updated at least one fourth audio data in different regions of the display screen according to their different categories.
For example, the first client can divide the display screen into upper and lower regions, so that when displaying the server-returned first audio data and the updated at least one fourth audio data, it displays the server-returned first audio data in the upper half of the display screen and the updated at least one fourth audio data in the lower half.
By displaying the server-returned first audio data and the updated at least one fourth audio data in different regions of the display screen, the first client lets the user, when selecting audio data, clearly distinguish the audio data returned by the server from the audio data returned by the third-party server. This not only makes the displayed audio data clear at a glance but also offers the user more diverse selection modes, bringing the user a good operating experience.
In the method provided by the embodiment of the present invention, the server returns to the first client the first audio data and the first identifier of the first audio data, ensuring that after the first client submits the second audio data to the server, the server can store the first identifier in correspondence with the second audio data. Because the server manages audio data with the first identifier as a reference, the management of audio data is more orderly. In addition, when a user wants to search for the second audio data generated by other users from the same first audio data in order to hold a karaoke contest, the server can search by the first identifier, making the search process more time-saving.
Fig. 5 is a structural schematic diagram of a server provided according to an exemplary embodiment. The server is configured to perform the functions performed by the server in any of the embodiments corresponding to Fig. 2 to Fig. 4 above. Referring to Fig. 5, the server includes:
a receiving module 501, configured to receive an audio search request sent by a first client, wherein the audio search request carries an audio keyword;
a first obtaining module 502, configured to obtain, according to the audio keyword, first audio data and a first identifier for uniquely identifying the first audio data, wherein audio information of the first audio data includes the audio keyword;
a sending module 503, configured to send the first audio data and the first identifier to the first client;
a first storage module 504, configured to, when second audio data sent by the first client is received, store the second audio data in correspondence with the first identifier, wherein the second audio data is generated by the first client from third audio data recorded from the user, the first audio data, and the first identifier.
The server provided by the embodiment of the present invention returns to the first client the first audio data and the first identifier of the first audio data, ensuring that after the first client submits second audio data, the server can store the first identifier in correspondence with the second audio data. Because the server manages audio data with the first identifier as a reference, audio data management is more orderly. In addition, when a user wants to search for the second audio data generated by other users from the same first audio data in order to hold a karaoke contest, the search can be performed by the first identifier, making the search process more time-saving.
In another embodiment, the server further includes:
a second obtaining module, configured to obtain audio data;
a judgment module, configured to judge, according to the audio information of the audio data, whether the audio data is stored locally;
a second storage module, configured to store the audio data when it is not stored locally, the audio data including the first audio data;
a generation module, configured to generate, for the audio data, an audio identifier for uniquely identifying the audio data.
In another embodiment, the second obtaining module includes:
a first obtaining unit, configured to obtain first-class audio data from a first audio data server every first preset duration, wherein the first-class audio data is audio data whose sound quality is higher than a specified sound quality standard;
a second obtaining unit, configured to obtain second-class audio data from a second audio data server every second preset duration, wherein the second-class audio data is audio data whose sound quality is lower than the specified sound quality standard;
a third obtaining unit, configured to obtain third-class audio data from a third audio data server every third preset duration, wherein the third-class audio data is audio data obtained by performing noise reduction on original audio, and the sound quality of the third-class audio data is lower than that of the second-class audio data;
a receiving unit, configured to receive fourth-class audio data sent by a second client, wherein the fourth-class audio data is obtained by the second client through searching a third-party server.
In another embodiment, the second obtaining module further includes:
a first replacement unit, configured to, after it is determined according to the audio information of the first audio data that the first-class audio data includes the first audio data, replace the locally stored first audio data with the first audio data in the first-class audio data;
a judging unit, configured to, after it is determined according to the audio information of the first audio data that the second-class audio data includes the first audio data, judge whether the locally stored first audio data originates from the third audio data server;
a second replacement unit, configured to, when the locally stored first audio data originates from the third audio data server, replace the locally stored first audio data with the first audio data in the second-class audio data;
a first ignoring unit, configured to ignore the first audio data in the second-class audio data when the locally stored first audio data does not originate from the third audio data server;
a second ignoring unit, configured to, after it is determined according to the audio information of the first audio data that the first audio data is stored locally, ignore the first audio data in the obtained third-class audio data;
a first updating unit, configured to, after it is determined according to the audio information of the first audio data that the first audio data is stored locally, update the locally stored first audio data when the sound quality of the first audio data in the fourth-class audio data is determined to be better than that of the locally stored first audio data;
a second updating unit, configured to update the audio information of the locally stored first audio data after it is determined that the audio information of the first audio data in the fourth-class audio data contains more content than that of the locally stored first audio data.
In another embodiment, the first audio data is accompaniment data.
All the above optional solutions can be combined in any manner to form optional embodiments of the present invention, which are not described here one by one.
Fig. 6 is a structural schematic diagram of a server according to an exemplary embodiment. Referring to Fig. 6, the server 600 includes a processing component 622, which further includes one or more processors, and memory resources represented by a memory 632 for storing instructions executable by the processing component 622, such as an application program. The application program stored in the memory 632 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 622 is configured to execute the instructions so as to perform the audio data management method provided by any of the embodiments corresponding to Fig. 2 to Fig. 4 above.
The server 600 may also include a power supply component 626 configured to perform power management of the server 600, a wired or wireless network interface 650 configured to connect the server 600 to a network, and an input/output (I/O) interface 658. The server 600 can operate based on an operating system stored in the memory 632, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
One or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing the following operations:
receiving an audio search request sent by a first client, wherein the audio search request carries an audio keyword;
obtaining, according to the audio keyword, first audio data and a first identifier for uniquely identifying the first audio data, wherein audio information of the first audio data includes the audio keyword;
sending the first audio data and the first identifier to the first client;
when second audio data sent by the first client is received, storing the second audio data in correspondence with the first identifier, wherein the second audio data is generated by the first client from third audio data recorded from the user, the first audio data, and the first identifier.
Assuming the above is a first possible implementation, in a second possible implementation provided on the basis of the first possible implementation, the memory of the server further includes instructions for performing the following operations:
before sending the first audio data and the first identifier to the first client, the method further includes:
obtaining audio data;
judging, according to the audio information of the audio data, whether the audio data is stored locally;
if the audio data is not stored locally, storing the audio data, the audio data including the first audio data;
generating, for the audio data, an audio identifier for uniquely identifying the audio data.
In a third possible implementation provided on the basis of the second possible implementation, the memory of the server further includes instructions for performing the following operations:
obtaining audio data includes:
obtaining first-class audio data from a first audio data server every first preset duration, wherein the first-class audio data is audio data whose sound quality is higher than a specified sound quality standard;
obtaining second-class audio data from a second audio data server every second preset duration, wherein the second-class audio data is audio data whose sound quality is lower than the specified sound quality standard;
obtaining third-class audio data from a third audio data server every third preset duration, wherein the third-class audio data is audio data obtained by performing noise reduction on original audio, and the sound quality of the third-class audio data is lower than that of the second-class audio data;
receiving fourth-class audio data sent by a second client, wherein the fourth-class audio data is obtained by the second client through searching a third-party server.
In a fourth possible implementation provided on the basis of the third possible implementation, the memory of the server further includes instructions for performing the following operations:
after obtaining the first-class audio data from the first audio data server every first preset duration, the method further includes:
after it is determined according to the audio information of the first audio data that the first-class audio data includes the first audio data, replacing the locally stored first audio data with the first audio data in the first-class audio data;
after obtaining the second-class audio data from the second audio data server every second preset duration, the method further includes:
after it is determined according to the audio information of the first audio data that the second-class audio data includes the first audio data, judging whether the locally stored first audio data originates from the third audio data server;
if the locally stored first audio data originates from the third audio data server, replacing the locally stored first audio data with the first audio data in the second-class audio data;
if the locally stored first audio data does not originate from the third audio data server, ignoring the first audio data in the second-class audio data;
after obtaining the third-class audio data from the third audio data server every third preset duration, the method further includes:
after it is determined according to the audio information of the first audio data that the first audio data is stored locally, ignoring the first audio data in the obtained third-class audio data;
after receiving the fourth-class audio data sent by the second client, the method further includes:
after it is determined according to the audio information of the first audio data that the first audio data is stored locally, updating the locally stored first audio data when the sound quality of the first audio data in the fourth-class audio data is determined to be better than that of the locally stored first audio data;
after it is determined that the audio information of the first audio data in the fourth-class audio data contains more content than that of the locally stored first audio data, updating the audio information of the locally stored first audio data.
In a fifth possible implementation provided on the basis of any one of the first to fourth possible implementations, the memory of the server further includes instructions for performing the following operation: the first audio data is accompaniment data.
The server provided by the embodiment of the present invention returns to the first client the first audio data and the first identifier of the first audio data, ensuring that after the first client submits second audio data, the server can store the first identifier in correspondence with the second audio data. Because the server manages audio data with the first identifier as a reference, audio data management is more orderly. In addition, when a user wants to search for the second audio data generated by other users from the same first audio data in order to hold a karaoke contest, the search can be performed by the first identifier, making the search process more time-saving.
Fig. 7 is a structural schematic diagram of a client provided according to an exemplary embodiment. The client is configured to perform the functions performed by the first client in any of the embodiments corresponding to Fig. 2 to Fig. 4 above. Referring to Fig. 7, the client includes:
a first sending module 701, configured to send an audio search request, wherein the audio search request carries an audio keyword;
an obtaining module 702, configured to obtain first audio data and a first identifier for uniquely identifying the first audio data, wherein audio information of the first audio data includes the audio keyword;
a generation module 703, configured to generate second audio data from third audio data recorded from the user, the first audio data, and the first identifier;
a second sending module 704, configured to send the second audio data to a server, so that the server stores the second audio data in correspondence with the first identifier.
The client provided by the embodiment of the present invention receives the first audio data and the first identifier of the first audio data returned by the server, ensuring that when the user sings according to the first audio data and second audio data is obtained, the client submits the second audio data to the server so that the server can store the first identifier in correspondence with the second audio data. Because the server manages audio data with the first identifier as a reference, the server's management of audio data is more orderly. In addition, when a user wants to search for the second audio data generated by other users from the same first audio data in order to hold a karaoke contest, the server searches by the first identifier, making the search process more time-saving.
In another embodiment, the first sending module 701 is configured to send audio search requests to the server and a third-party server respectively;
the obtaining module 702 is configured to:
receive the first audio data returned by the server and the first identifier generated by the server for the first audio data;
or, receive at least one fourth audio data returned by the third-party server; after detecting that the user selects the first audio data from the at least one fourth audio data, submit the selected first audio data to the server so that the server generates a first identifier for the first audio data; and receive the first identifier generated by the server for the first audio data;
or, receive the first audio data returned by the server and the first identifier generated by the server for the first audio data, and receive at least one fourth audio data returned by the third-party server; when it is detected that the user selects the first audio data returned by the server, take the first identifier returned by the server together with the first audio data as the obtained first identifier; when it is detected that the user selects one audio data item from the at least one fourth audio data as the first audio data, submit the selected first audio data to the server, and after the server generates a first identifier for the selected first audio data, receive the first identifier generated by the server for the selected first audio data and take the received first identifier as the obtained first identifier.
In another embodiment, the client further includes:

a determining module, configured to determine, according to the audio information of the first audio data returned by the server and the audio information of the at least one fourth audio data, whether the first audio data returned by the server is present in the at least one fourth audio data;

a removing module, configured to, when the first audio data returned by the server is present in the at least one fourth audio data, delete the first audio data returned by the server from the at least one fourth audio data to obtain updated at least one fourth audio data; and

a display module, configured to display the first audio data and the updated at least one fourth audio data.

In another embodiment, the display module is configured to display the first audio data and the updated at least one fourth audio data in different regions of a display screen according to their different categories.
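The determining, removing, and display modules described above can be sketched together. This is an illustrative assumption: the patent does not say how "audio information" is matched, so the example assumes a (title, artist) key, and models the two display-screen regions as two lists.

```python
def dedup_and_group(server_result, third_party_results):
    """Drop third-party duplicates of the server's result, then group
    both categories for display in separate screen regions."""
    key = (server_result["title"], server_result["artist"])     # determining module
    updated = [r for r in third_party_results
               if (r["title"], r["artist"]) != key]             # removing module
    return {"server": [server_result],                          # one region each
            "third_party": updated}                             # display module
```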
In another embodiment, the first audio data is accompaniment data.
All of the above optional solutions may be combined in any manner to form optional embodiments of the present invention, which are not described here one by one.
Referring to FIG. 8, which illustrates a schematic structural diagram of a terminal involved in an embodiment of the present invention, the terminal may include the first client, and the first client may be used to implement the audio data management method provided by any of the embodiments corresponding to FIG. 2 to FIG. 4 above. Specifically:

The terminal 800 may include an RF (Radio Frequency) circuit 110, a memory 120 including one or more computer-readable storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a WiFi (Wireless Fidelity) module 170, a processor 180 including one or more processing cores, a power supply 190, and other components. Those skilled in the art will understand that the terminal structure shown in FIG. 8 does not constitute a limitation on the terminal, which may include more or fewer components than illustrated, combine certain components, or use a different component arrangement. Wherein:
The RF circuit 110 may be used to receive and send signals during messaging or a call; in particular, after receiving downlink information from a base station, it delivers the information to the one or more processors 180 for processing, and it sends uplink data to the base station. In general, the RF circuit 110 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like. In addition, the RF circuit 110 may also communicate with networks and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (Short Messaging Service), and the like.
The memory 120 may be used to store software programs and modules, and the processor 180 executes various functional applications and data processing by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function or an image playback function), and the like; the data storage area may store data created according to the use of the terminal 800 (such as audio data, a phone book, and the like). In addition, the memory 120 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device. Accordingly, the memory 120 may also include a memory controller to provide the processor 180 and the input unit 130 with access to the memory 120.
The input unit 130 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal input related to user settings and function control. Specifically, the input unit 130 may include a touch-sensitive surface 131 and other input devices 132. The touch-sensitive surface 131, also referred to as a touch display screen or touch pad, collects touch operations by the user on or near it (such as operations by the user on or near the touch-sensitive surface 131 using a finger, a stylus, or any other suitable object or accessory) and drives a corresponding connection apparatus according to a preset program. Optionally, the touch-sensitive surface 131 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 180, and can receive and execute commands sent by the processor 180. Furthermore, the touch-sensitive surface 131 may be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave types. In addition to the touch-sensitive surface 131, the input unit 130 may also include other input devices 132. Specifically, the other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as a volume control key or a power key), a trackball, a mouse, a joystick, and the like.
The display unit 140 may be used to display information input by the user or information provided to the user, as well as various graphical user interfaces of the terminal 800; these graphical user interfaces may be composed of graphics, text, icons, video, and any combination thereof. The display unit 140 may include a display panel 141, which may optionally be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 131 may cover the display panel 141; after the touch-sensitive surface 131 detects a touch operation on or near it, the operation is transmitted to the processor 180 to determine the type of the touch event, and the processor 180 then provides a corresponding visual output on the display panel 141 according to the type of the touch event. Although in FIG. 8 the touch-sensitive surface 131 and the display panel 141 implement the input and output functions as two independent components, in some embodiments the touch-sensitive surface 131 and the display panel 141 may be integrated to implement the input and output functions.
The terminal 800 may also include at least one sensor 150, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 141 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 141 and/or the backlight when the terminal 800 is moved close to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the posture of the terminal (such as landscape/portrait switching, related games, and magnetometer posture calibration) and for vibration-recognition-related functions (such as a pedometer and tapping). As for other sensors that may also be configured in the terminal 800, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, details are not described here.
The audio circuit 160, a loudspeaker 161, and a microphone 162 may provide an audio interface between the user and the terminal 800. The audio circuit 160 may transmit the electrical signal converted from received audio data to the loudspeaker 161, which converts it into a sound signal for output; on the other hand, the microphone 162 converts a collected sound signal into an electrical signal, which is received by the audio circuit 160 and converted into audio data. After the audio data is output to the processor 180 for processing, it is sent, for example, to another terminal via the RF circuit 110, or the audio data is output to the memory 120 for further processing. The audio circuit 160 may also include an earphone jack to provide communication between a peripheral earphone and the terminal 800.
WiFi is a short-range wireless transmission technology. Through the WiFi module 170, the terminal 800 can help the user send and receive e-mail, browse web pages, access streaming media, and the like; it provides the user with wireless broadband Internet access. Although FIG. 8 shows the WiFi module 170, it can be understood that the module is not a necessary component of the terminal 800 and may be omitted as needed without changing the essence of the invention.
The processor 180 is the control center of the terminal 800. It connects all parts of the entire terminal using various interfaces and lines, and executes the various functions of the terminal 800 and processes data by running or executing the software programs and/or modules stored in the memory 120 and calling the data stored in the memory 120, thereby monitoring the terminal as a whole. Optionally, the processor 180 may include one or more processing cores; preferably, the processor 180 may integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 180.
The terminal 800 further includes the power supply 190 (such as a battery) that supplies power to all the components. Preferably, the power supply may be logically connected to the processor 180 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system. The power supply 190 may also include any components such as one or more direct-current or alternating-current power sources, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
Although not shown, the terminal 800 may also include a camera, a Bluetooth module, and the like, and details are not described here. Specifically, in this embodiment, the display unit of the terminal is a touch-screen display, and the terminal further includes a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors. The one or more programs include instructions for performing the following operations:
sending an audio search request, wherein the audio search request carries an audio keyword;

obtaining first audio data and a first identifier for uniquely identifying the first audio data, wherein the audio information of the first audio data includes the audio keyword;

generating second audio data according to third audio data recorded by the user, the first audio data, and the first identifier;

sending the second audio data to the server, so that the server correspondingly stores the second audio data according to the first identifier.
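The four operations listed above can be sketched end to end. This is a hedged illustration only: `StubServer`, the `search`/`store` calls, and the way the recording is "mixed" (here, simply paired with the accompaniment and tagged with its identifier) are assumptions, not the patent's implementation.

```python
class StubServer:
    """Toy server: answers searches and files second audio data by identifier."""
    def __init__(self):
        self.by_id = {}

    def search(self, keyword):
        # Return (first audio data, first identifier) for the keyword.
        return "accomp:" + keyword, "id:" + keyword

    def store(self, first_id, second_audio):
        self.by_id.setdefault(first_id, []).append(second_audio)


def karaoke_flow(server, keyword, record):
    first_audio, first_id = server.search(keyword)   # operations 1-2
    third_audio = record()                           # the user's recording
    second_audio = {"id": first_id,                  # operation 3: generate
                    "mix": (third_audio, first_audio)}
    server.store(first_id, second_audio)             # operation 4: upload
    return second_audio
```

Because the second audio data carries the first identifier, every upload derived from the same accompaniment lands in the same bucket on the server, which is what makes the later identifier-based search fast.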
Assuming that the above is a first possible embodiment, in a second possible embodiment provided on the basis of the first possible embodiment, the memory of the terminal also includes instructions for performing the following operations. Sending an audio search request includes:

sending audio search requests to the server and to a third-party server, respectively.

Obtaining first audio data and a first identifier for uniquely identifying the first audio data includes:

receiving the first audio data returned by the server and the first identifier generated by the server for the first audio data;

or, receiving at least one fourth audio data returned by the third-party server; after detecting that the user has selected the first audio data from the at least one fourth audio data, submitting the selected first audio data to the server so that the server generates a first identifier for the first audio data; and receiving the first identifier generated by the server for the first audio data;

or, receiving the first audio data returned by the server and the first identifier generated by the server for the first audio data, and receiving at least one fourth audio data returned by the third-party server; after detecting that the user has selected the first audio data returned by the server, using the first identifier returned with that first audio data as the obtained first identifier; after detecting that the user has selected one of the at least one fourth audio data as the first audio data, submitting the selected first audio data to the server, and after the server generates a first identifier for the selected first audio data, receiving that first identifier and using it as the obtained first identifier.
In a third possible embodiment provided on the basis of the second possible embodiment, the memory of the terminal also includes instructions for performing the following operations. After receiving the at least one fourth audio data returned by the third-party server, the method further includes:

determining, according to the audio information of the first audio data returned by the server and the audio information of the at least one fourth audio data, whether the first audio data returned by the server is present in the at least one fourth audio data;

if the first audio data returned by the server is present in the at least one fourth audio data, deleting the first audio data returned by the server from the at least one fourth audio data to obtain updated at least one fourth audio data;

displaying the first audio data and the updated at least one fourth audio data.
In a fourth possible embodiment provided on the basis of the third possible embodiment, the memory of the terminal also includes instructions for performing the following operations. Displaying the first audio data and the updated at least one fourth audio data includes:

displaying the first audio data and the updated at least one fourth audio data in different regions of a display screen according to their different categories.
In a fifth possible embodiment provided on the basis of any one of the first to fourth possible embodiments, the memory of the terminal also includes instructions for performing the following operation: the first audio data is accompaniment data.
With the terminal provided by the embodiment of the present invention, by receiving the first audio data returned by the server and the first identifier of the first audio data, it is ensured that when the user sings along with the first audio data and obtains second audio data, the second audio data is submitted to the server, so that the server can store the first identifier in correspondence with the second audio data. Since the server manages audio data with the first identifier as a reference, the server's management of audio data is more orderly. In addition, when the user wants to search for second audio data generated by other users from the same first audio data, for example for a karaoke comparison, the server searches according to the first identifier, which makes the search process more time-saving.
It should be noted that when the server and the client provided by the above embodiments execute the audio data management method, the division into the above functional modules is merely used as an example. In practical applications, the above functions may be allocated to different functional modules as required; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the server and client provided by the above embodiments belong to the same concept as the audio data management method embodiments; for the specific implementation process, refer to the method embodiments, and details are not described here.
Those of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (15)

1. An audio data management method, characterized in that the method includes:
obtaining audio data;
judging, according to audio information of the audio data, whether the audio data is stored locally;
if the audio data is not stored locally, storing the audio data, the audio data including first audio data;
generating, for the audio data, an audio identifier for uniquely identifying the audio data;
receiving an audio search request sent by a first client, the audio search request carrying an audio keyword;
obtaining, according to the audio keyword, the first audio data and a first identifier for uniquely identifying the first audio data, the first audio data being accompaniment data, and the audio information of the first audio data including the audio keyword;
sending the first audio data and the first identifier to the first client; and
when second audio data sent by the first client is received, correspondingly storing the second audio data according to the first identifier, the second audio data being generated by the first client according to third audio data recorded by a user, the first audio data, and the first identifier, wherein the second audio data generated by each client according to the first audio data is stored under the first identifier.
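The server-side method recited in claim 1 can be sketched as a small in-memory model. Every name here (the audio-info key, the `uuid`-based identifier, the dictionary storage) is an illustrative assumption; the claim does not specify any of these details.

```python
import uuid

class AudioServer:
    """Toy model of claim 1: dedup on ingest, identifier generation,
    keyword search, and grouping of second audio data by first identifier."""
    def __init__(self):
        self.audio = {}     # audio-info key -> (audio identifier, audio data)
        self.seconds = {}   # first identifier -> list of second audio data

    def ingest(self, info_key, data):
        # Store only if not already stored locally (judged by audio info),
        # and generate a unique audio identifier for the stored data.
        if info_key not in self.audio:
            self.audio[info_key] = (uuid.uuid4().hex, data)
        return self.audio[info_key][0]

    def search(self, keyword):
        for info_key, (ident, data) in self.audio.items():
            if keyword in info_key:
                return data, ident   # first audio data + first identifier
        return None

    def store_second(self, first_id, second_audio):
        # Second audio data from every client is kept under the first identifier.
        self.seconds.setdefault(first_id, []).append(second_audio)
```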
2. The method according to claim 1, characterized in that obtaining audio data includes:
obtaining, at intervals of a first preset duration, first-class audio data from a first audio data server, the first-class audio data being audio data whose sound quality is higher than a specified sound quality standard;
obtaining, at intervals of a second preset duration, second-class audio data from a second audio data server, the second-class audio data being audio data whose sound quality is lower than the specified sound quality standard;
obtaining, at intervals of a third preset duration, third-class audio data from a third audio data server, the third-class audio data being audio data obtained by performing noise reduction on original audio, the sound quality of the third-class audio data being lower than the sound quality of the second-class audio data; and
receiving fourth-class audio data sent by a second client, the fourth-class audio data being obtained by the second client by searching a third-party server.
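The three independent polling intervals recited in claim 2 can be sketched with a simple due-check. The interval values and the dictionary-based bookkeeping are assumptions for illustration only; claim 2 leaves the preset durations unspecified.

```python
def due_fetches(now, last_run, intervals):
    """Return the source servers whose preset polling interval has elapsed.

    intervals maps a source-server name to its preset duration in seconds;
    last_run maps a name to the time of its last fetch (missing = never).
    """
    return [name for name, period in intervals.items()
            if now - last_run.get(name, float("-inf")) >= period]
```

A scheduler loop would call this each tick, fetch from every due source, and update `last_run`; the fourth-class audio data needs no polling because the second client pushes it.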
3. The method according to claim 2, characterized in that, after obtaining the first-class audio data from the first audio data server at intervals of the first preset duration, the method further includes:
after determining, according to the audio information of the first audio data, that the first-class audio data includes the first audio data, replacing the locally stored first audio data with the first audio data in the first-class audio data;
after obtaining the second-class audio data from the second audio data server at intervals of the second preset duration, the method further includes:
after determining, according to the audio information of the first audio data, that the second-class audio data includes the first audio data, judging whether the locally stored first audio data originates from the third audio data server;
if the locally stored first audio data originates from the third audio data server, replacing the locally stored first audio data with the first audio data in the second-class audio data;
if the locally stored first audio data does not originate from the third audio data server, ignoring the first audio data in the second-class audio data;
after obtaining the third-class audio data from the third audio data server at intervals of the third preset duration, the method further includes:
after determining, according to the audio information of the first audio data, that the first audio data is stored locally, ignoring the first audio data in the obtained third-class audio data;
after receiving the fourth-class audio data sent by the second client, the method further includes:
after determining, according to the audio information of the first audio data, that the first audio data is stored locally, and after determining that the sound quality of the first audio data in the fourth-class audio data is better than the sound quality of the locally stored first audio data, updating the locally stored first audio data; and
after determining that the content of the audio information of the first audio data in the fourth-class audio data is more than the content of the audio information of the locally stored first audio data, updating the audio information of the locally stored first audio data.
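The per-class replacement policy of claim 3 can be condensed into one decision function. Modeling sound quality as a number, provenance as a string, and "content of the audio information" as dictionary size are all assumptions made purely for illustration.

```python
def apply_incoming(local, incoming, cls):
    """Return the updated local record given an incoming duplicate.

    local / incoming: {"quality": int, "source": str, "info": dict}
    cls: 1..4, the class of the incoming audio data.
    """
    if cls == 1:
        return incoming                       # first class always replaces
    if cls == 2:
        # Replace only a copy that came from the third audio data server;
        # otherwise the local copy already sounds at least as good.
        return incoming if local["source"] == "third" else local
    if cls == 3:
        return local                          # already stored locally: ignore
    # Class 4 (submitted by a second client): keep the better-sounding copy,
    # and adopt the richer audio information either way.
    updated = dict(incoming) if incoming["quality"] > local["quality"] else dict(local)
    if len(incoming["info"]) > len(local["info"]):
        updated["info"] = {**local["info"], **incoming["info"]}
    return updated
```

The asymmetry mirrors the claim's quality ordering (first class > second class > third class): higher-tier sources overwrite unconditionally, lower-tier sources overwrite only provably worse copies, and client submissions are judged case by case.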
4. An audio data management method, characterized in that the method includes:
sending an audio search request, the audio search request carrying an audio keyword;
obtaining first audio data and a first identifier for uniquely identifying the first audio data, the first audio data being accompaniment data, and the audio information of the first audio data including the audio keyword;
generating second audio data according to third audio data recorded by a user, the first audio data, and the first identifier; and
sending the second audio data to a server, so that the server correspondingly stores the second audio data according to the first identifier, wherein the second audio data generated by each client according to the first audio data is stored under the first identifier.
5. The method according to claim 4, characterized in that sending an audio search request includes:
sending audio search requests to the server and to a third-party server, respectively;
and obtaining first audio data and a first identifier for uniquely identifying the first audio data includes:
receiving the first audio data returned by the server and the first identifier generated by the server for the first audio data;
or, receiving at least one fourth audio data returned by the third-party server; after detecting that a user has selected the first audio data from the at least one fourth audio data, submitting the selected first audio data to the server so that the server generates a first identifier for the first audio data; and receiving the first identifier generated by the server for the first audio data;
or, receiving the first audio data returned by the server and the first identifier generated by the server for the first audio data; receiving at least one fourth audio data returned by the third-party server; after detecting that the user has selected the first audio data returned by the server, using the first identifier returned with the first audio data as the obtained first identifier; after detecting that the user has selected one of the at least one fourth audio data as the first audio data, submitting the selected first audio data to the server, and after the server generates a first identifier for the selected first audio data, receiving the first identifier generated by the server for the selected first audio data and using the received first identifier as the obtained first identifier.
6. The method according to claim 5, characterized in that, after receiving the at least one fourth audio data returned by the third-party server, the method further includes:
determining, according to the audio information of the first audio data returned by the server and the audio information of the at least one fourth audio data, whether the first audio data returned by the server is present in the at least one fourth audio data;
if the first audio data returned by the server is present in the at least one fourth audio data, deleting the first audio data returned by the server from the at least one fourth audio data to obtain updated at least one fourth audio data; and
displaying the first audio data and the updated at least one fourth audio data.
7. The method according to claim 6, characterized in that displaying the first audio data and the updated at least one fourth audio data includes:
displaying the first audio data and the updated at least one fourth audio data in different regions of a display screen according to their different categories.
8. A server, characterized in that the server includes:
a second obtaining module, configured to obtain audio data;
a judgment module, configured to judge, according to audio information of the audio data, whether the audio data is stored locally;
a second storage module, configured to store the audio data when the audio data is not stored locally, the audio data including first audio data;
a generation module, configured to generate, for the audio data, an audio identifier for uniquely identifying the audio data;
a receiving module, configured to receive an audio search request sent by a first client, the audio search request carrying an audio keyword;
a first obtaining module, configured to obtain, according to the audio keyword, the first audio data and a first identifier for uniquely identifying the first audio data, the first audio data being accompaniment data, and the audio information of the first audio data including the audio keyword;
a sending module, configured to send the first audio data and the first identifier to the first client; and
a first storage module, configured to, when second audio data sent by the first client is received, correspondingly store the second audio data according to the first identifier, the second audio data being generated by the first client according to third audio data recorded by a user, the first audio data, and the first identifier, wherein the second audio data generated by each client according to the first audio data is stored under the first identifier.
9. The server according to claim 8, characterized in that the second obtaining module includes:
a first obtaining unit, configured to obtain, at intervals of a first preset duration, first-class audio data from a first audio data server, the first-class audio data being audio data whose sound quality is higher than a specified sound quality standard;
a second obtaining unit, configured to obtain, at intervals of a second preset duration, second-class audio data from a second audio data server, the second-class audio data being audio data whose sound quality is lower than the specified sound quality standard;
a third obtaining unit, configured to obtain, at intervals of a third preset duration, third-class audio data from a third audio data server, the third-class audio data being audio data obtained by performing noise reduction on original audio, the sound quality of the third-class audio data being lower than the sound quality of the second-class audio data; and
a receiving unit, configured to receive fourth-class audio data sent by a second client, the fourth-class audio data being obtained by the second client by searching a third-party server.
10. The server according to claim 9, characterized in that the second obtaining module further includes:
a first replacement unit, configured to, after it is determined according to the audio information of the first audio data that the first-class audio data includes the first audio data, replace the locally stored first audio data with the first audio data in the first-class audio data;
a judging unit, configured to, after it is determined according to the audio information of the first audio data that the second-class audio data includes the first audio data, judge whether the locally stored first audio data originates from the third audio data server;
a second replacement unit, configured to, when the locally stored first audio data originates from the third audio data server, replace the locally stored first audio data with the first audio data in the second-class audio data;
a first ignoring unit, configured to, when the locally stored first audio data does not originate from the third audio data server, ignore the first audio data in the second-class audio data;
a second ignoring unit, configured to, after it is determined according to the audio information of the first audio data that the first audio data is stored locally, ignore the first audio data in the obtained third-class audio data;
a first updating unit, configured to, after it is determined according to the audio information of the first audio data that the first audio data is stored locally, and after it is determined that the sound quality of the first audio data in the fourth-class audio data is better than the sound quality of the locally stored first audio data, update the locally stored first audio data; and
a second updating unit, configured to, after it is determined that the content of the audio information of the first audio data in the fourth-class audio data is more than the content of the audio information of the locally stored first audio data, update the audio information of the locally stored first audio data.
11. A client, characterized in that the client includes:
a first sending module, configured to send an audio search request, the audio search request carrying an audio keyword;
an obtaining module, configured to obtain first audio data and a first identifier for uniquely identifying the first audio data, the first audio data being accompaniment data, and the audio information of the first audio data including the audio keyword;
a generation module, configured to generate second audio data according to third audio data recorded by a user, the first audio data, and the first identifier; and
a second sending module, configured to send the second audio data to a server, so that the server correspondingly stores the second audio data according to the first identifier, wherein the second audio data generated by each client according to the first audio data is stored under the first identifier.
12. The client according to claim 11, characterized in that the first sending module is configured to send audio search requests to the server and to a third-party server respectively;
the obtaining module is configured to:
receive the first audio data returned by the server and the first identifier generated by the server for the first audio data;
or, receive at least one fourth audio data returned by the third-party server; after detecting that the user selects the first audio data from the at least one fourth audio data, submit the selected first audio data to the server, so that the server generates a first identifier for the first audio data; and receive the first identifier generated by the server for the first audio data;
or, receive the first audio data returned by the server and the first identifier generated by the server for the first audio data, and receive at least one fourth audio data returned by the third-party server; after detecting that the user selects the first audio data returned by the server, take the first identifier that the server returned with the first audio data as the obtained first identifier; after detecting that the user selects one of the at least one fourth audio data as the first audio data, submit the selected first audio data to the server, and after the server generates a first identifier for the selected first audio data, receive the first identifier generated by the server for the selected first audio data and take the received first identifier as the obtained first identifier.
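The three acquisition alternatives in claim 12 can be sketched as one dispatch function. This is a hedged illustration only: the tuple shapes, the `choose` callback, and the identifier format are assumptions.

```python
# Sketch of claim 12's three flows for obtaining the first audio data
# and its first identifier.
def submit_to_server(audio: str) -> str:
    # Stand-in for the server generating a first identifier for a
    # third-party result submitted by the client.
    return f"id-{hash(audio) & 0xffff}"

def obtain_first_audio(server_hit, third_party_hits, choose):
    """Return (first_audio, first_identifier).

    server_hit: (audio, identifier) from the server, or None.
    third_party_hits: list of audio results from the third-party server.
    choose: callback simulating the user's selection from a list.
    """
    # Alternative 1: only the server answered; it returns both pieces.
    if server_hit and not third_party_hits:
        return server_hit
    # Alternative 2: only third-party results; the chosen one is
    # submitted to the server, which generates the identifier.
    if third_party_hits and not server_hit:
        audio = choose(third_party_hits)
        return audio, submit_to_server(audio)
    # Alternative 3: both answered; the identifier depends on whether
    # the user picks the server's result or a third-party one.
    audio = choose([server_hit[0]] + third_party_hits)
    if audio is server_hit[0]:
        return server_hit
    return audio, submit_to_server(audio)
```

In every branch the client ends up holding a server-issued identifier, which is what claim 11's upload step requires.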
13. The client according to claim 12, characterized in that the client further comprises:
a determining module, configured to determine, according to the audio information of the first audio data returned by the server and the audio information of the at least one fourth audio data, whether the first audio data returned by the server exists in the at least one fourth audio data;
a removing module, configured to: when the first audio data returned by the server exists in the at least one fourth audio data, delete the first audio data returned by the server from the at least one fourth audio data, to obtain updated at least one fourth audio data;
a display module, configured to display the first audio data and the updated at least one fourth audio data.
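The determining and removing modules together amount to de-duplicating the third-party result list against the server's result before display. A minimal sketch, assuming the audio information is matched on a title/artist key (the patent does not specify the matching criterion):

```python
# Sketch of claim 13: if the server's result also appears among the
# third-party results, drop the duplicate so each song is shown once.
def dedupe_results(server_result: dict, third_party: list) -> list:
    key = (server_result["title"], server_result["artist"])
    return [r for r in third_party
            if (r["title"], r["artist"]) != key]

hits = [{"title": "Song A", "artist": "X"},
        {"title": "Song B", "artist": "Y"}]
deduped = dedupe_results({"title": "Song A", "artist": "X"}, hits)
# deduped now contains only the Song B entry
```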
14. The client according to claim 13, characterized in that the display module is configured to display the first audio data and the updated at least one fourth audio data in different regions of a display screen according to the different categories of the first audio data and the updated at least one fourth audio data.
15. A computer-readable storage medium, characterized in that a program is stored in the computer-readable storage medium, and the program is loaded and executed by a processor to implement the audio data management method according to any one of claims 1 to 3, or the audio data management method according to any one of claims 4 to 7.
CN201410808946.0A 2014-12-19 2014-12-19 Audio data management method, server and client Active CN104572882B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410808946.0A CN104572882B (en) 2014-12-19 2014-12-19 Audio data management method, server and client

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410808946.0A CN104572882B (en) 2014-12-19 2014-12-19 Audio data management method, server and client

Publications (2)

Publication Number Publication Date
CN104572882A CN104572882A (en) 2015-04-29
CN104572882B true CN104572882B (en) 2019-03-26

Family

ID=53088944

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410808946.0A Active CN104572882B (en) 2014-12-19 2014-12-19 Audio data management method, server and client

Country Status (1)

Country Link
CN (1) CN104572882B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111309963B (en) * 2020-01-22 2023-07-04 百度在线网络技术(北京)有限公司 Audio file processing method and device, electronic equipment and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101980198A * 2010-11-01 2011-02-23 福州星网视易信息系统有限公司 Method for carrying out karaoke
CN103400592A * 2013-07-30 2013-11-20 北京小米科技有限责任公司 Recording method, playing method, device, terminal and system
CN103902728A * 2014-04-14 2014-07-02 北京君正集成电路股份有限公司 Method and device for storing voice signals of a smart watch
CN104157292A * 2014-08-20 2014-11-19 杭州华为数字技术有限公司 Anti-howling audio signal processing method and device
CN104168433A * 2014-08-28 2014-11-26 广州华多网络科技有限公司 Media content processing method and system


Also Published As

Publication number Publication date
CN104572882A (en) 2015-04-29

Similar Documents

Publication Publication Date Title
CN104112213B Information recommendation method and device
CN104394506B Push-based place updating
CN104850434B Multimedia resource downloading method and device
CN105788612B Method and device for detecting sound quality
CN106778117B Permission opening method, device and system
CN105549740B Method and device for playing audio data
CN104426962B Multi-terminal binding method, binding server, terminal and system
CN106815230A Lyrics page generation method and device
CN107402964A Information recommendation method, server and terminal
CN104063400B Data search method and data search device
CN107204964A Rights management method, device and system
CN105530239B Multimedia data acquisition method and device
CN103631625B Data acquisition method, user terminal, server and system
CN104598542B Multimedia information display method and device
CN105550316B Audio list pushing method and device
CN109067981A Split-screen application switching method and device, storage medium and electronic equipment
CN105739839B Multimedia menu item selection method and device
CN107291326A Icon processing method and terminal
CN109862430A Multimedia playing method and terminal device
CN106844528A Multimedia file acquisition method and device
CN105976849B Method and device for playing audio data
CN104731806B Method and terminal for quickly searching user information in a social network
CN106792014B Audio recommendation method, device and system
CN110196833A Application program search method, device, terminal and storage medium
CN104636455B Application program map information acquisition method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 510660 Guangzhou City, Guangzhou, Guangdong, Whampoa Avenue, No. 315, self - made 1-17

Applicant after: Guangzhou KuGou Networks Co., Ltd.

Address before: 510000 B1, building, No. 16, rhyme Road, Guangzhou, Guangdong, China 13F

Applicant before: Guangzhou KuGou Networks Co., Ltd.

GR01 Patent grant
GR01 Patent grant