CN109033335A - Audio recording method, apparatus, terminal and storage medium - Google Patents
Audio recording method, apparatus, terminal and storage medium
- Publication number
- CN109033335A CN109033335A CN201810804459.5A CN201810804459A CN109033335A CN 109033335 A CN109033335 A CN 109033335A CN 201810804459 A CN201810804459 A CN 201810804459A CN 109033335 A CN109033335 A CN 109033335A
- Authority
- CN
- China
- Prior art keywords
- audio
- frame
- location information
- timestamp
- accompaniment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention discloses an audio recording method, apparatus, terminal, and storage medium, belonging to the field of Internet technology. The method comprises: when a recording instruction is received, playing multiple accompaniment audio frames of a target song and collecting, in real time, audio data of a user singing the target song; storing the audio data collected in real time into a target file; obtaining timestamps of the multiple accompaniment audio frames, and establishing a correspondence between timestamps and position information according to the timestamps of the multiple accompaniment audio frames and the position information of the audio data in the target file; when an adjustment instruction is received, determining the position information of the audio data collected after the adjustment based on the correspondence between timestamps and position information; and storing the adjusted audio data into the target file based on the position information of the audio data collected after the adjustment. The invention requires no file merging, improves recording efficiency, and optimizes the user's recording experience.
Description
Technical field
The present invention relates to the field of Internet technology, and in particular to an audio recording method, apparatus, terminal, and storage medium.
Background art
With the development of Internet technology, many music players not only support online playback of a massive library of songs, but can also provide a karaoke (K song) service for users. In the karaoke service, the music player plays the accompaniment of a song, and the user sings along with the accompaniment. During the performance, the music player can also record the song the user sings, so that the user's performance can be played back later. In general, a song contains some parts that require no vocals, for example an intro, an outro, or an interlude. By adjusting the playback progress of the song, the user can skip the accompaniment of the parts that require no vocals, so as to finish recording as early as possible.
In the related art, the audio recording process is generally as follows: the terminal opens a music player to play the accompaniment of a song; when the user starts to sing, the terminal starts recording synchronously and writes the recorded audio data into a file. Meanwhile, the terminal also displays a progress bar and a slider button for the song on the recording interface. When a part of the song without vocals is being played, the user can drag the slider button to adjust the playback progress of the song directly to the part that requires vocals. When the user performs the drag operation, the terminal creates a new file and writes the audio data collected after the drag into the newly created file. When the recording ends, the terminal merges the multiple files into one complete file based on the timestamps of the audio data stored in each file.
The above audio recording process first generates multiple files and then merges them into a single file. However, the terminal needs to spend extra time performing the merging, which makes the above audio recording process inefficient.
Summary of the invention
Embodiments of the present invention provide an audio recording method, apparatus, terminal, and storage medium, which can solve the problem of low audio recording efficiency in the related art. The technical solutions are as follows:
In one aspect, an audio recording method is provided, the method comprising:
when a recording instruction is received, playing multiple accompaniment audio frames of a target song, and collecting, in real time, audio data of a user singing the target song, the recording instruction being used to indicate recording the audio data of the user singing along with the multiple accompaniment audio frames;
storing the audio data collected in real time into a target file;
obtaining timestamps of the multiple accompaniment audio frames, and establishing a correspondence between timestamps and position information according to the timestamps of the multiple accompaniment audio frames and the position information of the audio data in the target file;
when an adjustment instruction is received, determining the position information of the audio data collected after the adjustment based on the correspondence between timestamps and position information, the adjustment instruction being used to indicate adjusting the playback progress of the multiple accompaniment audio frames; and
storing the adjusted audio data into the target file based on the position information of the audio data collected after the adjustment.
Optionally, playing multiple accompaniment audio frames of the target song when the recording instruction is received and collecting in real time the audio data of the user singing the target song comprises:
when the recording instruction is received, obtaining an audio file of the target song according to a song identifier of the target song, the audio file being used to store the multiple accompaniment audio frames of the target song;
playing the multiple accompaniment audio frames in the audio file; and
collecting, in real time, the audio data of the user singing the target song while each accompaniment audio frame is played.
Optionally, establishing the correspondence between timestamps and position information according to the timestamps of the multiple accompaniment audio frames and the position information of the audio data in the target file comprises:
obtaining the position information of the audio data collected while each accompaniment audio frame is played; and
for each accompaniment audio frame, establishing the correspondence between timestamp and position information according to the timestamp of the accompaniment audio frame and the position information of the audio data collected while that accompaniment audio frame is played.
Optionally, determining the position information of the audio data collected after the adjustment based on the correspondence between timestamps and position information when the adjustment instruction is received comprises:
when the adjustment instruction is received, obtaining the timestamp of the adjusted accompaniment audio frame; and
obtaining the position information of the audio data collected after the adjustment according to the correspondence between timestamps and position information and the timestamp of the adjusted accompaniment audio frame.
Optionally, obtaining the position information of the audio data collected after the adjustment according to the correspondence between timestamps and position information and the timestamp of the adjusted accompaniment audio frame comprises:
when the adjustment instruction indicates playing the multiple accompaniment audio frames in reverse, obtaining, from the correspondence between timestamps and position information, the position information of the audio data collected after the reverse play according to the timestamp of the adjusted accompaniment audio frame;
wherein reverse play refers to moving back to play an accompaniment audio frame that precedes the currently playing accompaniment audio frame in playback order.
Optionally, obtaining the position information of the audio data collected after the adjustment according to the correspondence between timestamps and position information and the timestamp of the adjusted accompaniment audio frame comprises:
when the adjustment instruction indicates playing the multiple accompaniment audio frames forward, updating the correspondence between timestamps and position information according to the timestamps of the multiple accompaniment audio frames in the skipped interval and the position information of the audio data in the skipped interval; and
determining the position information of the audio data collected after the forward play according to the updated correspondence between timestamps and position information;
wherein forward play refers to jumping to an accompaniment audio frame that follows the currently playing accompaniment audio frame in playback order and is not adjacent to it, and the skipped interval is the time interval between the timestamp of the accompaniment audio frame before the adjustment and the timestamp of the accompaniment audio frame after the adjustment.
Optionally, updating the correspondence between timestamps and position information according to the timestamps of the multiple accompaniment audio frames in the skipped interval (the interval between the pre-adjustment and post-adjustment timestamps) and the audio data in the skipped interval comprises:
obtaining the timestamps of the multiple accompaniment audio frames in the skipped interval;
storing the audio data in the skipped interval into the target file; and
adding, to the correspondence between timestamps and position information, the timestamp of each accompaniment audio frame in the skipped interval and the position information of the audio data in the skipped interval, according to those timestamps and that position information.
In one aspect, an audio recording apparatus is provided, the apparatus comprising:
a collection module, configured to play multiple accompaniment audio frames of a target song when a recording instruction is received and to collect, in real time, audio data of a user singing the target song, the recording instruction being used to indicate recording the audio data of the user singing along with the multiple accompaniment audio frames;
a first storage module, configured to store the audio data collected in real time into a target file;
an establishing module, configured to obtain the timestamps of the multiple accompaniment audio frames and to establish a correspondence between timestamps and position information according to the timestamps of the multiple accompaniment audio frames and the position information of the audio data in the target file;
a determining module, configured to determine, when an adjustment instruction is received, the position information of the audio data collected after the adjustment based on the correspondence between timestamps and position information, the adjustment instruction being used to indicate adjusting the playback progress of the multiple accompaniment audio frames; and
a second storage module, configured to store the adjusted audio data into the target file based on the position information of the audio data collected after the adjustment.
Optionally, the collection module is configured to: when the recording instruction is received, obtain an audio file of the target song according to a song identifier of the target song, the audio file being used to store the multiple accompaniment audio frames of the target song; play the multiple accompaniment audio frames in the audio file; and collect, in real time, the audio data of the user singing the target song while each accompaniment audio frame is played.
Optionally, the establishing module is configured to obtain the position information of the audio data collected while each accompaniment audio frame is played, and, for each accompaniment audio frame, establish the correspondence between timestamp and position information according to the timestamp of the accompaniment audio frame and the position information of the audio data collected while that accompaniment audio frame is played.
Optionally, the determining module comprises:
a first obtaining unit, configured to obtain the timestamp of the adjusted accompaniment audio frame when the adjustment instruction is received; and
a second obtaining unit, configured to obtain the position information of the audio data collected after the adjustment according to the correspondence between timestamps and position information and the timestamp of the adjusted accompaniment audio frame.
Optionally, the second obtaining unit is configured to: when the adjustment instruction indicates playing the multiple accompaniment audio frames in reverse, obtain, from the correspondence between timestamps and position information, the position information of the audio data collected after the reverse play according to the timestamp of the adjusted accompaniment audio frame;
wherein reverse play refers to moving back to play an accompaniment audio frame that precedes the currently playing accompaniment audio frame in playback order.
Optionally, the second obtaining unit is configured to: when the adjustment instruction indicates playing the multiple accompaniment audio frames forward, update the correspondence between timestamps and position information according to the timestamps of the multiple accompaniment audio frames in the skipped interval and the position information of the audio data in the skipped interval, and determine the position information of the audio data collected after the forward play according to the updated correspondence between timestamps and position information;
wherein forward play refers to jumping to an accompaniment audio frame that follows the currently playing accompaniment audio frame in playback order and is not adjacent to it, and the skipped interval is the time interval between the timestamp of the accompaniment audio frame before the adjustment and the timestamp of the accompaniment audio frame after the adjustment.
Optionally, the second obtaining unit is configured to: obtain the timestamps of the multiple accompaniment audio frames in the skipped interval (the interval between the pre-adjustment and post-adjustment timestamps); store the audio data in the skipped interval into the target file; and add, to the correspondence between timestamps and position information, the timestamp of each accompaniment audio frame in the skipped interval and the position information of the audio data in the skipped interval, according to those timestamps and that position information.
In one aspect, a terminal is provided. The terminal comprises a processor and a memory; the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the operations performed by the above audio recording method.
In one aspect, a computer-readable storage medium is provided. The storage medium stores at least one instruction, and the instruction is loaded and executed by a processor to implement the operations performed by the above audio recording method.
The technical solutions provided by the embodiments of the present invention have the following beneficial effects:
When a recording instruction is received, the terminal plays multiple accompaniment audio frames of a target song and collects, in real time, audio data of a user singing the target song, the recording instruction being used to indicate recording the audio data of the user singing along with the multiple accompaniment audio frames. The terminal stores the audio data collected in real time into a target file. The terminal obtains the timestamps of the multiple accompaniment audio frames and establishes a correspondence between timestamps and position information according to the timestamps of the multiple accompaniment audio frames and the position information of the audio data in the target file. When an adjustment instruction is received, the terminal determines the position information of the audio data collected after the adjustment based on the correspondence between timestamps and position information, and stores the adjusted audio data into the target file based on that position information. Because the terminal uses the correspondence between timestamps and position information during recording, the entire recording can be completed with a single target file and no additional files need to be created. The subsequent file-merging process is therefore eliminated, which greatly improves recording efficiency and optimizes the user's recording experience.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an audio recording method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of an audio recording method provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of an audio recording apparatus provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a terminal provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of an audio recording method provided by an embodiment of the present invention. The method may be executed by a terminal. As shown in Fig. 1, the method comprises:
101. When a recording instruction is received, play multiple accompaniment audio frames of a target song and collect, in real time, audio data of a user singing the target song, the recording instruction being used to indicate recording the audio data of the user singing along with the multiple accompaniment audio frames.
102. Store the audio data collected in real time into a target file.
103. Obtain the timestamps of the multiple accompaniment audio frames, and establish a correspondence between timestamps and position information according to the timestamps of the multiple accompaniment audio frames and the position information of the audio data in the target file.
104. When an adjustment instruction is received, determine the position information of the audio data collected after the adjustment based on the correspondence between timestamps and position information, the adjustment instruction being used to indicate adjusting the playback progress of the multiple accompaniment audio frames.
105. Store the adjusted audio data into the target file based on the position information of the audio data collected after the adjustment.
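The five steps above can be sketched as a minimal single-file recorder: a map from accompaniment-frame timestamps to byte positions lets a later seek overwrite audio in place instead of opening a new file. This is an illustrative model only; the class name, `FRAME_MS`, and the fixed frame size are assumptions, and a `bytearray` stands in for the target file.

```python
FRAME_MS = 26  # example frame duration; the patent mentions 26 ms frames

class SingleFileRecorder:
    """Illustrative model of steps 101-105: one target file, one map."""

    def __init__(self):
        self.target = bytearray()   # stands in for the target file (step 102)
        self.ts_to_pos = {}         # timestamp -> position correspondence (step 103)

    def record_frame(self, timestamp_ms, data):
        """Store one collected frame and map its timestamp to its position."""
        if timestamp_ms in self.ts_to_pos:
            # Re-recording after a progress adjustment (steps 104-105):
            # look up the old position and overwrite in place.
            pos = self.ts_to_pos[timestamp_ms]
            self.target[pos:pos + len(data)] = data
        else:
            self.ts_to_pos[timestamp_ms] = len(self.target)
            self.target += data

    def position_of(self, timestamp_ms):
        """Step 104: where does audio for this adjusted timestamp belong?"""
        return self.ts_to_pos.get(timestamp_ms)

rec = SingleFileRecorder()
for i in range(4):                   # frames collected at t = 0, 26, 52, 78 ms
    rec.record_frame(i * FRAME_MS, bytes([i]) * 4)
rec.record_frame(26, b"XXXX")        # user seeks back to t = 26 ms and re-sings
```

After the seek, bytes 4..8 of the single target file are overwritten; no second file is created and no merge is needed.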
Optionally, playing multiple accompaniment audio frames of the target song when the recording instruction is received and collecting in real time the audio data of the user singing the target song includes:
when the recording instruction is received, obtaining an audio file of the target song according to a song identifier of the target song, the audio file being used to store the multiple accompaniment audio frames of the target song;
playing the multiple accompaniment audio frames in the audio file; and
collecting, in real time, the audio data of the user singing the target song while each accompaniment audio frame is played.
Optionally, establishing the correspondence between timestamps and position information according to the timestamps of the multiple accompaniment audio frames and the position information of the audio data in the target file includes:
obtaining the position information of the audio data collected while each accompaniment audio frame is played; and
for each accompaniment audio frame, establishing the correspondence between timestamp and position information according to the timestamp of the accompaniment audio frame and the position information of the audio data collected while that accompaniment audio frame is played.
Optionally, determining the position information of the audio data collected after the adjustment based on the correspondence between timestamps and position information when the adjustment instruction is received includes:
when the adjustment instruction is received, obtaining the timestamp of the adjusted accompaniment audio frame; and
obtaining the position information of the audio data collected after the adjustment according to the correspondence between timestamps and position information and the timestamp of the adjusted accompaniment audio frame.
Optionally, obtaining the position information of the audio data collected after the adjustment according to the correspondence between timestamps and position information and the timestamp of the adjusted accompaniment audio frame includes:
when the adjustment instruction indicates playing the multiple accompaniment audio frames in reverse, obtaining, from the correspondence between timestamps and position information, the position information of the audio data collected after the reverse play according to the timestamp of the adjusted accompaniment audio frame;
wherein reverse play refers to moving back to play an accompaniment audio frame that precedes the currently playing accompaniment audio frame in playback order.
Optionally, obtaining the position information of the audio data collected after the adjustment according to the correspondence between timestamps and position information and the timestamp of the adjusted accompaniment audio frame includes:
when the adjustment instruction indicates playing the multiple accompaniment audio frames forward, updating the correspondence between timestamps and position information according to the timestamps of the multiple accompaniment audio frames in the skipped interval and the position information of the audio data in the skipped interval; and
determining the position information of the audio data collected after the forward play according to the updated correspondence between timestamps and position information;
wherein forward play refers to jumping to an accompaniment audio frame that follows the currently playing accompaniment audio frame in playback order and is not adjacent to it, and the skipped interval is the time interval between the timestamp of the accompaniment audio frame before the adjustment and the timestamp of the accompaniment audio frame after the adjustment.
Optionally, updating the correspondence between timestamps and position information according to the timestamps of the multiple accompaniment audio frames in the skipped interval (the interval between the pre-adjustment and post-adjustment timestamps) and the audio data in the skipped interval includes:
obtaining the timestamps of the multiple accompaniment audio frames in the skipped interval;
storing the audio data in the skipped interval into the target file; and
adding, to the correspondence between timestamps and position information, the timestamp of each accompaniment audio frame in the skipped interval and the position information of the audio data in the skipped interval, according to those timestamps and that position information.
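The three sub-steps above can be sketched as follows. This is a hedged sketch under two assumptions the patent leaves open: encoded frames have a fixed size (`FRAME_BYTES`), and silence stands in for the skipped interval's audio data; all names are illustrative.

```python
FRAME_MS = 26     # example frame duration mentioned in the patent
FRAME_BYTES = 4   # assumed fixed encoded-frame size (illustrative)

def apply_forward_jump(target, ts_to_pos, from_ts, to_ts):
    """Forward-play update: for every accompaniment-frame timestamp in the
    skipped interval (from_ts, to_ts), store that interval's audio in the
    target file (silence here) and add a <timestamp, position> entry."""
    ts = from_ts + FRAME_MS
    while ts < to_ts:                      # each skipped frame timestamp
        ts_to_pos[ts] = len(target)        # add <timestamp, position>
        target += b"\x00" * FRAME_BYTES    # reserve the interval's audio
        ts += FRAME_MS
    ts_to_pos[to_ts] = len(target)         # recording resumes at to_ts
    return ts_to_pos

target = bytearray(b"AAAA")                # audio already recorded at t = 0
ts_to_pos = {0: 0}
apply_forward_jump(target, ts_to_pos, 0, 78)  # jump from t=0 to t=78 ms
```

Because the skipped timestamps now have positions in the single target file, audio collected after the jump, and any later backward seek into the skipped part, can be written directly at the mapped positions.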
In the method provided by the embodiments of the present invention, when a recording instruction is received, the terminal plays multiple accompaniment audio frames of a target song and collects, in real time, audio data of a user singing the target song, the recording instruction being used to indicate recording the audio data of the user singing along with the multiple accompaniment audio frames. The terminal stores the audio data collected in real time into a target file. The terminal obtains the timestamps of the multiple accompaniment audio frames and establishes a correspondence between timestamps and position information according to the timestamps of the multiple accompaniment audio frames and the position information of the audio data in the target file. When an adjustment instruction is received, the terminal determines the position information of the audio data collected after the adjustment based on the correspondence between timestamps and position information, and stores the adjusted audio data into the target file based on that position information. Because the terminal uses the correspondence between timestamps and position information during recording, the entire recording can be completed with a single target file and no additional files need to be created. The subsequent file-merging process is therefore eliminated, which greatly improves recording efficiency and optimizes the user's recording experience.
Fig. 2 is a flowchart of an audio recording method provided by an embodiment of the present invention. The method is executed by a terminal. Referring to Fig. 2, the method comprises:
201. When a recording instruction is received, the terminal plays multiple accompaniment audio frames of a target song and collects, in real time, audio data of a user singing the target song.
The recording instruction is used to indicate recording the audio data of the user singing along with the multiple accompaniment audio frames. The multiple accompaniment audio frames are the background audio that accompanies the user's singing during recording; the terminal plays the multiple accompaniment audio frames of the target song so that the user can sing along with their rhythm. In this step, when the terminal receives the recording instruction, it obtains the audio file of the target song according to the song identifier of the target song and plays the multiple accompaniment audio frames in the audio file. While each accompaniment audio frame is played, the terminal collects, in real time, the audio data of the user singing the target song.
In this embodiment of the present invention, the user can play the multiple accompaniment audio frames of a song through an application pre-installed on the terminal, and sing along with them during playback.
It should be noted that the size of each accompaniment audio frame in the audio file can be configured as needed; the embodiments of the present invention do not specifically limit it. Of course, the smaller each accompaniment audio frame is, the higher the accuracy. For example, 26 ms of audio data may be used as one accompaniment audio frame.
When the user opens the application, the terminal may display a recording button on the interface of the application. When the recording button is triggered, the terminal obtains the recording instruction, obtains the song identifier of the target song based on the recording instruction, and obtains the audio file of the target song based on the song identifier. The audio file is used to store the multiple accompaniment audio frames of the target song. In addition, since the terminal has the audio file of the target song, the terminal can record, based on the frequency range of the human voice, the audio data in the current environment that falls within the frequency range of the user's singing.
The application may be a music player, or an application equipped with a music plug-in, etc.; the embodiments of the present invention do not specifically limit this.
202. The terminal stores the audio data collected in real time into a target file.
The target file is used to store the audio data collected in real time while the user sings the target song. When the recording instruction is received, the terminal may create a target file, and the terminal stores the audio data collected while each accompaniment audio frame is played into the target file. The terminal may encode the audio data synchronously as it is collected in real time and write the encoded audio data into the target file.
203. The terminal obtains the timestamps of the multiple accompaniment audio frames, and establishes a correspondence between timestamps and position information according to the timestamps of the multiple accompaniment audio frames and the position information of the audio data in the target file.
The position information is used to indicate the storage location, in the target file, of the audio data collected in real time. In this step, the terminal obtains the timestamp of each accompaniment audio frame and the position information, in the target file, of the audio data collected in real time while that frame is played. For each accompaniment audio frame, the terminal establishes the correspondence between the timestamp of the frame and the storage position of the audio data collected in real time while the frame is played, and stores the correspondence between timestamps and position information.
It should be noted that the timestamp of an accompaniment audio frame is the same as the timestamp, in the target song, of the audio data collected while that frame is played. In this embodiment of the present invention, the terminal puts the timestamp of each accompaniment audio frame into one-to-one correspondence with the position information of the audio data collected while that frame is played, thereby associating the position information of the audio data with its timestamp in the target song and ensuring the accuracy of the recording.
The terminal may store the correspondence between timestamps and position information in a map (associative) container. In the map container, the terminal may use a key-value data structure to store the timestamps and position information. In one possible implementation, the terminal obtains the storage address, in the target file, of the audio data collected while each accompaniment audio frame is played. The terminal may denote the timestamps of the multiple accompaniment audio frames as t1, t2, t3, ..., tn. Taking the audio data collected while each accompaniment audio frame is played as one frame of audio data, the storage addresses of the multiple frames of audio data in the target file may be p1, p2, p3, ..., pn, respectively. In the map container, the terminal records the timestamps and storage addresses as <t1, p1>, <t2, p2>, <t3, p3>, ..., <tn, pn>.
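As a minimal illustration of the <t, p> records above, a Python dict can stand in for the map container (such as C++'s std::map); the timestamp and address values below are made up for the example.

```python
# Key-value correspondence between accompaniment-frame timestamps and the
# storage addresses of the matching collected audio in the target file.
timestamps = [0, 26, 52, 78]        # t1..t4, in milliseconds (illustrative)
addresses  = [0, 1024, 2048, 3072]  # p1..p4, byte offsets (illustrative)

# Builds the records <t1, p1>, <t2, p2>, <t3, p3>, <t4, p4>.
correspondence = dict(zip(timestamps, addresses))
```

With such a structure, the storage address for any frame of collected audio can be retrieved directly by its timestamp, which is what makes the in-place adjustment in the following steps possible.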
It should be noted that by storing the correspondence between timestamps and position information, the terminal puts the timestamp of each frame of audio data into one-to-one correspondence with its position, which facilitates subsequently looking up the storage location of a frame of audio data by its timestamp. When the playback progress later needs to be adjusted, the terminal can obtain the corresponding storage location directly from the timestamp of the adjusted playback progress and store the data there directly. The process of creating new files is eliminated, which greatly improves recording efficiency.
In the embodiment of the present invention, the terminal may also provide a progress adjustment service, with which the user can adjust the current playback progress; for example, the user may jump directly to the point where the prelude ends, so that the prelude of the song is skipped rather than played, speeding up the recording. When the terminal adjusts the playback progress of the multiple accompaniment audio frames, it can carry out the adjusted recording process by performing the following steps 204-205.
204. When receiving an adjustment instruction, the terminal determines, based on the correspondence between timestamps and location information, the location information of the audio data acquired after the adjustment.
The adjustment instruction indicates that the playback progress of the multiple accompaniment audio frames should be adjusted. In this step, when the adjustment instruction is received, the terminal obtains the timestamp of the adjusted accompaniment audio frame. According to the correspondence between timestamps and location information and the timestamp of the adjusted accompaniment audio frame, the terminal obtains the location information of the audio data acquired after the adjustment.
It should be noted that the adjustment instruction may indicate backward playback or forward playback. Backward playback means rewinding, in playback order, to an accompaniment audio frame that precedes the currently playing one. Forward playback means jumping ahead, in playback order, to an accompaniment audio frame that follows, and is not adjacent to, the currently playing one.
Accordingly, the process by which the terminal obtains the location information in this step falls into the following two cases.
In the first case, when the adjustment instruction indicates playing the multiple accompaniment audio frames backward, the terminal looks up the timestamp of the adjusted accompaniment audio frame in the correspondence between timestamps and location information, and obtains the location information of the audio data acquired after the backward jump.
When the terminal needs to rewind the current progress to an accompaniment audio frame that has already been recorded, the terminal has previously stored, in association, the location information of the recorded audio data and the corresponding timestamp, that is, the timestamp of the accompaniment audio frame that was playing when that audio data was acquired. The terminal can therefore take the timestamp of the rewound accompaniment audio frame, look it up in the correspondence between timestamps and location information, obtain the location information corresponding to that adjusted timestamp, and use it as the location information of the audio data acquired after the backward jump.
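Under the same illustrative assumptions as before (20 ms frames, 4096-byte frames, a dictionary standing in for the map container), the backward-adjustment lookup amounts to a single read from the map, since every rewound frame was already recorded and indexed; `rewind_target` is a hypothetical helper name:

```python
# Sketch of the backward-adjustment lookup: the map already holds one
# <timestamp, address> entry per recorded frame. Values are illustrative.
FRAME_MS, FRAME_BYTES = 20, 4096
offset_by_timestamp = {i * FRAME_MS: i * FRAME_BYTES for i in range(100)}

def rewind_target(adjusted_timestamp_ms):
    """Return the storage address where re-recording resumes after rewinding."""
    # The adjusted frame was already played, so its timestamp must be present.
    return offset_by_timestamp[adjusted_timestamp_ms]

assert rewind_target(60) == 3 * FRAME_BYTES  # rewind to t = 60 ms
```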
In the second case, when the adjustment instruction indicates playing the multiple accompaniment audio frames forward, the terminal updates the correspondence between timestamps and location information according to the timestamps of the multiple accompaniment audio frames within the gap period and the location information of the audio data for the gap period; the terminal then determines, from the updated correspondence, the location information of the audio data acquired after the forward jump.
Here, the gap period is the time interval between the timestamp of the accompaniment audio frame before the adjustment and the timestamp of the accompaniment audio frame after the adjustment. The audio data for the gap period is the audio data of the vocal part that would correspond to the accompaniment audio frames in the gap period if they were played normally. In the embodiment of the present invention, the gap period may be a period of the song that requires no vocals: when the target song reaches a part without vocals, such as the prelude or an interlude, the user can jump the playback progress forward to the next part that requires vocals. Typically, the audio data within the gap period can be a frame of audio data representing silence.
The step in which the terminal updates the correspondence between timestamps and location information according to the timestamps of the multiple accompaniment audio frames in the gap period and the location information of the audio data for the gap period may be as follows: the terminal obtains the timestamps of the multiple accompaniment audio frames in the gap period and stores the audio data for the gap period into the target file; then, according to those timestamps and the location information of the gap-period audio data, the terminal adds, to the correspondence between timestamps and location information, the timestamp of each accompaniment audio frame in the gap period together with the location information of the corresponding gap-period audio data.
It should be noted that the audio data for the gap period may be audio data representing silence: the terminal can set parameters such as the pitch and loudness of the gap-period audio data to 0 to represent a recorded passage with no vocals. Further, the terminal stores this preset audio data into the target file, and stores into the map container the correspondence between the location information of the gap-period audio data and the timestamps of the accompaniment audio frames in the gap period.
Within the gap period, each accompaniment audio frame corresponds to one frame of audio data. The terminal can obtain and store this frame of gap-period audio data in advance, so that when a forward jump is later performed, the terminal directly retrieves the gap-period frame and, according to the number of accompaniment audio frames contained in the gap period, repeatedly stores the same number of gap-period frames into the target file. Of course, since every gap-period frame is identical and occupies the same amount of storage space, the terminal may instead, based on the number and size of the gap-period frames, allocate one storage address per gap-period frame in the target file, and at the same time store into the map container the location information of these frames paired with the timestamps of the accompaniment audio frames in the gap period.
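The gap-period bookkeeping for a forward jump might be sketched as follows, again with a dictionary standing in for the map container and illustrative frame sizes; `skip_forward` is a hypothetical helper, not a function named in the patent:

```python
# Sketch of the forward-adjustment ("gap period") bookkeeping: one silent
# frame is stored per skipped accompaniment frame, and the map gains one
# entry per skipped timestamp. All sizes and names are illustrative.
FRAME_MS, FRAME_BYTES = 20, 4096
SILENT_FRAME = bytes(FRAME_BYTES)           # pitch/loudness zeroed -> silence

offset_by_timestamp = {0: 0, 20: 4096}      # frames recorded so far
target_file = bytearray(2 * FRAME_BYTES)    # stand-in for the target file

def skip_forward(from_ms, to_ms):
    """Fill the gap period [from_ms, to_ms) with silent frames and index them."""
    for tn in range(from_ms, to_ms, FRAME_MS):
        pn = len(target_file)               # next free storage address
        target_file.extend(SILENT_FRAME)    # store one silent frame
        offset_by_timestamp[tn] = pn        # add <tn, pn> to the map

skip_forward(40, 120)                       # jump from t = 40 ms to t = 120 ms
assert offset_by_timestamp[100] == 5 * FRAME_BYTES
assert len(target_file) == 6 * FRAME_BYTES
```

After the fill, the next frame of real vocals simply appends after the last silent frame, and its timestamp gets the next map entry, so the single target file stays gap-free.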
205. Based on the location information of the audio data acquired after the adjustment, the terminal stores the adjusted audio data into the target file.
In this step, after the progress adjustment, the terminal writes the audio data acquired in real time to the storage location in the target file indicated by its location information, so that the audio data is stored into the target file in real time.
When the terminal plays the multiple accompaniment audio frames forward, it writes the audio data acquired after the forward jump immediately after the last gap-period frame, based on the location information of the multiple preset gap-period frames. When the terminal plays the multiple accompaniment audio frames backward, it obtains the location information corresponding to the timestamp of the accompaniment audio frame after the backward jump and, based on that location information and the storage locations of the frames already stored, deletes from the target file the audio data at that location together with all audio data stored after it. The terminal then stores the audio data acquired in real time after the backward jump into the target file at the deleted location, according to the location information of that audio data.
The terminal can indicate the storage location of the audio data acquired after an adjustment with a pointer. When the playback progress needs to be adjusted, the terminal can call a move function to move the pointer to the location information corresponding to the timestamp of the adjusted accompaniment audio frame, and then store data at the storage location the moved pointer indicates. In addition, for a backward jump, the terminal can call a cutting function to delete from the target file, following the storage order of the audio data, the data stored at the location information corresponding to the rewound timestamp and all audio data stored after that location.
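The pointer move and cut described above correspond to ordinary file seek and truncate operations; the following sketch models them with Python file I/O under the same illustrative frame size (the patent does not name concrete APIs, so the file name and sizes here are assumptions):

```python
# Sketch of the "move" (seek) and "cut" (truncate) operations on the
# target file during a backward progress adjustment.
import os
import tempfile

FRAME_BYTES = 4096
path = os.path.join(tempfile.mkdtemp(), "target.pcm")
with open(path, "wb") as f:
    f.write(bytes(10 * FRAME_BYTES))        # ten frames already recorded

# Rewind to the 4th frame: discard everything at and after its address,
# then continue writing re-recorded audio from there.
rewind_offset = 3 * FRAME_BYTES             # address looked up by timestamp
with open(path, "r+b") as f:
    f.truncate(rewind_offset)               # the "cutting" function
    f.seek(rewind_offset)                   # the "move" function: reposition
    f.write(b"\x01" * FRAME_BYTES)          # one re-recorded frame goes here

assert os.path.getsize(path) == 4 * FRAME_BYTES
```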
With the method provided in the embodiment of the present invention, when a recording instruction is received, the terminal plays the multiple accompaniment audio frames of the target song and acquires in real time the audio data of the user singing the target song, the recording instruction indicating that the user's audio data is to be recorded while the multiple accompaniment audio frames are sung along to; the terminal stores the audio data acquired in real time into a target file; the terminal obtains the timestamps of the multiple accompaniment audio frames and, according to those timestamps and the location information of the audio data in the target file, establishes the correspondence between timestamps and location information; when an adjustment instruction is received, the terminal determines, based on that correspondence, the location information of the audio data acquired after the adjustment; based on that location information, the terminal stores the adjusted audio data into the target file. Because, during recording, the correspondence between timestamps and location information lets the terminal complete the recording with only one target file and without creating multiple files, the subsequent file-merging process is eliminated, which greatly improves recording efficiency and optimizes the user's recording experience.
Fig. 3 is a structural schematic diagram of an audio recording apparatus provided in an embodiment of the present invention. Referring to Fig. 3, the apparatus includes: an acquisition module 301, a first storage module 302, an establishing module 303, a determining module 304 and a second storage module 305.
The acquisition module 301 is configured to, when a recording instruction is received, play the multiple accompaniment audio frames of the target song and acquire in real time the audio data of the user singing the target song, the recording instruction indicating that the user's audio data is to be recorded while the multiple accompaniment audio frames are sung along to;
the first storage module 302 is configured to store the audio data acquired in real time into a target file;
the establishing module 303 is configured to obtain the timestamps of the multiple accompaniment audio frames and, according to those timestamps and the location information of the audio data in the target file, establish the correspondence between timestamps and location information;
the determining module 304 is configured to, when an adjustment instruction is received, determine, based on the correspondence between timestamps and location information, the location information of the audio data acquired after the adjustment, the adjustment instruction indicating that the playback progress of the multiple accompaniment audio frames should be adjusted;
the second storage module 305 is configured to store the adjusted audio data into the target file based on the location information of the audio data acquired after the adjustment.
Optionally, the acquisition module 301 is configured to: when the recording instruction is received, obtain, according to the song identifier of the target song, the audio file of the target song, the audio file storing the multiple accompaniment audio frames of the target song; play the multiple accompaniment audio frames in the audio file; and, while each accompaniment audio frame is played, acquire in real time the audio data of the user singing the target song.
Optionally, the establishing module 303 is configured to obtain the timestamp of each accompaniment audio frame as it is played and, for each accompaniment audio frame, establish the correspondence between timestamp and location information according to the timestamp of that frame and the location information of the audio data acquired while that frame is played.
Optionally, the determining module 304 includes:
a first obtaining unit, configured to obtain the timestamp of the adjusted accompaniment audio frame when the adjustment instruction is received;
a second obtaining unit, configured to obtain the location information of the audio data acquired after the adjustment, according to the correspondence between timestamps and location information and the timestamp of the adjusted accompaniment audio frame.
Optionally, the second obtaining unit is configured to, when the adjustment instruction indicates playing the multiple accompaniment audio frames backward, obtain, according to the timestamp of the adjusted accompaniment audio frame and from the correspondence between timestamps and location information, the location information of the audio data acquired after the backward jump;
here, backward playback means rewinding, in playback order, to an accompaniment audio frame that precedes the currently playing one.
Optionally, the second obtaining unit is configured to, when the adjustment instruction indicates playing the multiple accompaniment audio frames forward, update the correspondence between timestamps and location information according to the timestamps of the multiple accompaniment audio frames in the gap period and the location information of the audio data for the gap period, and determine, from the updated correspondence, the location information of the audio data acquired after the forward jump;
here, forward playback means jumping ahead, in playback order, to an accompaniment audio frame that follows, and is not adjacent to, the currently playing one, and the gap period is the time interval between the timestamp of the accompaniment audio frame before the adjustment and the timestamp of the accompaniment audio frame after the adjustment.
Optionally, the second obtaining unit is configured to: obtain the timestamps of the multiple accompaniment audio frames in the gap period; store the audio data for the gap period into the target file; and, according to those timestamps and the location information of the gap-period audio data, add to the correspondence between timestamps and location information the timestamp of each accompaniment audio frame in the gap period together with the location information of the gap-period audio data.
With the apparatus provided in the embodiment of the present invention, when a recording instruction is received, the terminal plays the multiple accompaniment audio frames of the target song and acquires in real time the audio data of the user singing the target song, the recording instruction indicating that the user's audio data is to be recorded while the multiple accompaniment audio frames are sung along to; the terminal stores the audio data acquired in real time into a target file; the terminal obtains the timestamps of the multiple accompaniment audio frames and, according to those timestamps and the location information of the audio data in the target file, establishes the correspondence between timestamps and location information; when an adjustment instruction is received, the terminal determines, based on that correspondence, the location information of the audio data acquired after the adjustment; based on that location information, the terminal stores the adjusted audio data into the target file. Because, during recording, the correspondence between timestamps and location information lets the terminal complete the recording with only one target file and without creating multiple files, the subsequent file-merging process is eliminated, which greatly improves recording efficiency and optimizes the user's recording experience.
All of the above optional technical solutions may be combined in any manner to form optional embodiments of the present disclosure, which are not described here one by one.
It should be understood that when the audio recording apparatus provided in the above embodiment records audio, the division into the above functional modules is only used as an example; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the audio recording apparatus provided in the above embodiment and the audio recording method embodiment belong to the same concept; for its specific implementation process, see the method embodiment, which is not repeated here.
Fig. 4 is a structural schematic diagram of a terminal provided in an embodiment of the present invention. The terminal 400 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop or a desktop computer. The terminal 400 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or other names.
In general, the terminal 400 includes a processor 401 and a memory 402.
The processor 401 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 401 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) and PLA (Programmable Logic Array). The processor 401 may also include a main processor and a coprocessor: the main processor is a processor for handling data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for handling data in the standby state. In some embodiments, the processor 401 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 401 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 402 may include one or more computer-readable storage media, which may be non-transitory. The memory 402 may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 402 is used to store at least one instruction, which is executed by the processor 401 to implement the audio recording method provided by the method embodiments of the present application.
In some embodiments, the terminal 400 may optionally further include a peripheral device interface 403 and at least one peripheral device. The processor 401, the memory 402 and the peripheral device interface 403 may be connected by a bus or signal line. Each peripheral device may be connected to the peripheral device interface 403 by a bus, signal line or circuit board. Specifically, the peripheral devices include at least one of a radio frequency circuit 404, a touch display screen 405, a camera 406, an audio circuit 407, a positioning component 408 and a power supply 409.
The peripheral device interface 403 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 401 and the memory 402. In some embodiments, the processor 401, the memory 402 and the peripheral device interface 403 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 401, the memory 402 and the peripheral device interface 403 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 404 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 404 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 404 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 404 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 404 can communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to, metropolitan area networks, mobile communication networks of each generation (2G, 3G, 4G and 5G), wireless local area networks and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 404 may also include circuitry related to NFC (Near Field Communication), which is not limited in this application.
The display screen 405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video and any combination thereof. When the display screen 405 is a touch display screen, it also has the ability to acquire touch signals on or above its surface. Such a touch signal may be input to the processor 401 as a control signal for processing. At this point, the display screen 405 may also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 405, arranged on the front panel of the terminal 400; in other embodiments, there may be at least two display screens 405, arranged on different surfaces of the terminal 400 or in a folding design; in still other embodiments, the display screen 405 may be a flexible display screen arranged on a curved or folding surface of the terminal 400. The display screen 405 may even be set in a non-rectangular, irregular shape, that is, a shaped screen. The display screen 405 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 406 is used to capture images or video. Optionally, the camera assembly 406 includes a front camera and a rear camera. In general, the front camera is arranged on the front panel of the terminal and the rear camera on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background-blur function, the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting, or other fused shooting functions can be realized. In some embodiments, the camera assembly 406 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash; a dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuit 407 may include a microphone and a speaker. The microphone is used to acquire sound waves from the user and the environment and convert them into electrical signals, which are input to the processor 401 for processing or to the radio frequency circuit 404 to realize voice communication. For stereo acquisition or noise reduction, there may be multiple microphones, arranged at different parts of the terminal 400. The microphone may also be an array microphone or an omnidirectional microphone. The speaker is used to convert electrical signals from the processor 401 or the radio frequency circuit 404 into sound waves. The speaker may be a traditional film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 407 may also include a headphone jack.
The positioning component 408 is used to locate the current geographic position of the terminal 400 to realize navigation or LBS (Location Based Service). The positioning component 408 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia or the Galileo system of the European Union.
The power supply 409 is used to supply power to the various components in the terminal 400. The power supply 409 may be an alternating current, a direct current, a disposable battery or a rechargeable battery. When the power supply 409 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging, and may also support fast-charging technology.
In some embodiments, the terminal 400 further includes one or more sensors 410. The one or more sensors 410 include, but are not limited to, an acceleration sensor 411, a gyro sensor 412, a pressure sensor 413, a fingerprint sensor 414, an optical sensor 415 and a proximity sensor 416.
The acceleration sensor 411 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 400. For example, the acceleration sensor 411 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 401 may, according to the gravitational acceleration signal acquired by the acceleration sensor 411, control the touch display screen 405 to display the user interface in landscape or portrait view. The acceleration sensor 411 may also be used to acquire motion data for games or for the user.
The gyro sensor 412 can detect the body orientation and rotation angle of the terminal 400, and may cooperate with the acceleration sensor 411 to acquire the user's 3D actions on the terminal 400. From the data acquired by the gyro sensor 412, the processor 401 can implement functions such as motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 413 may be arranged on the side frame of the terminal 400 and/or under the touch display screen 405. When the pressure sensor 413 is arranged on the side frame of the terminal 400, it can detect the user's grip signal on the terminal 400, and the processor 401 performs left-hand/right-hand recognition or shortcut operations according to the grip signal acquired by the pressure sensor 413. When the pressure sensor 413 is arranged under the touch display screen 405, the processor 401 controls the operable controls on the UI according to the user's pressure operation on the touch display screen 405. The operable controls include at least one of a button control, a scroll-bar control, an icon control and a menu control.
The fingerprint sensor 414 is used to acquire the user's fingerprint; the processor 401 identifies the user's identity from the fingerprint acquired by the fingerprint sensor 414, or the fingerprint sensor 414 identifies the user's identity from the acquired fingerprint. When the user's identity is identified as trusted, the processor 401 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 414 may be arranged on the front, back or side of the terminal 400. When a physical button or a manufacturer logo is provided on the terminal 400, the fingerprint sensor 414 may be integrated with the physical button or the manufacturer logo.
The optical sensor 415 is used to acquire the ambient light intensity. In one embodiment, the processor 401 may control the display brightness of the touch display screen 405 according to the ambient light intensity acquired by the optical sensor 415: when the ambient light intensity is high, the display brightness of the touch display screen 405 is turned up; when the ambient light intensity is low, the display brightness of the touch display screen 405 is turned down. In another embodiment, the processor 401 may also dynamically adjust the shooting parameters of the camera assembly 406 according to the ambient light intensity acquired by the optical sensor 415.
The proximity sensor 416, also called a distance sensor, is generally arranged on the front panel of the terminal 400. The proximity sensor 416 is used to acquire the distance between the user and the front of the terminal 400. In one embodiment, when the proximity sensor 416 detects that the distance between the user and the front of the terminal 400 is gradually decreasing, the processor 401 controls the touch display screen 405 to switch from the screen-on state to the screen-off state; when the proximity sensor 416 detects that the distance between the user and the front of the terminal 400 is gradually increasing, the processor 401 controls the touch display screen 405 to switch from the screen-off state to the screen-on state.
Those skilled in the art will understand that the structure shown in Fig. 4 does not limit the terminal 400, which may include more or fewer components than illustrated, combine certain components, or adopt a different component arrangement.
In an exemplary embodiment, a computer-readable storage medium is also provided, for example a memory including instructions, where the above instructions can be executed by the processor in the terminal to complete the audio recording method in the above embodiments. For example, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware, where the program may be stored in a computer-readable storage medium; the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the invention; any modification, equivalent replacement, improvement, or the like made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (16)
1. An audio recording method, characterized in that the method comprises:
when a record command is received, playing a plurality of accompaniment audio frames of a target song and acquiring, in real time, audio data of a user singing the target song, wherein the record command instructs recording of the audio data of the user singing along with the plurality of accompaniment audio frames;
storing the audio data acquired in real time into a target file;
obtaining timestamps of the plurality of accompaniment audio frames, and establishing a correspondence between timestamps and position information according to the timestamps of the plurality of accompaniment audio frames and the positions of the audio data in the target file;
when an adjustment instruction is received, determining, based on the correspondence between timestamps and position information, the position information of the audio data acquired after the adjustment, wherein the adjustment instruction instructs adjusting the playback progress of the plurality of accompaniment audio frames;
storing the adjusted audio data into the target file based on the position information of the audio data acquired after the adjustment.
2. The method according to claim 1, wherein playing the plurality of accompaniment audio frames of the target song when the record command is received and acquiring, in real time, the audio data of the user singing the target song comprises:
when the record command is received, obtaining an audio file of the target song according to a song identifier of the target song, wherein the audio file stores the plurality of accompaniment audio frames of the target song;
playing the plurality of accompaniment audio frames in the audio file;
while each accompaniment audio frame is played, acquiring in real time the audio data of the user singing the target song.
3. The method according to claim 1, wherein establishing the correspondence between timestamps and position information according to the timestamps of the plurality of accompaniment audio frames and the positions of the audio data in the target file comprises:
obtaining the position information of the audio data acquired while each accompaniment audio frame is played;
for each accompaniment audio frame, establishing the correspondence between timestamps and position information according to the timestamp of the accompaniment audio frame and the position information of the audio data acquired while that frame is played.
4. The method according to claim 1, wherein determining, when the adjustment instruction is received and based on the correspondence between timestamps and position information, the position information of the audio data acquired after the adjustment comprises:
when the adjustment instruction is received, obtaining the timestamp of the adjusted accompaniment audio frame;
obtaining the position information of the audio data acquired after the adjustment according to the correspondence between timestamps and position information and the timestamp of the adjusted accompaniment audio frame.
5. The method according to claim 4, wherein obtaining the position information of the audio data acquired after the adjustment according to the correspondence between timestamps and position information and the timestamp of the adjusted accompaniment audio frame comprises:
when the adjustment instruction instructs playing the plurality of accompaniment audio frames backward, obtaining, according to the timestamp of the adjusted accompaniment audio frame, the position information of the audio data acquired after the backward jump from the correspondence between timestamps and position information;
wherein playing backward refers to jumping back to, and playing, an accompaniment audio frame whose playing order precedes the currently playing accompaniment audio frame.
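The effect of the backward jump in claim 5 is that new recording overwrites the target file from the position mapped to the rewound timestamp, so the re-sung passage replaces the earlier take without any file merge. A minimal illustrative sketch, not from the patent:

```python
def rewind_and_rerecord(recorded, ts_to_offset, rewind_ts, new_take):
    """Overwrite the recorded audio from the file position that the
    correspondence maps to the rewound timestamp.

    Illustrative: `recorded` is the target file's bytes, `ts_to_offset`
    the timestamp -> position correspondence, and `new_take` the audio
    captured after the backward jump.
    """
    offset = ts_to_offset[rewind_ts]          # position looked up from the map
    data = bytearray(recorded)
    data[offset:offset + len(new_take)] = new_take  # in-place overwrite
    return bytes(data)
```

For example, rewinding to the frame whose audio starts at offset 2 and re-singing two bytes replaces exactly that stretch of the file.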
6. The method according to claim 4, wherein obtaining the position information of the audio data acquired after the adjustment according to the correspondence between timestamps and position information and the timestamp of the adjusted accompaniment audio frame comprises:
when the adjustment instruction instructs playing the plurality of accompaniment audio frames forward, updating the correspondence between timestamps and position information according to the timestamps of the plurality of accompaniment audio frames in a gap period and the position information of the audio data in the gap period;
determining the position information of the audio data acquired after the forward jump according to the updated correspondence between timestamps and position information;
wherein playing forward refers to jumping ahead to, and playing, an accompaniment audio frame whose playing order follows, and is not adjacent to, the currently playing accompaniment audio frame, and the gap period is the time interval between the timestamp of the accompaniment audio frame before the adjustment and the timestamp of the accompaniment audio frame after the adjustment.
7. The method according to claim 6, wherein updating the correspondence between timestamps and position information according to the timestamps of the plurality of accompaniment audio frames in the gap period and the audio data in the gap period comprises:
obtaining the timestamps of the plurality of accompaniment audio frames in the gap period;
storing the audio data in the gap period into the target file;
adding, to the correspondence between timestamps and position information, the timestamp of each accompaniment audio frame in the gap period and the position information of the audio data in the gap period, according to the timestamps of the plurality of accompaniment audio frames in the gap period and the position information of the audio data in the gap period.
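Claims 6 and 7 keep the correspondence consistent across a forward jump: every accompaniment frame inside the skipped gap period still receives a map entry pointing at the region of the target file that holds the gap's audio. The sketch below is an illustrative reading of that update, not the patent's implementation; the fixed frame size and the use of silence as the gap audio are assumptions.

```python
def extend_mapping_for_gap(ts_to_offset, file_length, gap_timestamps, frame_size):
    """Add a map entry for each accompaniment frame in the gap period.

    Illustrative: each skipped frame's timestamp is mapped to the offset
    its (placeholder, silent) audio will occupy at the end of the target
    file; the caller appends the returned bytes to the file.
    """
    gap_audio = bytes(frame_size * len(gap_timestamps))  # silence placeholder
    offset = file_length
    for ts in gap_timestamps:
        ts_to_offset[ts] = offset   # position the gap frame's audio will occupy
        offset += frame_size
    return gap_audio
```

After the update, the position of audio acquired after the forward jump can be looked up from the extended correspondence exactly as before the jump.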
8. An audio recording apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to play a plurality of accompaniment audio frames of a target song when a record command is received and acquire, in real time, audio data of a user singing the target song, wherein the record command instructs recording of the audio data of the user singing along with the plurality of accompaniment audio frames;
a first storage module, configured to store the audio data acquired in real time into a target file;
an establishing module, configured to obtain timestamps of the plurality of accompaniment audio frames and establish a correspondence between timestamps and position information according to the timestamps of the plurality of accompaniment audio frames and the positions of the audio data in the target file;
a determining module, configured to determine, when an adjustment instruction is received, the position information of the audio data acquired after the adjustment based on the correspondence between timestamps and position information, wherein the adjustment instruction instructs adjusting the playback progress of the plurality of accompaniment audio frames;
a second storage module, configured to store the adjusted audio data into the target file based on the position information of the audio data acquired after the adjustment.
9. The apparatus according to claim 8, wherein
the acquisition module is configured to: when the record command is received, obtain an audio file of the target song according to a song identifier of the target song, wherein the audio file stores the plurality of accompaniment audio frames of the target song; play the plurality of accompaniment audio frames in the audio file; and, while each accompaniment audio frame is played, acquire in real time the audio data of the user singing the target song.
10. The apparatus according to claim 8, wherein
the establishing module is configured to obtain the position information of the audio data acquired while each accompaniment audio frame is played, and, for each accompaniment audio frame, establish the correspondence between timestamps and position information according to the timestamp of the accompaniment audio frame and the position information of the audio data acquired while that frame is played.
11. The apparatus according to claim 8, wherein the determining module comprises:
a first obtaining unit, configured to obtain the timestamp of the adjusted accompaniment audio frame when the adjustment instruction is received;
a second obtaining unit, configured to obtain the position information of the audio data acquired after the adjustment according to the correspondence between timestamps and position information and the timestamp of the adjusted accompaniment audio frame.
12. The apparatus according to claim 11, wherein
the second obtaining unit is configured to: when the adjustment instruction instructs playing the plurality of accompaniment audio frames backward, obtain, according to the timestamp of the adjusted accompaniment audio frame, the position information of the audio data acquired after the backward jump from the correspondence between timestamps and position information;
wherein playing backward refers to jumping back to, and playing, an accompaniment audio frame whose playing order precedes the currently playing accompaniment audio frame.
13. The apparatus according to claim 11, wherein
the second obtaining unit is configured to: when the adjustment instruction instructs playing the plurality of accompaniment audio frames forward, update the correspondence between timestamps and position information according to the timestamps of the plurality of accompaniment audio frames in a gap period and the position information of the audio data in the gap period; and determine the position information of the audio data acquired after the forward jump according to the updated correspondence between timestamps and position information;
wherein playing forward refers to jumping ahead to, and playing, an accompaniment audio frame whose playing order follows, and is not adjacent to, the currently playing accompaniment audio frame, and the gap period is the time interval between the timestamp of the accompaniment audio frame before the adjustment and the timestamp of the accompaniment audio frame after the adjustment.
14. The apparatus according to claim 13, wherein
the second obtaining unit is configured to: obtain the timestamps of the plurality of accompaniment audio frames in the gap period; store the audio data in the gap period into the target file; and add, to the correspondence between timestamps and position information, the timestamp of each accompaniment audio frame in the gap period and the position information of the audio data in the gap period, according to the timestamps of the plurality of accompaniment audio frames in the gap period and the position information of the audio data in the gap period.
15. A terminal, characterized in that the terminal comprises a processor and a memory, wherein at least one instruction is stored in the memory, and the instruction is loaded and executed by the processor to implement the operations performed by the audio recording method according to any one of claims 1 to 7.
16. A computer-readable storage medium, characterized in that at least one instruction is stored in the storage medium, and the instruction is loaded and executed by a processor to implement the operations performed by the audio recording method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810804459.5A CN109033335B (en) | 2018-07-20 | 2018-07-20 | Audio recording method, device, terminal and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810804459.5A CN109033335B (en) | 2018-07-20 | 2018-07-20 | Audio recording method, device, terminal and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109033335A true CN109033335A (en) | 2018-12-18 |
CN109033335B CN109033335B (en) | 2021-03-26 |
Family
ID=64644748
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810804459.5A Active CN109033335B (en) | 2018-07-20 | 2018-07-20 | Audio recording method, device, terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109033335B (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109597721A (en) * | 2018-12-14 | 2019-04-09 | 广州势必可赢网络科技有限公司 | A kind of audio data collecting method, apparatus, equipment and storage medium |
CN110675886A (en) * | 2019-10-09 | 2020-01-10 | 腾讯科技(深圳)有限公司 | Audio signal processing method, audio signal processing device, electronic equipment and storage medium |
CN110856009A (en) * | 2019-11-27 | 2020-02-28 | 广州华多网络科技有限公司 | Network karaoke system, audio and video playing method of network karaoke and related equipment |
CN111586529A (en) * | 2020-05-08 | 2020-08-25 | 北京三体云联科技有限公司 | Audio data processing method, device, terminal and computer readable storage medium |
CN112133269A (en) * | 2020-09-22 | 2020-12-25 | 腾讯音乐娱乐科技(深圳)有限公司 | Audio processing method, device, equipment and medium |
CN112687247A (en) * | 2021-01-25 | 2021-04-20 | 北京达佳互联信息技术有限公司 | Audio alignment method and device, electronic equipment and storage medium |
CN112788366A (en) * | 2020-12-28 | 2021-05-11 | 杭州海康威视系统技术有限公司 | Video processing method and device |
CN112927666A (en) * | 2021-01-26 | 2021-06-08 | 北京达佳互联信息技术有限公司 | Audio processing method and device, electronic equipment and storage medium |
CN113050918A (en) * | 2021-04-22 | 2021-06-29 | 深圳壹账通智能科技有限公司 | Audio optimization method, device, equipment and storage medium based on remote double recording |
CN113301381A (en) * | 2021-05-17 | 2021-08-24 | 上海振华重工(集团)股份有限公司 | Multi-source data playback system |
CN114446268A (en) * | 2022-01-28 | 2022-05-06 | 北京百度网讯科技有限公司 | Audio data processing method, device, electronic equipment, medium and program product |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104778957A (en) * | 2015-03-20 | 2015-07-15 | 广东欧珀移动通信有限公司 | Song audio processing method and device |
CN105023559A (en) * | 2015-05-27 | 2015-11-04 | 腾讯科技(深圳)有限公司 | Karaoke processing method and system |
CN106686431A (en) * | 2016-12-08 | 2017-05-17 | 杭州网易云音乐科技有限公司 | Synthesizing method and equipment of audio file |
US20180174559A1 (en) * | 2016-12-15 | 2018-06-21 | Michael John Elson | Network musical instrument |
- 2018-07-20: CN CN201810804459.5A patent/CN109033335B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104778957A (en) * | 2015-03-20 | 2015-07-15 | 广东欧珀移动通信有限公司 | Song audio processing method and device |
CN105023559A (en) * | 2015-05-27 | 2015-11-04 | 腾讯科技(深圳)有限公司 | Karaoke processing method and system |
CN106686431A (en) * | 2016-12-08 | 2017-05-17 | 杭州网易云音乐科技有限公司 | Synthesizing method and equipment of audio file |
US20180174559A1 (en) * | 2016-12-15 | 2018-06-21 | Michael John Elson | Network musical instrument |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109597721A (en) * | 2018-12-14 | 2019-04-09 | 广州势必可赢网络科技有限公司 | A kind of audio data collecting method, apparatus, equipment and storage medium |
CN110675886A (en) * | 2019-10-09 | 2020-01-10 | 腾讯科技(深圳)有限公司 | Audio signal processing method, audio signal processing device, electronic equipment and storage medium |
CN110675886B (en) * | 2019-10-09 | 2023-09-15 | 腾讯科技(深圳)有限公司 | Audio signal processing method, device, electronic equipment and storage medium |
CN110856009A (en) * | 2019-11-27 | 2020-02-28 | 广州华多网络科技有限公司 | Network karaoke system, audio and video playing method of network karaoke and related equipment |
CN110856009B (en) * | 2019-11-27 | 2021-02-26 | 广州华多网络科技有限公司 | Network karaoke system, audio and video playing method of network karaoke and related equipment |
CN111586529A (en) * | 2020-05-08 | 2020-08-25 | 北京三体云联科技有限公司 | Audio data processing method, device, terminal and computer readable storage medium |
CN112133269A (en) * | 2020-09-22 | 2020-12-25 | 腾讯音乐娱乐科技(深圳)有限公司 | Audio processing method, device, equipment and medium |
CN112133269B (en) * | 2020-09-22 | 2024-03-15 | 腾讯音乐娱乐科技(深圳)有限公司 | Audio processing method, device, equipment and medium |
CN112788366A (en) * | 2020-12-28 | 2021-05-11 | 杭州海康威视系统技术有限公司 | Video processing method and device |
CN112687247B (en) * | 2021-01-25 | 2023-08-08 | 北京达佳互联信息技术有限公司 | Audio alignment method and device, electronic equipment and storage medium |
CN112687247A (en) * | 2021-01-25 | 2021-04-20 | 北京达佳互联信息技术有限公司 | Audio alignment method and device, electronic equipment and storage medium |
CN112927666A (en) * | 2021-01-26 | 2021-06-08 | 北京达佳互联信息技术有限公司 | Audio processing method and device, electronic equipment and storage medium |
CN112927666B (en) * | 2021-01-26 | 2023-11-28 | 北京达佳互联信息技术有限公司 | Audio processing method, device, electronic equipment and storage medium |
CN113050918A (en) * | 2021-04-22 | 2021-06-29 | 深圳壹账通智能科技有限公司 | Audio optimization method, device, equipment and storage medium based on remote double recording |
CN113301381A (en) * | 2021-05-17 | 2021-08-24 | 上海振华重工(集团)股份有限公司 | Multi-source data playback system |
WO2023142413A1 (en) * | 2022-01-28 | 2023-08-03 | 北京百度网讯科技有限公司 | Audio data processing method and apparatus, electronic device, medium, and program product |
CN114446268A (en) * | 2022-01-28 | 2022-05-06 | 北京百度网讯科技有限公司 | Audio data processing method, device, electronic equipment, medium and program product |
Also Published As
Publication number | Publication date |
---|---|
CN109033335B (en) | 2021-03-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109033335A (en) | Audio recording method, apparatus, terminal and storage medium | |
CN109302538A (en) | Method for playing music, device, terminal and storage medium | |
CN109729297A (en) | The method and apparatus of special efficacy are added in video | |
CN107978323A (en) | Audio identification methods, device and storage medium | |
CN109448761B (en) | Method and device for playing songs | |
EP3618055B1 (en) | Audio mixing method and terminal, and storage medium | |
CN108922506A (en) | Song audio generation method, device and computer readable storage medium | |
CN110491358A (en) | Carry out method, apparatus, equipment, system and the storage medium of audio recording | |
CN109756784A (en) | Method for playing music, device, terminal and storage medium | |
US20230252964A1 (en) | Method and apparatus for determining volume adjustment ratio information, device, and storage medium | |
CN109300482A (en) | Audio recording method, apparatus, storage medium and terminal | |
CN109147757A (en) | Song synthetic method and device | |
WO2019127899A1 (en) | Method and device for addition of song lyrics | |
CN109346111A (en) | Data processing method, device, terminal and storage medium | |
CN109587549A (en) | Video recording method, device, terminal and storage medium | |
CN108922562A (en) | Sing evaluation result display methods and device | |
CN110266982A (en) | The method and system of song is provided in recorded video | |
CN110248236A (en) | Video broadcasting method, device, terminal and storage medium | |
CN109192218A (en) | The method and apparatus of audio processing | |
CN108831513A (en) | Method, terminal, server and the system of recording audio data | |
CN109218751A (en) | The method, apparatus and system of recommendation of audio | |
CN109743461B (en) | Audio data processing method, device, terminal and storage medium | |
CN108509620A (en) | Song recognition method and device, storage medium | |
CN108319712A (en) | The method and apparatus for obtaining lyrics data | |
CN109873905A (en) | Audio frequency playing method, audio synthetic method, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||