CN109346111B - Data processing method, device, terminal and storage medium - Google Patents


Info

Publication number
CN109346111B
CN109346111B (application CN201811185453.0A)
Authority
CN
China
Prior art keywords
sound effect
target
frame
audio
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811185453.0A
Other languages
Chinese (zh)
Other versions
CN109346111A (en)
Inventor
罗超
谢欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd filed Critical Guangzhou Kugou Computer Technology Co Ltd
Priority to CN201811185453.0A priority Critical patent/CN109346111B/en
Publication of CN109346111A publication Critical patent/CN109346111A/en
Application granted granted Critical
Publication of CN109346111B publication Critical patent/CN109346111B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention discloses a data processing method, apparatus, terminal, and storage medium, belonging to the field of data processing technology. In the embodiments of the invention, multiple target audio segments can be determined based on the time axis and the plurality of sound effect controls on the sound effect setting page, so that the audio data as a whole is processed segment by segment. Each target audio segment can be matched to its own target sound effect type, so that at least one target sound effect can be added to the audio data, meeting the demand for diverse target sound effects.

Description

Data processing method, device, terminal and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular to a data processing method, apparatus, terminal, and storage medium.
Background
With the continuous development of data processing technology, the demand for special sound effects keeps growing; audio data can be processed to obtain the diverse sound effects this demand calls for. For example, audio data may be processed to generate a sound effect that mimics the Minions or a Warcraft-style monster speaking.
At present, the commonly used data processing method is as follows: the acquired audio data is framed, windowed, and transformed from the time domain as a whole to generate the corresponding frequency-domain data; the entire body of frequency-domain data is then processed algorithmically to change information such as its amplitude, thereby changing the audio waveform corresponding to the audio data and adding a target sound effect to it.
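The conventional pipeline just described (framing, windowing, time-to-frequency transform) can be sketched as follows; the frame and hop sizes are illustrative assumptions, not values from this document:

```python
import numpy as np

def frames_to_spectra(audio, frame_len=1024, hop=256):
    """Frame, window, and FFT an audio signal (frame/hop sizes are
    illustrative assumptions, not values from this document)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(audio) - frame_len) // hop
    spectra = np.empty((n_frames, frame_len // 2 + 1), dtype=complex)
    for i in range(n_frames):
        frame = audio[i * hop : i * hop + frame_len]
        spectra[i] = np.fft.rfft(frame * window)
    return spectra

# A whole-signal effect would then alter the magnitude of *every*
# frame's spectrum before the inverse transform -- which is why this
# approach yields only one uniform sound effect for the entire audio.
```

Because every frame is processed identically, the result is a single, uniform effect over the whole signal, which is exactly the limitation the next paragraph points out.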
With this method, the entire body of acquired audio data must be processed in order to add a target sound effect, so only a single type of target sound effect can be added, which cannot meet users' diverse requirements for target sound effects.
Disclosure of Invention
The embodiments of the present invention provide a data processing method, apparatus, terminal, and storage medium, which can solve the problem that only a single type of target sound effect can be added to the audio data as a whole. The technical solution is as follows:
In one aspect, a data processing method is provided, the method including:
displaying a sound effect setting page, wherein the sound effect setting page has a time axis of a video to be edited and a plurality of sound effect controls;
determining a start frame based on a selection operation at any time point on the time axis;
when a long-press operation on a target control among the plurality of sound effect controls is detected, performing the sound effect processing corresponding to a target sound effect type starting from the start frame until the end of the long-press operation is detected, and determining a target audio segment based on the end frame corresponding to the end time of the long-press operation, wherein the target sound effect type corresponds to the target control;
generating target audio data based on the target audio segment.
In one possible implementation, before the sound effect setting page is displayed, the method further includes:
displaying a video editing page, wherein the video editing page has a sound effect setting control;
and when a click operation on the sound effect setting control is detected, jumping to the sound effect setting page.
In one possible implementation, when a long-press operation on a target control among the plurality of sound effect controls is detected, performing the sound effect processing corresponding to a target sound effect type starting from the start frame until the end of the long-press operation is detected, and determining a target audio segment based on the end frame corresponding to the end time of the long-press operation, includes:
when the long-press operation on the target control among the plurality of sound effect controls is detected, acquiring the target sound effect type corresponding to the target control;
performing sound effect processing on the audio frames after the start frame based on the target sound effect type;
when the end of the long-press operation on the target control is detected, determining the end frame corresponding to the end time of the long-press operation;
and determining the target audio segment based on the start frame and the end frame.
In one possible implementation, performing sound effect processing on the audio frames after the start frame based on the target sound effect type includes:
performing pitch-shifting algorithm processing on the audio frames after the start frame based on the target sound effect type to obtain a plurality of target audio frames, wherein the pitch information of the target audio frames is the same as the pitch information of the target sound effect type.
In one possible implementation, determining the target audio segment based on the start frame and the end frame includes:
merging a plurality of target audio frames between the start frame and the end frame into the target audio segment.
In one possible implementation, during the sound effect processing, the method further includes:
playing, in real time on the sound effect setting page, the plurality of target audio frames that have undergone sound effect processing, and synchronously playing the plurality of video frames corresponding to the plurality of target audio frames.
In one aspect, a data processing apparatus is provided, the apparatus comprising:
a display module, configured to display a sound effect setting page, the sound effect setting page having a time axis of a video to be edited and a plurality of sound effect controls;
a determining module, configured to determine a start frame based on a selection operation at any time point on the time axis;
a processing module, configured to: when a long-press operation on a target control among the plurality of sound effect controls is detected, perform the sound effect processing corresponding to a target sound effect type starting from the start frame until the end of the long-press operation is detected, and determine a target audio segment based on the end frame corresponding to the end time of the long-press operation, wherein the target sound effect type corresponds to the target control;
and a generating module, configured to generate target audio data based on the target audio segment.
In one possible implementation, the apparatus further includes:
the display module being further configured to display a video editing page, the video editing page having a sound effect setting control;
and a jump module, configured to jump to the sound effect setting page when a click operation on the sound effect setting control is detected.
In one possible implementation, the processing module includes:
an acquiring unit, configured to acquire the target sound effect type corresponding to a target control when a long-press operation on the target control among the plurality of sound effect controls is detected;
a processing unit, configured to perform sound effect processing on the audio frames after the start frame based on the target sound effect type;
a determining unit, configured to determine the end frame corresponding to the end time of the long-press operation when the end of the long-press operation on the target control is detected;
the determining unit being further configured to determine the target audio segment based on the start frame and the end frame.
In one possible implementation, the processing unit is further configured to:
perform pitch-shifting algorithm processing on the audio frames after the start frame based on the target sound effect type to obtain a plurality of target audio frames, wherein the pitch information of the target audio frames is the same as the pitch information of the target sound effect type.
In one possible implementation, the determining unit is further configured to:
merge a plurality of target audio frames between the start frame and the end frame into the target audio segment.
In one possible implementation, the apparatus further includes:
and a playing module, configured to play, in real time on the sound effect setting page, the plurality of target audio frames that have undergone sound effect processing, and to synchronously play the plurality of video frames corresponding to the plurality of target audio frames.
In one aspect, a terminal is provided, the terminal including a processor and a memory, the memory storing at least one instruction that is loaded and executed by the processor to implement the operations performed by the above data processing method.
In one aspect, a computer-readable storage medium is provided, the storage medium storing at least one instruction that is loaded and executed by a processor to implement the operations performed by the above data processing method.
In the embodiments of the present invention, multiple target audio segments can be determined based on the time axis and the plurality of sound effect controls on the sound effect setting page, so that the audio data as a whole is processed segment by segment. Each target audio segment can be matched to its own target sound effect type, so that at least one target sound effect can be added to the audio data as a whole, meeting the demand for diverse target sound effects.
Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a data processing method according to an embodiment of the present invention;
Fig. 2 is a flowchart of a data processing method according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a video editing page according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a sound effect setting page according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present invention;
Fig. 6 is a block diagram of a terminal according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1 is a flowchart of a data processing method according to an embodiment of the present invention. Referring to Fig. 1, this embodiment includes:
101. Display a sound effect setting page, the sound effect setting page having a time axis of the video to be edited and a plurality of sound effect controls.
102. Determine a start frame based on a selection operation at any time point on the time axis.
103. When a long-press operation on a target control among the plurality of sound effect controls is detected, perform the sound effect processing corresponding to the target sound effect type starting from the start frame until the end of the long-press operation is detected, and determine a target audio segment based on the end frame corresponding to the end time of the long-press operation, where the target sound effect type corresponds to the target control.
104. Generate target audio data based on the target audio segment.
In some embodiments, before the sound effect setting page is displayed, the method further includes:
displaying a video editing page, wherein the video editing page has a sound effect setting control;
and when a click operation on the sound effect setting control is detected, jumping to the sound effect setting page.
In some embodiments, when a long-press operation on a target control among the plurality of sound effect controls is detected, performing the sound effect processing corresponding to a target sound effect type starting from the start frame until the end of the long-press operation is detected, and determining a target audio segment based on the end frame corresponding to the end time of the long-press operation, includes:
when the long-press operation on the target control among the plurality of sound effect controls is detected, acquiring the target sound effect type corresponding to the target control;
performing sound effect processing on the audio frames after the start frame based on the target sound effect type;
when the end of the long-press operation on the target control is detected, determining the end frame corresponding to the end time of the long-press operation;
determining the target audio segment based on the start frame and the end frame.
In some embodiments, performing sound effect processing on the audio frames after the start frame based on the target sound effect type includes:
performing pitch-shifting algorithm processing on the audio frames after the start frame based on the target sound effect type to obtain a plurality of target audio frames, wherein the pitch information of the target audio frames is the same as the pitch information of the target sound effect type.
In some embodiments, determining the target audio segment based on the start frame and the end frame includes:
merging a plurality of target audio frames between the start frame and the end frame into the target audio segment.
In some embodiments, during the sound effect processing, the method further includes:
playing, in real time on the sound effect setting page, the plurality of target audio frames that have undergone sound effect processing, and synchronously playing the plurality of video frames corresponding to the plurality of target audio frames.
All of the above optional technical solutions may be combined in any manner to form optional embodiments of the present invention, which are not described again here.
Fig. 2 is a flowchart of a data processing method according to an embodiment of the present invention; the method can be applied to any terminal. Referring to Fig. 2, this embodiment includes:
201. The terminal displays a video editing page, the video editing page having a sound effect setting control.
In the embodiments of the present invention, the video editing page is used by the terminal to play the current video to be edited and to process the video data, audio data, and so on contained in it. The video to be edited may be a video recorded and stored by the terminal, or, of course, a video sent to the terminal by a server. The terminal can enter the video editing page through a corresponding application and import the video to be edited into that application, so that the video to be edited can be displayed on the video editing page.
The video editing page may have a sound effect setting control, which is used to jump from the video editing page to a sound effect setting page so that the audio data corresponding to the video to be edited can be processed. In addition, the video editing page may have other setting controls, for example a video special-effect setting control, a music clip control, and the like.
Specifically, as shown in Fig. 3, the main body of the video editing page is used to play the video to be edited and the processed video. A video special-effect setting control, a sound effect setting control, a music editing control, a user homepage control, and the like may be arranged in sequence above the video editing page, and a lyric switch control may be arranged below the user homepage control. In addition, a storage control, a publishing control, a topic selection control, a performance-permission control, a reminder control, and the like may be arranged below the video editing page; these controls are laid out so that the terminal can jump to the corresponding function page to further set up and process the video to be edited. Of course, in other embodiments the controls of the video editing page may be laid out in other ways; the embodiments of the present invention do not limit the layout of the controls of the video editing page.
202. When the terminal detects a click operation on the sound effect setting control, it jumps to the sound effect setting page.
In the embodiments of the present invention, from the video editing page the user can click any control on it; for example, the user can click the sound effect setting control. When the terminal detects the click operation on the sound effect setting control, a page jump instruction is triggered; this instruction can carry the identifier of the sound effect setting page, and since the sound effect setting control is associated with the sound effect setting page, the terminal can jump to the sound effect setting page based on the page jump instruction. The sound effect setting page is used to process the audio data corresponding to the video to be edited, so that the terminal can add sound effects to the video; in addition, the sound effect setting page can also be used to play the video data and audio data to which sound effects have been added.
203. The terminal displays a sound effect setting page, the sound effect setting page having a time axis of the video to be edited and a plurality of sound effect controls.
In the embodiments of the present invention, based on step 202, the sound effect setting page displayed by the terminal may be used to add at least one sound effect to the audio data corresponding to the video to be edited. To this end, the sound effect setting page may have a plurality of operable controls, including a time axis and a plurality of sound effect controls. The time axis is used by the user to select the start frame of the audio segment to be processed; each sound effect control corresponds to one sound effect type, and the plurality of sound effect controls are used by the user to select the target sound effect type to be added to the audio data corresponding to the video to be edited, the target sound effect type being the sound effect type the user wants to add.
Specifically, the sound effect setting page may be as shown in Fig. 4: the main body in the middle of the page is a preview box, which can be used by the terminal to play the video to be edited with the target sound effect type added, and the time axis and the plurality of sound effect controls may be arranged below the preview box. The time axis represents the audio duration of the video to be edited; the coordinates on the time axis are time scales, which represent audio playback time points of the video to be edited. Of course, the time axis may be a line segment with time-scale marks, or it may have no time-scale marks, as long as it can represent the audio duration of the video to be edited and its audio playback time points; the embodiments of the present invention do not limit the specific form of the time axis.
Furthermore, on the sound effect setting page each sound effect control is associated with one sound effect type. As shown in Fig. 4, the sound effect types may be a Warcraft-monster sound effect, a Minions sound effect, a gramophone sound effect, and so on; of course, each sound effect control may also be associated with other sound effect types.
It should be noted that, in addition to the time axis and the plurality of sound effect controls, the sound effect setting page may include other controls. As shown in Fig. 4, it may also include a storage control, a volume setting control, and the like, where the storage control is used to store the processed audio data and video data, and the volume setting control is used to adjust the volume corresponding to the video to be edited. Moreover, besides the layout shown in Fig. 4, the time axis and the various controls may be laid out in other ways, which the embodiments of the present invention do not limit.
204. The terminal determines a start frame based on a selection operation at any time point on the time axis.
In the embodiments of the present invention, the time axis of step 203 may also carry a movable pointer, which the user uses to perform a selection operation on any time scale on the time axis and which indicates to the terminal the start time for processing the audio data corresponding to the video to be edited. Specifically, the user can drag the movable pointer freely to select any time scale on the time axis, that is, drag it to any position on the time axis as needed.
Further, when the user drags the movable pointer to any position on the time axis, the terminal can detect the selection operation on the time scale corresponding to that position, and can then take the audio frame at the corresponding position in the audio data of the video to be edited as the start frame for processing that audio data; the start frame is an audio frame that has not undergone sound effect processing.
In other embodiments, besides dragging the movable pointer to a position on the time axis, the user may perform the selection operation in other ways to determine the start frame. For example, the user may click any position on the time axis; the embodiments of the present invention do not limit the specific implementation of the selection operation.
It should be noted that the user may perform the selection operation on the time axis multiple times. That is, after the terminal has detected one selection operation and completed sound effect processing of one audio segment based on the corresponding start frame, the user may select a time scale on the time axis again so that other audio segments of the audio data corresponding to the video to be edited can be processed. The sound effect type added to those other segments may differ from that of the first segment, or of course be the same. In addition, the audio segments do not overlap one another.
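The mapping from a time point selected on the time axis to a start frame index can be sketched as follows; the sample rate and hop size are illustrative assumptions, not values from this document:

```python
def time_to_start_frame(selected_seconds, sample_rate=44100, hop=256):
    """Map a time point picked on the time axis to the index of the
    audio frame used as the start frame. Frames are assumed to begin
    every `hop` samples; both parameters are illustrative."""
    sample_offset = int(selected_seconds * sample_rate)
    return max(0, sample_offset // hop)

# e.g. selecting t = 1.5 s at 44.1 kHz with a 256-sample hop
# lands on frame index 1.5 * 44100 // 256 = 258
```

The same mapping applied at the release time of the long press would give the end frame, so start and end frames share one coordinate system.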
205. When the terminal detects a long-press operation on a target control among the plurality of sound effect controls, it acquires the target sound effect type corresponding to the target control.
In the embodiments of the present invention, each sound effect control is associated with a corresponding sound effect type; that is, the target control is associated with the target sound effect type. The target control is the sound effect control the user selects according to their requirement for a target sound effect type, and the target sound effect type is the sound effect type the user wants to add to the audio segment after the start frame. Since each sound effect control is associated with one sound effect type, the user can select a target sound effect type that meets their requirement by long-pressing the target control. Specifically, when the terminal detects that the user long-presses the target control, a target-sound-effect-type retrieval instruction can be triggered, and the terminal can then acquire the target sound effect type associated with the target control based on that instruction.
It should be noted that, based on the long-press operation on the target control, the terminal performs the processing corresponding to the target sound effect type on the audio data of the video to be edited; that is, the long-press operation tells the terminal when to start and when to stop processing the audio data. Specifically, when the terminal detects the long-press operation on the target control, it starts processing the audio frames after the start frame; as long as the terminal has not detected the end of the long-press operation, it continues to process audio frames, and when it detects the end of the long-press operation, it stops processing.
Further, the long-press operation may be implemented by the user holding the target control continuously, and it ends when the user releases the target control; that is, as long as the user keeps holding the target control, the terminal keeps processing the audio frames after the start frame. Of course, in other embodiments the target control may be operated in other ways to trigger the start-processing and end-processing instructions. For example, the user may click the target control: when the user clicks it, the terminal starts processing the audio frames after the start frame, and when the user clicks it again, the terminal stops processing the corresponding audio frames.
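The press/release gating described above can be sketched as a small state machine; all names here are illustrative assumptions, not from this document:

```python
class LongPressGate:
    """Processing runs from press to release: a press records the
    target sound effect type, each incoming frame is processed only
    while the press is held, and the release closes the segment."""

    def __init__(self):
        self.effect = None       # target sound effect type while pressed
        self.start_frame = None  # start frame chosen on the time axis

    def press(self, effect_type, start_frame):
        self.effect = effect_type
        self.start_frame = start_frame

    def process(self, frame):
        # Stand-in for the pitch-shifting step; only gated frames change.
        return "%s(%s)" % (self.effect, frame) if self.effect else frame

    def release(self, end_frame):
        segment = (self.start_frame, end_frame, self.effect)
        self.effect = None
        return segment
```

The click-to-start / click-again-to-stop variant mentioned above would reuse the same gate, with the second click calling `release`.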
206. The terminal performs pitch-shifting algorithm processing on the audio frames after the start frame based on the target sound effect type to obtain a plurality of target audio frames.
In the embodiments of the present invention, pitch-shifting algorithm processing means that the terminal processes the pitch information of an audio frame so that its tone changes. The terminal thereby adds the corresponding target sound effect type to a plurality of audio frames to obtain a plurality of corresponding target audio frames, whose pitch information is the same as the pitch information corresponding to the target sound effect type.
Specifically, in the time domain, for example, a pitch-shifting-without-speed-change algorithm may change the pitch period of each audio frame, which changes the frame's spectrum and, in turn, its duration, so that the pitch information of the audio frame becomes the same as that of the target sound effect type; this realizes the pitch-shifting of the audio frame.
Further, the pitch-shifting above also changes the corresponding playback speed, so the terminal needs to resample the audio frames so that their total length is unchanged, realizing constant-speed processing; in this way, pitch-shifting without speed change is finally applied to the audio frames after the start frame, yielding the plurality of target audio frames. Of course, in other embodiments the terminal may use other pitch-shifting algorithms to process the audio frames after the start frame; for example, it may perform pitch-shifting in the frequency domain, as long as the pitch information of the resulting target audio frames is the same as that of the target sound effect type, which the embodiments of the present invention do not limit.
Step 206 is a process in which the terminal performs sound effect processing on the audio frames after the start frame based on the target sound effect type. In other embodiments, the terminal may perform other kinds of sound effect processing on the audio frames, as long as the target sound effect type can be added to them; the embodiments of the present invention do not limit this.
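A minimal time-domain sketch of "shift the pitch, then resample so the total length is unchanged": a crude overlap-add stretch stands in for the production algorithm (real systems use SOLA/WSOLA or a phase vocoder), and all parameter values are illustrative assumptions:

```python
import numpy as np

def time_stretch_ola(x, factor, win=256):
    """Crude overlap-add time stretch: factor > 1 lengthens the signal
    while (ideally) keeping pitch. Illustrative only -- no phase or
    waveform alignment is performed."""
    hop_in = win // 2
    hop_out = int(hop_in * factor)
    window = np.hanning(win)
    n = (len(x) - win) // hop_in
    out = np.zeros(n * hop_out + win)
    norm = np.zeros_like(out)
    for i in range(n):
        seg = x[i * hop_in : i * hop_in + win] * window
        out[i * hop_out : i * hop_out + win] += seg
        norm[i * hop_out : i * hop_out + win] += window
    norm[norm == 0] = 1.0
    return out / norm

def pitch_shift_same_length(x, semitones):
    """Stretch by the pitch ratio, then resample back to the original
    length: duration is preserved while pitch moves, mirroring the
    'resample so total length is unchanged' step described above."""
    r = 2.0 ** (semitones / 12.0)
    stretched = time_stretch_ola(x, r)
    idx = np.linspace(0, len(stretched) - 1, len(x))
    return np.interp(idx, np.arange(len(stretched)), stretched)
```

The frequency-domain variant mentioned in the text would replace `time_stretch_ola` with a phase-vocoder stretch; the resampling step stays the same.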
207. And the terminal plays a plurality of target audio frames subjected to sound effect processing in real time on the sound effect setting page and synchronously plays a plurality of video frames corresponding to the plurality of target audio frames.
In the embodiment of the present invention, based on the preview box of the main body portion in fig. 4, the preview box is used for playing a plurality of video frames corresponding to the plurality of target audio frames. The preview frame plays the plurality of video frames synchronously with the playing of the plurality of target audio frames by the terminal, and the terminal plays the plurality of target audio frames in real time with the processing of the audio frame after the initial frame, that is, the terminal plays the target audio frame every time the terminal obtains one target audio frame while performing the pitch change algorithm processing on each audio frame after the initial frame.
It should be noted that the frame length of each target audio frame is the same as that of the audio frame before processing, so each target audio frame corresponds to the same video frame as the unprocessed audio frame did; consequently, while the terminal plays the plurality of target audio frames, the preview box in the sound effect setting page can simultaneously play the corresponding plurality of video frames.
Furthermore, the time the terminal spends on the pitch-shifting algorithm for each audio frame is very short; in general, the terminal processes one audio frame into its corresponding target audio frame within 23 ms. Given such a short processing time, the delay between generating a target audio frame and playing it on the sound effect setting page is negligible, so the generation and playback processes can proceed synchronously.
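The 23 ms figure is consistent with a common audio frame size: at a 44.1 kHz sample rate, a 1024-sample frame spans about 23.2 ms, so finishing each frame within 23 ms keeps processing ahead of playback. Both the sample rate and the frame size here are illustrative assumptions, not values stated in the patent.

```python
# Duration of one audio frame under assumed parameters: a 1024-sample
# frame at 44.1 kHz lasts ~23.2 ms, so a per-frame processing time of
# at most 23 ms keeps the pipeline real-time.
SAMPLE_RATE = 44100   # assumed sample rate (Hz)
FRAME_SIZE = 1024     # assumed samples per frame
frame_ms = FRAME_SIZE / SAMPLE_RATE * 1000
```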
208. And when the terminal detects that the long-press operation on the target control is finished, determining an end frame corresponding to the finishing time of the long-press operation.
In this embodiment of the present invention, following step 205, when the terminal detects that the user starts to perform the long-press operation on the target control, the terminal begins sound effect processing on the audio frames after the start frame corresponding to the long-press operation. When the terminal detects, through the relevant detection control, that the long-press operation on the target control has ended, it triggers a finish-processing command and, based on that command, stops processing the audio frames.
The terminal can take the audio frame associated with that moment as the end frame corresponding to the current target sound effect type; the end frame may be an audio frame that has not been processed by the sound effect algorithm.
In addition, the user may cancel the terminal's processing of an audio clip. For example, as shown in fig. 4, the sound effect setting page may have a cancel control. After the terminal has processed a section of audio, the user can choose whether to keep the current processing; to redo it, the user clicks the cancel control. When the terminal detects the click operation on the cancel control, a cancel command is triggered, and based on that command the terminal cancels the processing of the current audio clip, that is, restores the clip to its state before the target sound effect type was added.
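The cancel behaviour above can be sketched as a simple undo: keep a snapshot of each clip before processing and restore it when the cancel control is tapped. The class and method names are hypothetical, chosen only for illustration.

```python
class EffectSession:
    """Minimal undo sketch for the cancel control described above."""

    def __init__(self):
        self._history = []  # snapshots of clips before processing

    def apply(self, clip, process):
        # Snapshot the unprocessed clip, then return the processed one.
        self._history.append(list(clip))
        return process(clip)

    def cancel(self):
        # Restore the most recently processed clip to its pre-processing
        # state; return None if there is nothing to undo.
        return self._history.pop() if self._history else None
```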
209. The terminal combines a plurality of target audio frames between the start frame and the end frame into a target audio segment.
In the embodiment of the present invention, the start frame and the end frame are audio frames that have not been processed by the sound effect algorithm, so the terminal can obtain the target audio segment from the plurality of target audio frames between them, these target audio frames being the ones processed by the sound effect algorithm. Specifically, the terminal may connect every two adjacent target audio frames head to tail: starting from the first target audio frame after the start frame, each time a target audio frame is obtained in time order, its head is connected to the tail of the previous target audio frame, until the last target audio frame before the end frame. In this way the terminal merges the plurality of target audio frames between the start frame and the end frame into the target audio segment.
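The head-to-tail merging of step 209 amounts to concatenating the processed frames in time order. A minimal sketch, under the assumption that each frame is a flat list of samples:

```python
def merge_frames(target_frames):
    """Join adjacent target audio frames head to tail, in time order,
    into one target audio segment."""
    segment = []
    for frame in target_frames:
        segment.extend(frame)
    return segment
```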
The step 209 is a process of determining, by the terminal, the target audio segment based on the start frame and the end frame, and certainly, in other embodiments, the terminal may also determine the target audio segment by other manners, which is not limited herein.
The foregoing steps 205 to 209 describe the process in which, when the terminal detects a long-press operation on a target control among the multiple sound effect controls, it performs the sound effect processing corresponding to the target sound effect type starting from the start frame until the long-press operation is detected to end, and then determines a target audio segment based on the end frame corresponding to the end time of the long-press operation. This is how the terminal determines the target audio segment corresponding to one target sound effect type. Of course, after determining one target audio segment, the terminal may determine further target audio segments based on other target sound effect types in the same way; that is, the terminal may perform steps 204 to 209 multiple times based on multiple target sound effect types, processing the audio data of the video to be edited into multiple target audio segments, each of which may correspond to one target sound effect type, which is not limited in the embodiments of the present invention.
210. The terminal generates target audio data based on the target audio segment.
In the embodiment of the present invention, any two of the plurality of target audio segments may or may not be adjacent. Specifically, when two target audio segments are adjacent, they can be connected head to tail, so that the plurality of target audio segments are combined into the target audio data. When two target audio segments are not adjacent, the head of each target audio segment is connected to the tail of the adjacent audio clip that has not been processed by the sound effect algorithm, and likewise the tail of each target audio segment is connected to the head of the adjacent unprocessed audio clip, so that the target audio segments and the unprocessed audio clips are combined into the target audio data.
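Assembling the final audio data can likewise be sketched as sorting all clips, processed target segments and untouched clips alike, by start time and concatenating them. The `(start_time, samples)` tuple representation is hypothetical, and the sketch assumes the clips tile the timeline without overlap:

```python
def assemble_audio(clips):
    """clips: list of (start_time, samples) pairs covering the whole
    timeline.  Concatenating them in start-time order joins each target
    audio segment with its unprocessed neighbours into the target audio
    data."""
    ordered = sorted(clips, key=lambda c: c[0])
    audio = []
    for _, samples in ordered:
        audio.extend(samples)
    return audio
```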
After the target audio data is generated by the above process, it may be saved in a folder on the terminal for subsequent use; of course, the terminal may also upload the target audio data to the server and retrieve it from the server later.
Specifically, as shown in fig. 4, the target audio data may be saved through a save control on the sound effect setting page. When the terminal detects a click operation on the save control, it may jump to a save page, from which the user can choose to store the target audio data in a local folder or upload it to the server. Of course, the target audio data may also be saved through other controls, which is not limited in the embodiment of the present invention.
According to the embodiment of the invention, based on the time axis and the plurality of sound effect controls on the sound effect setting page, a plurality of target audio segments can be determined, realizing segment-by-segment processing of the whole audio data; further, each target audio segment can correspond to its own target sound effect type, so that at least one target sound effect can be added to the whole audio data, meeting diversified requirements for target sound effects.
All the above-mentioned optional technical solutions can be combined arbitrarily to form the optional embodiments of the present invention, and are not described herein again.
Fig. 5 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present invention. Referring to fig. 5, the apparatus includes: a display module 501, a determination module 502, a processing module 503, and a generation module 504.
The display module 501 is configured to display a sound effect setting page, where the sound effect setting page has a time axis of a video to be edited and a plurality of sound effect controls;
a determining module 502, configured to determine a starting frame based on a selected operation at any time on the time axis;
the processing module 503 is configured to, when a long-press operation on a target control in the multiple sound effect controls is detected, perform sound effect processing corresponding to a target sound effect type by using the start frame as a starting point until the long-press operation is detected to be ended, determine a target audio segment based on an end frame corresponding to an end time of the long-press operation, where the target sound effect type corresponds to the target control;
a generating module 504 is configured to generate target audio data based on the target audio segment.
In some embodiments, the apparatus further comprises:
the display module 501 is further configured to display a video editing page, where the video editing page has a sound effect setting control;
and the skipping module is used for skipping to the sound effect setting page when the clicking operation on the sound effect setting control is detected.
In some embodiments, the processing module 503 includes:
the acquisition unit is used for acquiring a target sound effect type corresponding to a target control when long-time pressing operation on the target control in the sound effect controls is detected;
the processing unit is used for carrying out sound effect processing on the audio frame after the starting frame based on the target sound effect type;
the determining unit is used for determining an end frame corresponding to the long-press operation end time when the long-press operation end of the target control is detected;
the determining unit is further configured to determine a target audio segment based on the start frame and the end frame.
In some embodiments, the processing unit is further to:
and performing inflexion algorithm processing on the audio frames after the starting frame based on the target sound effect type to obtain a plurality of target audio frames, wherein the pitch information of the target audio frames is the same as the pitch information of the target sound effect type.
In some embodiments, the determining unit is further configured to:
merging a plurality of target audio frames between the start frame and the end frame into a target audio segment.
In some embodiments, the apparatus further comprises:
and the playing module is used for playing the plurality of target audio frames subjected to sound effect processing in real time on the sound effect setting page and synchronously playing a plurality of video frames corresponding to the plurality of target audio frames.
According to the embodiment of the invention, based on the time axis and the plurality of sound effect controls on the sound effect setting page, a plurality of target audio segments can be determined, realizing segment-by-segment processing of the whole audio data; further, each target audio segment can correspond to its own target sound effect type, so that at least one target sound effect can be added to the whole audio data, meeting diversified requirements for target sound effects.
It should be noted that: in the data processing apparatus provided in the above embodiment, only the division of the above functional modules is used for illustration in data processing, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the above described functions. In addition, the data processing apparatus and the data processing method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments for details, which are not described herein again.
Fig. 6 is a block diagram of a terminal 600 according to an embodiment of the present invention. The terminal 600 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 600 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 600 includes: a processor 601 and a memory 602.
The processor 601 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 601 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 601 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, processor 601 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
The memory 602 may include one or more computer-readable storage media, which may be non-transitory. The memory 602 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 602 is used to store at least one instruction for execution by the processor 601 to implement the data processing method provided by the method embodiments of the present invention.
In some embodiments, the terminal 600 may further optionally include: a peripheral interface 603 and at least one peripheral. The processor 601, memory 602, and peripheral interface 603 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 603 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 604, a touch screen display 605, a camera 606, an audio circuit 607, a positioning component 608, and a power supply 609.
The peripheral interface 603 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 601 and the memory 602. In some embodiments, the processor 601, memory 602, and peripheral interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 604 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 604 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 604 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 604 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 604 may further include NFC (Near Field Communication) related circuits, which are not limited in the present invention.
The display 605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 605 is a touch display screen, the display screen 605 also has the ability to capture touch signals on or over the surface of the display screen 605. The touch signal may be input to the processor 601 as a control signal for processing. At this point, the display 605 may also be used to provide virtual controls and/or virtual keyboards, also referred to as soft controls and/or soft keyboards. In some embodiments, the display 605 may be one, providing the front panel of the terminal 600; in other embodiments, the display 605 may be at least two, respectively disposed on different surfaces of the terminal 600 or in a folded design; in still other embodiments, the display 605 may be a flexible display disposed on a curved surface or on a folded surface of the terminal 600. Even more, the display 605 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 605 may be made of LCD (liquid crystal Display), OLED (Organic Light-Emitting Diode), and the like.
The camera assembly 606 is used to capture images or video. Optionally, camera assembly 606 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 606 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
Audio circuitry 607 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 601 for processing or inputting the electric signals to the radio frequency circuit 604 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 600. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 601 or the radio frequency circuit 604 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 607 may also include a headphone jack.
The positioning component 608 is used to locate the current geographic location of the terminal 600 to implement navigation or LBS (Location Based Service). The positioning component 608 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 609 is used to provide power to the various components in terminal 600. The power supply 609 may be ac, dc, disposable or rechargeable. When the power supply 609 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 600 also includes one or more sensors 610. The one or more sensors 610 include, but are not limited to: acceleration sensor 611, gyro sensor 612, pressure sensor 613, fingerprint sensor 614, optical sensor 615, and proximity sensor 616.
The acceleration sensor 611 may detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 600. For example, the acceleration sensor 611 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 601 may control the touch screen display 605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 611. The acceleration sensor 611 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 612 may detect a body direction and a rotation angle of the terminal 600, and the gyro sensor 612 and the acceleration sensor 611 may cooperate to acquire a 3D motion of the user on the terminal 600. The processor 601 may implement the following functions according to the data collected by the gyro sensor 612: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 613 may be disposed on a side frame of the terminal 600 and/or on a lower layer of the touch display screen 605. When the pressure sensor 613 is disposed on the side frame of the terminal 600, a user's holding signal of the terminal 600 can be detected, and the processor 601 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 613. When the pressure sensor 613 is disposed at the lower layer of the touch display screen 605, the processor 601 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 605. The operability control comprises at least one of a control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 614 is used for collecting a fingerprint of a user, and the processor 601 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 identifies the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 601 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 614 may be disposed on the front, back, or side of the terminal 600. When a physical button or vendor Logo is provided on the terminal 600, the fingerprint sensor 614 may be integrated with the physical button or vendor Logo.
The optical sensor 615 is used to collect the ambient light intensity. In one embodiment, processor 601 may control the display brightness of touch display 605 based on the ambient light intensity collected by optical sensor 615. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 605 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 605 is turned down. In another embodiment, the processor 601 may also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
The proximity sensor 616, also known as a distance sensor, is typically disposed on the front panel of the terminal 600 and is used to collect the distance between the user and the front surface of the terminal 600. In one embodiment, when the proximity sensor 616 detects that this distance is gradually decreasing, the processor 601 controls the touch display 605 to switch from the screen-on state to the screen-off state; when the proximity sensor 616 detects that the distance is gradually increasing, the processor 601 controls the touch display 605 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 6 is not intended to be limiting of terminal 600 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Fig. 7 is a schematic structural diagram of a server 700 according to an embodiment of the present invention. The server 700 may vary considerably in configuration or performance and may include one or more processors (CPUs) 701 and one or more memories 702, where the memory 702 stores at least one instruction that is loaded and executed by the processor 701 to implement the methods provided by the foregoing method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and may include other components for implementing the functions of the device, which are not described herein.
In an exemplary embodiment, a computer-readable storage medium, such as a memory, including instructions executable by a processor in a terminal to perform the data processing method in the above-described embodiments is also provided. For example, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (12)

1. A method of data processing, the method comprising:
displaying a sound effect setting page, wherein the sound effect setting page is provided with a time axis of a video to be edited and a plurality of sound effect controls;
determining a starting frame based on the selected operation of any time on the time axis;
when a long-press operation on a target control among the plurality of sound effect controls is detected, performing sound effect processing corresponding to a target sound effect type with the start frame as a starting point until the long-press operation is detected to end, and determining a target audio segment based on an end frame corresponding to the end time of the long-press operation, wherein the target sound effect type corresponds to the target control;
generating target audio data based on the target audio segment;
in the process of sound effect processing, the method also comprises the following steps:
and playing a plurality of target audio frames subjected to sound effect processing in real time on the sound effect setting page, and synchronously playing a plurality of video frames corresponding to the plurality of target audio frames.
2. The method of claim 1, wherein prior to displaying the sound effect settings page, the method further comprises:
displaying a video editing page, wherein the video editing page is provided with a sound effect setting control;
and when the clicking operation of the sound effect setting control is detected, jumping to the sound effect setting page.
3. The method according to claim 1, wherein when the long-press operation on the target control in the plurality of sound effect controls is detected, performing sound effect processing corresponding to the target sound effect type with the start frame as a starting point until the long-press operation is detected to be finished, and determining the target audio segment based on the end frame corresponding to the finishing time of the long-press operation comprises:
when the long-press operation on the target control in the plurality of sound effect controls is detected, acquiring a target sound effect type corresponding to the target control;
performing sound effect processing on the audio frame after the initial frame based on the target sound effect type;
when the long-press operation on the target control is detected to be finished, determining an end frame corresponding to the end moment of the long-press operation;
determining the target audio segment based on the start frame and the end frame.
4. The method of claim 3, wherein the performing audio-effect processing on the audio frame after the start frame based on the target audio-effect type comprises:
and carrying out inflexion algorithm processing on the audio frames after the starting frame based on the target sound effect type to obtain a plurality of target audio frames, wherein the pitch information of the target audio frames is the same as that of the target sound effect type.
5. The method of claim 4, wherein determining the target audio segment based on the start frame and the end frame comprises:
merging a plurality of target audio frames between the start frame and the end frame into the target audio segment.
6. A data processing apparatus, characterized in that the apparatus comprises:
the display module is used for displaying a sound effect setting page, and the sound effect setting page is provided with a time axis of a video to be edited and a plurality of sound effect controls;
the determining module is used for determining a starting frame based on the selected operation of any time on the time axis;
the processing module is configured to, when a long-press operation on a target control among the plurality of sound effect controls is detected, perform sound effect processing corresponding to a target sound effect type with the start frame as a starting point until the long-press operation is detected to end, and determine a target audio segment based on an end frame corresponding to the end time of the long-press operation, wherein the target sound effect type corresponds to the target control;
a generating module for generating target audio data based on the target audio segment;
and the playing module is used for playing a plurality of target audio frames subjected to sound effect processing in real time on the sound effect setting page and synchronously playing a plurality of video frames corresponding to the plurality of target audio frames.
7. The apparatus of claim 6, further comprising:
the display module is further configured to display a video editing page, wherein the video editing page is provided with a sound effect setting control;
and a jump module, configured to jump to the sound effect setting page when a click operation on the sound effect setting control is detected.
8. The apparatus of claim 6, wherein the processing module comprises:
an acquiring unit, configured to acquire a target sound effect type corresponding to a target control when a long-press operation on the target control among the plurality of sound effect controls is detected;
a processing unit, configured to perform sound effect processing on the audio frames after the start frame based on the target sound effect type;
a determining unit, configured to determine an end frame corresponding to the end time of the long-press operation when the long-press operation on the target control is detected to end;
the determining unit is further configured to determine the target audio segment based on the start frame and the end frame.
9. The apparatus of claim 8, wherein the processing unit is configured to:
performing pitch-shifting algorithm processing on the audio frames after the start frame based on the target sound effect type to obtain a plurality of target audio frames, wherein the pitch information of the target audio frames matches that of the target sound effect type.
10. The apparatus of claim 9, wherein the determining unit is further configured to:
merging the plurality of target audio frames between the start frame and the end frame into the target audio segment.
11. A terminal, characterized in that the terminal comprises a processor and a memory, wherein the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the operations performed by the data processing method according to any one of claims 1 to 5.
12. A computer-readable storage medium, characterized in that at least one instruction is stored in the storage medium, and the instruction is loaded and executed by a processor to implement the operations performed by the data processing method according to any one of claims 1 to 5.
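Claims 3 through 5 together describe a simple pipeline: derive the start frame from the moment the long-press begins, pitch-shift every audio frame until the long-press ends, then merge the shifted frames into the target audio segment. A minimal sketch of that pipeline follows; the frame size, sample rate, and helper names are assumptions, and a naive resampling pitch shifter stands in for the claimed pitch-shifting algorithm, which the patent does not disclose:

```python
import numpy as np

FRAME_SIZE = 1024      # samples per audio frame (assumed)
SAMPLE_RATE = 44100    # Hz (assumed)

def pitch_shift_frame(frame: np.ndarray, semitones: float) -> np.ndarray:
    """Naive pitch shift by resampling within one frame; a stand-in for
    the claimed pitch-shifting algorithm, which is not specified."""
    ratio = 2.0 ** (semitones / 12.0)
    src = np.arange(len(frame)) * ratio
    shifted = np.interp(src, np.arange(len(frame)), frame, right=0.0)
    return shifted.astype(frame.dtype)

def time_to_frame(t_seconds: float) -> int:
    """Map a timestamp on the timeline to an audio frame index."""
    return int(t_seconds * SAMPLE_RATE) // FRAME_SIZE

def apply_effect(frames, press_start, press_end, semitones):
    """Pitch-shift the frames between the start frame (long-press start)
    and the end frame (long-press end), then merge the resulting target
    audio frames into the target audio segment (claims 3-5)."""
    start_f = time_to_frame(press_start)
    end_f = time_to_frame(press_end)
    target_frames = [pitch_shift_frame(f, semitones)
                     for f in frames[start_f:end_f]]
    # Claim 5: merge the target audio frames into the target segment.
    return np.concatenate(target_frames) if target_frames else np.empty(0)

# Usage: half a second of long-press over silent frames.
audio = [np.zeros(FRAME_SIZE, dtype=np.float32) for _ in range(100)]
segment = apply_effect(audio, press_start=0.5, press_end=1.0, semitones=4.0)
```

In a real implementation the per-frame resampling would be replaced by a time-domain or phase-vocoder pitch shifter that preserves duration, and the processed frames would also be played back in real time as claim 1 requires.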
CN201811185453.0A 2018-10-11 2018-10-11 Data processing method, device, terminal and storage medium Active CN109346111B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811185453.0A CN109346111B (en) 2018-10-11 2018-10-11 Data processing method, device, terminal and storage medium


Publications (2)

Publication Number Publication Date
CN109346111A CN109346111A (en) 2019-02-15
CN109346111B true CN109346111B (en) 2020-09-04

Family

ID=65309681

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811185453.0A Active CN109346111B (en) 2018-10-11 2018-10-11 Data processing method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN109346111B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110415723B (en) * 2019-07-30 2021-12-03 广州酷狗计算机科技有限公司 Method, device, server and computer readable storage medium for audio segmentation
CN111757013B (en) * 2020-07-23 2022-04-29 北京字节跳动网络技术有限公司 Video processing method, device, equipment and storage medium
CN112165647B (en) * 2020-08-26 2022-06-17 北京字节跳动网络技术有限公司 Audio data processing method, device, equipment and storage medium
CN112133267B (en) * 2020-09-04 2024-02-13 腾讯音乐娱乐科技(深圳)有限公司 Audio effect processing method, device and storage medium
CN112507147B (en) * 2020-11-30 2024-08-27 广州酷狗计算机科技有限公司 Text data display method, device, equipment and storage medium
CN114449339B (en) * 2022-02-16 2024-04-12 深圳万兴软件有限公司 Background sound effect conversion method and device, computer equipment and storage medium
CN115119110A (en) * 2022-07-29 2022-09-27 歌尔科技有限公司 Sound effect adjusting method, audio playing device and computer readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104735528A (en) * 2015-03-02 2015-06-24 青岛海信电器股份有限公司 Sound effect matching method and device
CN106970771A (en) * 2016-01-14 2017-07-21 腾讯科技(深圳)有限公司 Audio data processing method and device
CN107135419A (en) * 2017-06-14 2017-09-05 北京奇虎科技有限公司 A kind of method and apparatus for editing video
CN107360458A (en) * 2017-06-30 2017-11-17 广东欧珀移动通信有限公司 Control method for playing back, device, storage medium and terminal
CN107978321A (en) * 2017-11-29 2018-05-01 广州酷狗计算机科技有限公司 Audio-frequency processing method and device
CN108012090A (en) * 2017-10-25 2018-05-08 北京川上科技有限公司 A kind of method for processing video frequency, device, mobile terminal and storage medium



Similar Documents

Publication Publication Date Title
CN109346111B (en) Data processing method, device, terminal and storage medium
CN109033335B (en) Audio recording method, device, terminal and storage medium
CN108965922B (en) Video cover generation method and device and storage medium
CN111065001B (en) Video production method, device, equipment and storage medium
CN110933330A (en) Video dubbing method and device, computer equipment and computer-readable storage medium
CN110688082B (en) Method, device, equipment and storage medium for determining adjustment proportion information of volume
CN108965757B (en) Video recording method, device, terminal and storage medium
CN111061405B (en) Method, device and equipment for recording song audio and storage medium
CN109192218B (en) Method and apparatus for audio processing
CN111083526B (en) Video transition method and device, computer equipment and storage medium
CN108922506A (en) Song audio generation method, device and computer readable storage medium
CN108831424B (en) Audio splicing method and device and storage medium
CN110798327B (en) Message processing method, device and storage medium
CN110769313A (en) Video processing method and device and storage medium
CN111031394B (en) Video production method, device, equipment and storage medium
CN111402844B (en) Song chorus method, device and system
CN111276122A (en) Audio generation method and device and storage medium
CN114245218A (en) Audio and video playing method and device, computer equipment and storage medium
CN111081277B (en) Audio evaluation method, device, equipment and storage medium
CN109819314B (en) Audio and video processing method and device, terminal and storage medium
CN111092991A (en) Lyric display method and device and computer storage medium
CN113963707A (en) Audio processing method, device, equipment and storage medium
CN112086102B (en) Method, apparatus, device and storage medium for expanding audio frequency band
CN110136752B (en) Audio processing method, device, terminal and computer readable storage medium
CN112118482A (en) Audio file playing method and device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant