CN110289024B - Audio editing method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN110289024B (application CN201910562867.9A)
- Authority
- CN
- China
- Prior art keywords
- editing
- audio
- button
- display state
- control
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
Abstract
Embodiments of the present disclosure provide an audio editing method and apparatus, an electronic device, and a storage medium. The method includes: receiving an editing instruction through an editing control; editing the audio to be processed based on the editing instruction; and, when the editing processing is completed, playing the edited audio while simultaneously changing the display state of the editing control, so that the display state of the editing control remains consistent with the editing state of the audio to be processed. The technical solution of the embodiments of the present disclosure keeps the display state of the audio editing control consistent with the editing state of the audio, thereby improving the user experience.
Description
Technical Field
The embodiment of the disclosure relates to the technical field of audio processing, and in particular, to an audio editing method and apparatus, an electronic device, and a storage medium.
Background
The rapid development of the Internet has gradually changed how people live, and the demand for cultural life keeps growing. Singing has become one of people's favorite entertainment activities; in particular, the popularization of various karaoke software products allows more and more people to sing, or to record their own singing, anytime and anywhere.
Karaoke software synthesizes the user's singing voice with the accompaniment provided by the software into a recorded product. When a user records song audio with karaoke software, the recorded audio can be edited through the editing controls the software provides, for example to adjust the volume of the song audio, turn the noise reduction function on or off, select a reverberation type, adjust the accompaniment volume, adjust the accompaniment offset, or apply vibrato, fade-in, or fade-out.
Current karaoke software implements audio editing as follows: when the user selects an editing function through an editing control, the display state of that control takes effect immediately at the display layer; that is, the user immediately sees the control's display state change. For example, when the user presses the editing control that turns on the noise reduction function, the control is immediately shown as pressed; or, when the user adjusts the audio volume by dragging the slider on the volume slide rail, the slider's position changes immediately. However, when the user selects an editing function, errors or delays may occur while the software performs the corresponding editing processing on the audio to be processed, so the user may see the editing control shown as selected before hearing the edited audio, resulting in a poor user experience.
Disclosure of Invention
Embodiments of the present disclosure provide an audio editing method and apparatus, an electronic device, and a storage medium, so as to keep the display state of an audio editing control consistent with the editing state of the audio, thereby improving the user experience.
In a first aspect, an embodiment of the present disclosure provides an audio editing method, where the method includes:
receiving an editing instruction through an editing control;
editing the audio to be processed based on the editing instruction;
and when the editing processing is completed, playing the edited audio while simultaneously changing the display state of the editing control, so that the display state of the editing control remains consistent with the editing state of the audio to be processed.
In a second aspect, an embodiment of the present disclosure further provides an audio editing apparatus, including:
the receiving module is used for receiving an editing instruction through the editing control;
the editing module is used for editing the audio to be processed based on the editing instruction;
and the response module is configured to play the edited audio when the editing processing is completed, while simultaneously changing the display state of the editing control so that the display state of the editing control remains consistent with the editing state of the audio to be processed.
In a third aspect, an embodiment of the present disclosure further provides an apparatus, where the apparatus includes:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the audio editing method according to any embodiment of the present disclosure.
In a fourth aspect, the disclosed embodiments further provide a storage medium containing computer-executable instructions which, when executed by a computer processor, perform the audio editing method according to any one of the disclosed embodiments.
According to the technical solution of the embodiments of the present disclosure, when an editing instruction is received through an editing control, the audio to be processed is edited based on that instruction. When the editing processing is completed, the edited audio is played and, at the same time, the display state of the editing control is changed, so that the display state of the editing control remains consistent with the editing state of the audio to be processed. This keeps the display state of the audio editing control consistent with the editing state of the audio and thereby improves the user experience.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flowchart illustrating an audio editing method according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of an audio editing method according to a second embodiment of the disclosure;
fig. 3 is a schematic structural diagram of an audio editing apparatus according to a third embodiment of the disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are illustrative rather than limiting; those skilled in the art will understand that they mean "one or more" unless the context clearly indicates otherwise.
Example one
Fig. 1 is a schematic flowchart of an audio editing method according to an embodiment of the present disclosure, which is applicable to a situation where a user triggers an audio editing operation based on a track editing panel. The method may be performed by an audio editing apparatus, which may be implemented in the form of software and/or hardware. The device is typically integrated in a terminal, such as a smartphone or the like. As shown in fig. 1, the method includes:
and step 110, receiving an editing instruction through the editing control.
Here, an editing control refers to a specific editing button, for example a volume adjustment button, a reverberation type selection button, a button for turning the noise reduction function on or off, an accompaniment offset button, an accompaniment volume adjustment button, a fade-in button, a fade-out button, or a vibrato button.
The editing controls are typically arranged on a track editing panel to facilitate user operation. For example, when the user wants to increase the volume of the audio, the user drags the volume adjustment button in the direction of increasing volume. When the system receives an editing instruction triggered by the user, it performs the corresponding editing processing on the audio to be processed according to the specific instruction. For example, a user triggers a noise reduction instruction through the noise-reduction-on button on the track editing panel, and when the system receives that instruction, it performs noise reduction on the audio to be processed. The audio to be processed may be a target audio selected by the user, or audio stored at a preset location. For example, when a user records their own singing through karaoke software, the recording is automatically stored at the software's default storage location once recording is complete; if an editing instruction is then received, the just-recorded singing is the audio to be processed. It can be understood that before a specific editing operation is performed, the audio to be processed must first be determined; once the user has determined it, the system displays its attribute information, such as its name, the recording time, or the name of the recorded song, so that the user can confirm in real time that the current audio to be processed is the one they selected.
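The reception of an editing instruction through an editing control can be sketched as follows. This is a minimal illustration, not the patent's implementation; the names `EditInstruction` and `on_control_event` are assumptions of this sketch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EditInstruction:
    """Hypothetical instruction emitted when the user operates an edit control."""
    control: str           # which control fired, e.g. "volume_slider" or "noise_reduction"
    value: float = 1.0     # payload, e.g. the target volume for a slider drag

def on_control_event(control: str, value: float = 1.0) -> EditInstruction:
    """Package a user gesture on an edit control into an editing instruction."""
    return EditInstruction(control=control, value=value)

# A user dragging the volume slider to 50 would produce:
instruction = on_control_event("volume_slider", 50.0)
```

The instruction object carries everything the downstream processing step needs, so the display layer does not have to be touched at reception time.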
And 120, editing the audio to be processed based on the editing instruction.
Specifically, a matched editing processing program is called according to the editing instruction, and the audio to be processed is edited through that editing processing program.
At present there are many audio editing algorithms covering a wide range of functions, and this embodiment does not limit the specific editing algorithm as long as the editing function can be realized. For example, when the editing instruction is an instruction to amplify the audio volume, the volume amplification processing program implements the amplification by increasing the gain of the audio. When the editing instruction is a noise-reduction instruction, the noise reduction processing program may first determine the noise frequency in the audio and then remove the noise with a filter tuned to that frequency, thereby realizing noise reduction of the audio to be processed.
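The dispatch from an editing instruction to a matched processing program, and the gain-based volume amplification described above, can be sketched as follows. This is a sketch under assumed names (`PROCESSORS`, `edit_audio`); the noise-reduction branch is only a stub, since a real program would apply a filter tuned to the detected noise frequency.

```python
def amplify(samples: list[float], gain: float = 2.0) -> list[float]:
    """Volume amplification: scale each sample by a gain factor, clipped to [-1, 1]."""
    return [max(-1.0, min(1.0, s * gain)) for s in samples]

def denoise(samples: list[float]) -> list[float]:
    """Stub for noise reduction; a real program would filter out the noise frequency."""
    return list(samples)

# Each editing instruction is matched to its editing processing program.
PROCESSORS = {"volume": amplify, "noise_reduction": denoise}

def edit_audio(instruction: str, samples: list[float], **params) -> list[float]:
    """Call the editing processing program matched to the instruction."""
    return PROCESSORS[instruction](samples, **params)
```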
And step 130, when the editing processing is completed, playing the edited audio while simultaneously changing the display state of the editing control, so that the display state of the editing control remains consistent with the editing state of the audio to be processed.
It can be understood that the logic for editing the audio is more complex than the logic for changing the display state of the editing control, so the editing takes longer. If the editing operation and the display-state change were triggered at the same time, the display-state change would inevitably complete before the editing operation, so that when the user sees the editing control in the state corresponding to the edited function, the edited audio cannot yet be heard. To address this, the technical solution of this embodiment, upon receiving an audio editing instruction, first triggers the editing processing of the corresponding function on the audio to be processed; only when that processing completes is the edited audio played and, at the same time, the display state of the editing control changed, so that the display state of the control is consistent with the editing state of the audio. The user therefore hears the audio with the corresponding edit applied at the moment the control's display state becomes active: for example, when the user sees the "vibrato" button pressed, the user hears the vibrato-processed audio. The operation flow of this embodiment is thus: trigger the audio editing instruction -> the editing processing program edits the audio to be processed -> change the display state of the editing control and play the edited audio.
For example, suppose the volume is controlled through a slide rail with a range of 0-100. When the user drags the volume slider to 50 and stops, the system calls the volume adjustment processing program to adjust the gain of the audio to be processed, and only after the gain adjustment is finished is the slider's displayed position changed to the position corresponding to volume 50. During the period in which the gain adjustment is incomplete, if the user's finger leaves the volume slider, the slider returns to its initial position; only once the volume of the audio has actually been adjusted is the slider displayed at the corresponding position.
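The ordering described above, editing first and then updating the slider display and starting playback only on completion, can be sketched as follows. The callbacks `set_slider_position` and `play` stand in for the real display layer and audio player and are assumptions of this sketch.

```python
def apply_volume_edit(samples, gain, set_slider_position, play):
    """Adjust the gain first; change the display state and play only afterwards."""
    edited = [s * gain for s in samples]   # the (possibly slow) editing processing
    set_slider_position(gain)              # display state changes only on completion...
    play(edited)                           # ...at the same moment playback starts
    return edited
```

If the processing step raises an error, neither callback runs, so the slider never shows a volume the user cannot actually hear.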
According to the technical solution of this embodiment, when an audio editing instruction triggered through the editing control is received, the corresponding editing processing is first performed on the audio to be processed; when the processing is completed, the display state of the editing control is changed and the edited audio is played. The user therefore hears the edited audio at the same moment the control's display state changes, which keeps the display state of the audio editing control consistent with the editing state of the audio and improves the user experience.
Example two
Fig. 2 is a schematic flowchart of an audio editing method according to a second embodiment of the disclosure. On the basis of the above embodiment, this embodiment elaborates step 130, "when the editing processing is completed, playing the edited audio while simultaneously changing the display state of the editing control so that the display state of the editing control remains consistent with the editing state of the audio to be processed". As shown in fig. 2, the method includes:
and step 210, receiving an editing instruction through an editing control.
And step 220, editing the audio to be processed based on the editing instruction.
And step 230, monitoring the target field matched with the editing process.
Here, the target field is an identifier indicating the completion state of the editing processing; when the editing processing completes, its completion is signaled by changing the state of the target field. For example, when the editing operation starts, the value of the target field is 0; when the operation completes, the value changes from 0 to 1, and when the monitor observes the value 1, it determines that the current editing processing is complete.
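The target field can be sketched as a completion flag that the editing worker flips and a monitor waits on. Here `threading.Event` stands in for the 0/1 target field; all names are assumptions of this sketch, not the patent's implementation.

```python
import threading

done = threading.Event()   # the "target field": unset (0) while editing, set (1) when complete
result: list[float] = []

def editing_worker(samples: list[float], gain: float) -> None:
    """Perform the (possibly slow) edit, then flip the target field from 0 to 1."""
    result.extend(s * gain for s in samples)
    done.set()

def monitor_target_field(on_complete, timeout: float = 5.0) -> bool:
    """Block until the target field changes state, then trigger the reaction."""
    if done.wait(timeout):
        on_complete()
        return True
    return False
```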
And 240, when the state of the target field is monitored to be changed, playing the audio after the editing processing.
Specifically, the edited audio file may be added to an audio player for playing.
And 250, determining the target display state of the editing control based on the binding relationship between the state of the target field and the display state of the editing control.
The binding relationship between the state of the target field and the display state of the editing control means, for example, that when the value of the target field is 0 the display state of the editing control is inactive, and when the value is 1 the display state is active. When the state of the target field is monitored, the target display state of the editing control bound to that field is determined from the field's state, and the control's display is updated accordingly. The display state of the editing control may also be a display color: for example, when the value of the target field is 0 the control is displayed in gray, and when the value is 1 it is displayed in green.
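The binding between the target field's value and the control's display state, including the color variant, can be sketched as a pair of lookup tables. The mapping values mirror the examples in the text; the function name is an assumption of this sketch.

```python
# Binding between the target field's value and the control's display state.
STATE_BINDING = {0: "inactive", 1: "active"}
COLOR_BINDING = {0: "gray", 1: "green"}

def target_display_state(field_value: int) -> tuple[str, str]:
    """Resolve the control's target display state (and color) from the bound field."""
    return STATE_BINDING[field_value], COLOR_BINDING[field_value]
```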
And step 260, changing the display state of the editing control into the target display state through a display processing program.
Wherein the display state of the editing control comprises a display color state. For example, if the noise reduction function is not turned on, the display color of the noise reduction function on button is gray, and when the noise reduction processing performed on the audio to be processed is completed, the display color of the noise reduction function on button is changed to green. Or, if the 'fade-in' command is not triggered, the display color of the 'fade-in' button is gray, and when the fade-in processing of the audio to be processed is completed, the display color of the 'fade-in' button is red.
According to the technical solution of this embodiment, the completion of the editing processing can be learned in a timely manner by monitoring the target field matched with it. When a change in the state of the target field is detected, the target display state of the editing control is determined from the binding relationship between the state of the target field and the display state of the control, the display state is changed to that target state, and the edited audio is played. This keeps the display state of the audio editing control consistent with the editing state of the audio and improves the user experience.
EXAMPLE III
Fig. 3 is a schematic structural diagram of an audio editing apparatus according to a third embodiment of the present disclosure, where the apparatus includes: a receiving module 310, an editing module 320, and a response module 330;
the receiving module 310 is configured to receive an editing instruction through an editing control; the editing module 320 is configured to edit the audio to be processed based on the editing instruction; the response module 330 is configured to play the audio after the editing process is completed, and change the display state of the editing control at the same time, so that the display state of the editing control is consistent with the editing state of the audio to be processed.
On the basis of the above technical solution, the editing module 320 includes:
the calling unit is used for calling the matched editing processing program according to the editing instruction;
and the editing unit is used for editing the audio to be processed through the editing processing program.
On the basis of the above technical solutions, the response module 330 includes:
the monitoring unit is used for monitoring the target field matched with the editing processing;
the playing unit is used for playing the audio after the editing processing when the state of the target field is monitored to be changed;
the determining unit is used for determining the target display state of the editing control based on the binding relationship between the state of the target field and the display state of the editing control;
and the changing unit is used for changing the display state of the editing control into the target display state through a display processing program.
On the basis of the above technical solutions, the editing control includes at least one of: a volume adjustment button, a reverberation type selection button, a button for turning the noise reduction function on or off, an accompaniment offset button, an accompaniment volume adjustment button, a fade-in button, a fade-out button, or a vibrato button.
On the basis of the above technical solutions, the display state of the editing control includes a display color state.
According to the technical solution of this embodiment, when an audio editing instruction triggered through the editing control is received, the corresponding editing processing is first performed on the audio to be processed; when the processing is completed, the display state of the editing control is changed and the edited audio is played. The user therefore hears the edited audio at the same moment the control's display state changes, which keeps the display state of the audio editing control consistent with the editing state of the audio and improves the user experience.
The audio editing device provided by the embodiment of the disclosure can execute the audio editing method provided by any embodiment of the disclosure, and has corresponding functional modules and beneficial effects of the execution method.
It should be noted that, the units and modules included in the apparatus are merely divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the embodiments of the present disclosure.
Example four
Referring now to fig. 4, a schematic diagram of an electronic device (e.g., the terminal device or the server of fig. 4) 400 suitable for implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, the electronic device 400 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic device 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 407 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; storage devices 408 including, for example, magnetic tape, hard disk, etc.; and communication devices 409. The communication devices 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or from the storage device 408, or from the ROM 402. When executed by the processing device 401, the computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure.
The terminal provided by the embodiment of the present disclosure and the audio editing method provided by the above embodiment belong to the same inventive concept, and technical details that are not described in detail in the embodiment of the present disclosure may be referred to the above embodiment, and the embodiment of the present disclosure and the above embodiment have the same beneficial effects.
EXAMPLE five
The disclosed embodiments provide a computer storage medium having stored thereon a computer program that, when executed by a processor, implements the audio editing method provided by the above-described embodiments.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
receiving an editing instruction through an editing control;
editing the audio to be processed based on the editing instruction;
and when the editing processing is finished, playing the edited audio and simultaneously changing the display state of the editing control, so that the display state of the editing control remains consistent with the editing state of the audio to be processed.
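The three steps above can be illustrated with a minimal, self-contained Python sketch. This is one possible reading of the claimed flow, not an implementation from the disclosure; the names (`AudioClip`, `EditControl`, `handle_edit_instruction`) and the fade-out edit are assumptions introduced for illustration.

```python
# Illustrative sketch of the claimed flow (names and the fade-out edit are
# assumptions, not taken from the disclosure): receive an edit instruction
# via a control, edit the audio, then "play" it while syncing the control's
# display state with the audio's editing state.
from dataclasses import dataclass


@dataclass
class EditControl:
    name: str
    display_state: str = "inactive"  # e.g. a display color state


@dataclass
class AudioClip:
    samples: list
    editing_state: str = "unedited"


def handle_edit_instruction(audio: AudioClip, control: EditControl, instruction: str):
    # Step 1: edit the audio to be processed based on the instruction.
    if instruction == "fade_out":
        n = len(audio.samples)
        audio.samples = [s * (n - i) / n for i, s in enumerate(audio.samples)]
        audio.editing_state = "fade_out"
    # Step 2: when editing completes, play the edited audio (stubbed as a copy) ...
    played = list(audio.samples)
    # Step 3: ... while changing the control's display state so it stays
    # consistent with the audio's editing state.
    control.display_state = "active" if audio.editing_state != "unedited" else "inactive"
    return played
```

The key point of the claim is step 3: the control's appearance is updated in the same handler that finishes the edit, so the UI cannot drift out of sync with the audio's actual editing state.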
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or any combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself; for example, an editable content display unit may also be described as an "editing unit".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, [ example one ] there is provided an audio editing method, the method comprising:
receiving an editing instruction through an editing control;
editing the audio to be processed based on the editing instruction;
and when the editing processing is finished, playing the edited audio and simultaneously changing the display state of the editing control, so that the display state of the editing control remains consistent with the editing state of the audio to be processed.
According to one or more embodiments of the present disclosure, [ example two ] there is provided an audio editing method, further comprising:
optionally, editing the audio to be processed based on the editing instruction includes:
calling a matched editing processing program according to the editing instruction;
and editing the audio to be processed through the editing processing program.
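One plausible way to "call a matched editing processing program" is a dispatch table keyed by the editing instruction. The sketch below is a hedged illustration of that pattern; the instruction names and editing routines are invented, not taken from the disclosure.

```python
# A plausible reading of "calling a matched editing processing program": a
# dispatch table keyed by the editing instruction. Instruction names and
# routines are invented for illustration.

def amplify(samples, gain=2.0):
    """Amplification: scale every sample by a gain factor."""
    return [s * gain for s in samples]


def fade_in(samples):
    """Fade-in: ramp the volume linearly up to full level."""
    n = len(samples)
    return [s * (i + 1) / n for i, s in enumerate(samples)]


# The "matched editing processing program" lookup table.
EDIT_PROGRAMS = {"amplify": amplify, "fade_in": fade_in}


def edit_audio(samples, instruction):
    program = EDIT_PROGRAMS[instruction]  # call the matched program
    return program(samples)
```

A table like this keeps each editing routine independent, so adding a new control (say, a vibrato button) only means registering one more entry.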
According to one or more embodiments of the present disclosure, [ example three ] there is provided an audio editing method, further comprising:
optionally, playing the edited audio when the editing processing is completed includes:
monitoring the target field matched with the editing processing;
and when the state of the target field is monitored to be changed, playing the audio after the editing processing.
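The monitoring step can be read as a simple observer: a target field is watched, and the edited audio is played only when the field's state actually changes. In this sketch the field name `noise_reduction_done` and the session class are hypothetical, chosen only to make the pattern concrete.

```python
# Observer-style sketch of the monitoring step: watch a target field and
# play the edited audio on a state change. The field name
# "noise_reduction_done" is a hypothetical example.

class EditSession:
    def __init__(self):
        self.fields = {"noise_reduction_done": False}
        self.played = []  # records each playback, for demonstration

    def set_field(self, name, value):
        old = self.fields[name]
        self.fields[name] = value
        if old != value:  # state change of the monitored target field
            self.on_field_changed(name)

    def on_field_changed(self, name):
        # Play the edited audio once the matched field signals completion.
        if name == "noise_reduction_done" and self.fields[name]:
            self.played.append("edited_audio")
```

Triggering playback off the field change, rather than off the button press, is what guarantees the audio heard by the user already reflects the finished edit.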
According to one or more embodiments of the present disclosure, [ example four ] there is provided an audio editing method, further comprising:
optionally, the changing the display state of the editing control includes:
determining a target display state of the editing control based on a binding relationship between the state of the target field and the display state of the editing control;
and changing the display state of the editing control into the target display state through a display processing program.
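The "binding relationship" of this example can be sketched as a plain mapping from a (target field, field state) pair to a target display state. The field and state names below are illustrative assumptions, not values from the disclosure.

```python
# The binding relationship as a lookup table: (target field, field state)
# -> target display state. Names are illustrative assumptions.

BINDING = {
    ("noise_reduction", True): "highlighted",  # e.g. a display color change
    ("noise_reduction", False): "normal",
}


def target_display_state(field_name, field_state):
    """Determine the target display state from the binding relationship."""
    return BINDING[(field_name, field_state)]


def update_control(control, field_name, field_state):
    """Change the control's display state to the target display state."""
    control["display_state"] = target_display_state(field_name, field_state)
    return control
```

Because the mapping is declarative, the display-processing step needs no knowledge of the editing logic; it only looks up the state it should show.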
According to one or more embodiments of the present disclosure, [ example five ] there is provided an audio editing method, further comprising:
optionally, the editing control includes at least one of: a volume adjustment button, a reverberation type selection button, a noise reduction on/off button, an accompaniment offset button, an accompaniment volume adjustment button, a fade-in button, a fade-out button, or a vibrato button.
According to one or more embodiments of the present disclosure, [ example six ] there is provided an audio editing method, further comprising:
optionally, the display state of the editing control includes a display color state.
According to one or more embodiments of the present disclosure, [ example seven ] there is provided an audio editing apparatus comprising:
the receiving module is configured to receive an editing instruction through an editing control;
the editing module is configured to edit the audio to be processed based on the editing instruction;
and the response module is configured to play the edited audio when the editing processing is finished, and simultaneously change the display state of the editing control so that the display state of the editing control remains consistent with the editing state of the audio to be processed.
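The three modules of the apparatus can be sketched as cooperating classes. The module split mirrors the description above, but the "mute" edit and every name are illustrative assumptions rather than the patented implementation.

```python
# Sketch of the claimed apparatus: receiving, editing, and response modules
# wired together by a device class. The "mute" edit and all names are
# illustrative assumptions.

class ReceivingModule:
    def receive(self, control_event):
        # Receive the editing instruction carried by the editing control.
        return control_event["instruction"]


class EditingModule:
    def edit(self, samples, instruction):
        # Edit the audio to be processed based on the instruction.
        if instruction == "mute":
            return [0.0 for _ in samples]
        return samples


class ResponseModule:
    def respond(self, control, edited):
        # Change the control's display state to match the editing state,
        # then "play" (here: return) the edited audio.
        control["display_state"] = "edited"
        return edited


class AudioEditingDevice:
    def __init__(self):
        self.receiver = ReceivingModule()
        self.editor = EditingModule()
        self.responder = ResponseModule()

    def handle(self, samples, control, event):
        instruction = self.receiver.receive(event)
        edited = self.editor.edit(samples, instruction)
        return self.responder.respond(control, edited)
```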
The foregoing description is merely an explanation of the preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, a technical solution formed by interchanging the above features with (but not limited to) features with similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Claims (9)
1. An audio editing method, comprising:
receiving an editing instruction through an editing control, wherein the editing control comprises at least one of: a volume adjustment button, a reverberation type selection button, a noise reduction on/off button, an accompaniment offset button, an accompaniment volume adjustment button, a fade-in button, a fade-out button, or a vibrato button;
editing the audio to be processed based on the editing instruction;
and when the editing processing is finished, playing the edited audio and simultaneously changing the display state of the editing control, so that the display state of the editing control remains consistent with the editing state of the audio to be processed.
2. The method of claim 1, wherein performing editing processing on the audio to be processed based on the editing instruction comprises:
calling a matched editing processing program according to the editing instruction;
and editing the audio to be processed through the editing processing program.
3. The method of claim 1, wherein playing the edited audio when the editing process is completed comprises:
monitoring the target field matched with the editing processing;
and when the state of the target field is monitored to be changed, playing the audio after the editing processing.
4. The method of claim 3, wherein changing the display state of the editing control comprises:
determining a target display state of the editing control based on a binding relationship between the state of the target field and the display state of the editing control;
and changing the display state of the editing control into the target display state through a display processing program.
5. The method of any of claims 1-4, wherein the display state of the editing control comprises a display color state.
6. An audio editing apparatus, comprising:
a receiving module, configured to receive an editing instruction through an editing control, wherein the editing control comprises at least one of: a volume adjustment button, a reverberation type selection button, a noise reduction on/off button, an accompaniment offset button, an accompaniment volume adjustment button, a fade-in button, a fade-out button, or a vibrato button;
an editing module, configured to edit the audio to be processed based on the editing instruction;
and a response module, configured to play the edited audio when the editing processing is finished, and simultaneously change the display state of the editing control so that the display state of the editing control remains consistent with the editing state of the audio to be processed.
7. The apparatus of claim 6, wherein the editing module comprises:
the calling unit is used for calling the matched editing processing program according to the editing instruction;
and the editing unit is used for editing the audio to be processed through the editing processing program.
8. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the audio editing method of any one of claims 1-5.
9. A storage medium containing computer executable instructions for performing the audio editing method of any one of claims 1-5 when executed by a computer processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910562867.9A CN110289024B (en) | 2019-06-26 | 2019-06-26 | Audio editing method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110289024A CN110289024A (en) | 2019-09-27 |
CN110289024B true CN110289024B (en) | 2021-03-02 |
Family
ID=68006258
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910562867.9A Active CN110289024B (en) | 2019-06-26 | 2019-06-26 | Audio editing method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110289024B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113721821A (en) * | 2021-09-04 | 2021-11-30 | 北京字节跳动网络技术有限公司 | Music playing method and equipment |
CN113891151A (en) * | 2021-09-28 | 2022-01-04 | 北京字跳网络技术有限公司 | Audio processing method and device, electronic equipment and storage medium |
CN116450256A (en) * | 2022-01-10 | 2023-07-18 | 北京字跳网络技术有限公司 | Editing method, device, equipment and storage medium for audio special effects |
CN117059066A (en) * | 2022-05-07 | 2023-11-14 | 北京字跳网络技术有限公司 | Audio processing method, device, equipment and storage medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2625816B2 (en) * | 1988-02-03 | 1997-07-02 | 松下電器産業株式会社 | Optical disc player |
JPH11185377A (en) * | 1997-12-17 | 1999-07-09 | Sony Corp | Editing device |
CN1741687A (en) * | 2004-08-25 | 2006-03-01 | 雅马哈株式会社 | Mixer controller |
CN1806289A (en) * | 2003-06-13 | 2006-07-19 | 索尼株式会社 | Editing device and method |
CN104424242A (en) * | 2013-08-27 | 2015-03-18 | 北大方正集团有限公司 | Multi-media file processing method and system |
CN107071641A (en) * | 2017-03-31 | 2017-08-18 | 李宗盛 | Electronic device and processing method for real-time editing of multiple tracks |
CN109584910A (en) * | 2017-09-29 | 2019-04-05 | 雅马哈株式会社 | Editing assistance method and editing assistance device for singing audio |
CN109739425A (en) * | 2018-04-19 | 2019-05-10 | 北京字节跳动网络技术有限公司 | Virtual keyboard, voice input method, device and electronic equipment |
CN109859776A (en) * | 2017-11-30 | 2019-06-07 | 阿里巴巴集团控股有限公司 | Voice editing method and device |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7957547B2 (en) * | 2006-06-09 | 2011-06-07 | Apple Inc. | Sound panner superimposed on a timeline |
US8225207B1 (en) * | 2007-09-14 | 2012-07-17 | Adobe Systems Incorporated | Compression threshold control |
CN104111720B (en) * | 2014-06-30 | 2017-11-14 | 小米科技有限责任公司 | Electronic device control method and device, and electronic device |
CN105824514A (en) * | 2016-03-29 | 2016-08-03 | 乐视控股(北京)有限公司 | Music source switching method and device |
CN109213400A (en) * | 2018-08-23 | 2019-01-15 | Oppo广东移动通信有限公司 | Processing method, device, terminal and the storage medium of multimedia object |
CN109683847A (en) * | 2018-12-20 | 2019-04-26 | 维沃移动通信有限公司 | Volume adjustment method and terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||