CN113674750B - Data processing method and device - Google Patents

Data processing method and device

Info

Publication number
CN113674750B
CN113674750B
Authority
CN
China
Prior art keywords
waveform diagram
audio
target
waveform
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110955803.2A
Other languages
Chinese (zh)
Other versions
CN113674750A (en)
Inventor
陈波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd filed Critical Shanghai Bilibili Technology Co Ltd
Priority to CN202110955803.2A priority Critical patent/CN113674750B/en
Publication of CN113674750A publication Critical patent/CN113674750A/en
Application granted granted Critical
Publication of CN113674750B publication Critical patent/CN113674750B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00: Details of electrophonic musical instruments
    • G10H1/0008: Associated control or indicating means
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/233: Processing of audio elementary streams
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439: Processing of audio elementary streams
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00: Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031: Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application provides a data processing method and device. The data processing method is applied to a browser and comprises the following steps: acquiring audio waveform information corresponding to audio to be processed, wherein the audio waveform information is used for generating a waveform diagram corresponding to the audio to be processed; determining target gear information of the waveform diagram corresponding to the audio to be processed in a preset gear information set, wherein the gear information is grade information set according to the width of the waveform diagram; and, in the case that a target waveform diagram corresponding to the target gear information is not found in a waveform diagram buffer area, generating and buffering the target waveform diagram according to the audio waveform information and the target gear information. In the data processing method provided by the application, the continuous zooming process of the waveform diagram is simulated by the gear information set, confining the possible renderings of the waveform diagram to a finite set, which keeps the waveform diagram smooth during zooming and preserves its clarity.

Description

Data processing method and device
Technical Field
The application relates to the technical field of computers, in particular to a data processing method. The application also relates to a data processing device, a computing device and a computer-readable storage medium.
Background
With the rapid development of computer technology, audio processing technology has also advanced. When audio is played, a waveform diagram of the audio is often drawn, and drawing the waveform diagram carries a significant performance and time cost. When the waveform diagram is continuously scaled, the audio needs to be continuously resampled and the diagram redrawn, which causes visual stuttering and a poor user experience; if instead only one waveform diagram is drawn and then scaled, the edge pixels of the waveform diagram become blurry, so the best visual effect cannot be obtained.
Disclosure of Invention
In view of this, the embodiments of the application provide a data processing method. The application also relates to a data processing apparatus, a computing device and a computer-readable storage medium, which are used to solve the problems of visual stuttering and poor user experience caused by continuously scaling a waveform diagram in the prior art.
According to a first aspect of an embodiment of the present application, there is provided a data processing method, applied to a browser, including:
acquiring audio waveform information corresponding to audio to be processed, wherein the audio waveform information is used for generating a waveform diagram corresponding to the audio to be processed;
determining target gear information of a waveform diagram corresponding to the audio to be processed in a preset gear information set, wherein the gear information is grade information set according to the width of the waveform diagram;
and, in the case that a target waveform diagram corresponding to the target gear information is not found in a waveform diagram buffer area, generating and buffering the target waveform diagram according to the audio waveform information and the target gear information.
According to a second aspect of an embodiment of the present application, there is provided a data processing apparatus, for application to a browser, including:
an acquisition module, configured to acquire audio waveform information corresponding to audio to be processed, wherein the audio waveform information is used for generating a waveform diagram corresponding to the audio to be processed;
a determining module, configured to determine target gear information of the waveform diagram corresponding to the audio to be processed in a preset gear information set, wherein the gear information is grade information set according to the width of the waveform diagram;
a generating module, configured to generate and buffer the target waveform diagram according to the audio waveform information and the target gear information in the case that a target waveform diagram corresponding to the target gear information is not found in a waveform diagram buffer area.
According to a third aspect of embodiments of the present application there is provided a computing device comprising a memory, a processor and computer instructions stored on the memory and executable on the processor, the processor implementing the steps of the data processing method when executing the computer instructions.
According to a fourth aspect of embodiments of the present application, there is provided a computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the data processing method.
The data processing method provided by the application is applied to a browser and comprises the following steps: acquiring audio waveform information corresponding to audio to be processed, wherein the audio waveform information is used for generating a waveform diagram corresponding to the audio to be processed; determining target gear information of the waveform diagram corresponding to the audio to be processed in a preset gear information set, wherein the gear information is grade information set according to the width of the waveform diagram; and, in the case that a target waveform diagram corresponding to the target gear information is not found in a waveform diagram buffer area, generating and buffering the target waveform diagram according to the audio waveform information and the target gear information. In the data processing method provided by the embodiments of the application, the continuous zooming process of the waveform diagram is simulated by the gear information set, which confines the possible renderings of the waveform diagram to a finite set, keeps the waveform diagram smooth during zooming, and preserves its clarity.
Further, the generated waveform diagram is cached in the waveform diagram cache area, so that when the corresponding gear is selected again it does not need to be redrawn; this reduces the drawing cost of generating waveform diagrams, makes switching between waveform diagrams continuous and smooth, and improves the user experience.
Drawings
FIG. 1 is a flow chart of a data processing method according to an embodiment of the present application;
fig. 2 is a process flow diagram of a data processing method applied to an audio playing scene according to an embodiment of the present application;
FIG. 3a is a schematic diagram of an initial waveform diagram provided by an embodiment of the present application;
FIG. 3b is a schematic diagram of a target waveform diagram according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a data processing apparatus according to an embodiment of the present application;
FIG. 5 is a block diagram of a computing device according to one embodiment of the application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The present application may be embodied in many other forms than those herein described, and those skilled in the art will readily appreciate that the present application may be similarly embodied without departing from the spirit or essential characteristics thereof, and therefore the present application is not limited to the specific embodiments disclosed below.
The terminology used in the one or more embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the application. As used in one or more embodiments of the application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used in one or more embodiments of the application to describe various information, these information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first may also be referred to as a second, and similarly, a second may also be referred to as a first, without departing from the scope of one or more embodiments of the application. The word "if" as used herein may be interpreted as "when", "upon", or "in response to a determination", depending on the context.
First, terms related to one or more embodiments of the present application will be explained.
Audio waveform diagram scaling: giving the audio waveform diagram a visually continuous zoom-in or zoom-out effect.
Audio data parsing: unpacking the audio file and decoding it to obtain the audio source data.
Audio data resampling: the audio data, which is already a digital signal, is resampled at a lower frequency to reduce its size, providing the underlying data for drawing the waveform diagram.
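By way of illustration only, the following sketch shows one way such resampling could be performed in the browser, assuming the audio waveform information has already been decoded into a Float32Array of sample amplitudes; the function name, the parameter names and the choice of min/max bucketing are assumptions made for this sketch and are not prescribed by the application.

```typescript
// Minimal sketch (assumed helper, not part of the patent): reduce a large
// array of audio samples to `targetPointCount` min/max pairs, one pair per
// pixel column that will later be drawn.
interface WaveformPoint {
  min: number;
  max: number;
}

function resampleWaveform(samples: Float32Array, targetPointCount: number): WaveformPoint[] {
  const points: WaveformPoint[] = [];
  const bucketSize = samples.length / targetPointCount;
  for (let i = 0; i < targetPointCount; i++) {
    const start = Math.floor(i * bucketSize);
    const end = Math.min(samples.length, Math.floor((i + 1) * bucketSize));
    let min = 0;
    let max = 0;
    for (let j = start; j < end; j++) {
      if (samples[j] < min) min = samples[j];
      if (samples[j] > max) max = samples[j];
    }
    points.push({ min, max });
  }
  return points;
}
```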
In the conventional method for drawing the waveform diagram corresponding to audio in a browser, because the scaling action on the waveform diagram is continuous, the waveform diagram has to be redrawn for each scaling parameter, and drawing a waveform diagram consumes considerable performance and time.
Based on this, the present application provides a data processing method, and also relates to a data processing apparatus, a computing device, and a computer-readable storage medium, which are described in detail one by one in the following embodiments.
Fig. 1 shows a flowchart of a data processing method according to an embodiment of the present application, where the method is applied to a browser, and specifically includes the following steps:
step 102: and acquiring audio waveform information corresponding to the audio to be processed, wherein the audio waveform information is used for generating a waveform diagram corresponding to the audio to be processed.
The audio to be processed is the audio for generating the waveform diagram, in practical application, the audio to be processed includes but is not limited to songs, recordings, audio files extracted from videos and the like, and meanwhile, the format of the audio to be processed includes but is not limited to mp3, cd, mpeg, wma, flac and the like.
The audio waveform information is binary data of the audio to be processed used for generating a waveform diagram; it can be obtained by parsing the audio to be processed.
In general, to obtain a waveform diagram, the audio to be processed first needs to be parsed to obtain its audio waveform information. This parsing can be completed either at the terminal where the browser runs or at a server. However, parsing the audio to be processed occupies considerable resources, and decoding the various audio formats in the browser raises compatibility problems, so to improve processing efficiency the audio to be processed can be sent to a server and parsed there. Preferably, therefore, the process of obtaining audio waveform information corresponding to the audio to be processed includes:
receiving audio to be processed uploaded by a user;
forwarding the audio to be processed to a server so that the server transcodes the audio to be processed to generate audio waveform information;
downloading the audio waveform information from the server.
In practical application, a user opens the browser on a terminal, browses the audio files stored locally on the terminal through the browser, and selects the audio to be processed for uploading; the browser then receives the audio to be processed uploaded by the user.
After receiving the audio to be processed, the browser uploads it to a server. On the server side, the audio to be processed is parsed with audio/video libraries such as FFmpeg, and the binary audio waveform information is extracted from it; a relatively low sampling frequency is usually adopted at this point to compress the size of the audio waveform information, which also speeds up later parsing in the browser. After the server finishes processing the audio to be processed, it generates the audio waveform information and stores it at a preset storage location, so that the browser can access and download it later.
After the server has processed the audio to be processed, the browser downloads the corresponding audio waveform information from the preset storage location and stores it in the memory of the current terminal.
In a specific embodiment of the present application, taking the waveform diagram of "red rose accompaniment mp3" as an example: the user uploads audio A to the browser, the browser uploads audio A to the server, the server transcodes audio A, extracts the corresponding audio waveform information IA, stores IA on a static file server, and returns the storage address to the browser in the form of a URL; the browser then downloads the audio waveform information from that URL.
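A minimal browser-side sketch of this exchange is given below. The endpoint path, the JSON field carrying the storage URL, and the function name are illustrative assumptions and are not specified by the application; only the overall upload, transcode and download flow follows the description above.

```typescript
// Sketch only: upload the audio to be processed and download the binary
// audio waveform information produced by the server-side transcoding.
async function fetchAudioWaveformInfo(audioFile: File): Promise<ArrayBuffer> {
  // 1. Forward the audio to be processed to the server (assumed endpoint).
  const form = new FormData();
  form.append("audio", audioFile);
  const uploadResp = await fetch("/api/waveform", { method: "POST", body: form });
  if (!uploadResp.ok) throw new Error(`upload failed: ${uploadResp.status}`);

  // 2. The server is assumed to answer with the URL of the generated waveform
  //    information (e.g. a file on a static file server).
  const { waveformUrl } = (await uploadResp.json()) as { waveformUrl: string };

  // 3. Download the binary audio waveform information and keep it in memory.
  const infoResp = await fetch(waveformUrl);
  return infoResp.arrayBuffer();
}
```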
Step 104: and determining target gear information of a waveform diagram corresponding to the audio to be processed in a preset gear information set, wherein the gear information is grade information set according to the width of the waveform diagram.
The preset gear information set contains a preset number of gears. Gear information in the present application refers to grade information of the waveform diagram, set according to the different waveform diagram widths that occur during zooming; that is, different waveform diagram widths correspond to different gears. For example, a larger gear may correspond to a larger waveform diagram width, or, conversely, a smaller gear may correspond to a larger waveform diagram width.
Specifically, in the process of scaling the waveform diagram, the scaling can be understood as a change in the time interval represented by one unit of display size (one pixel, PX). Let the time interval represented by one PX be y: the larger y is, the narrower the waveform diagram and the less detail it shows; the smaller y is, the wider the waveform diagram and the more detail it shows.
For example, if the display screen is 100 PX wide and the audio to be processed lasts 200 seconds, then when the waveform diagram of the audio to be processed fills the display screen, each PX represents y = 2 seconds; when the waveform diagram is reduced to half the screen width, i.e. occupies only 50 PX, each PX represents y = 4 seconds.
In the present application, the process of scaling the waveform diagram is split into a number of discrete gears, each corresponding to a range of waveform diagram widths; that is, the truly continuous change is replaced by choosing a value of y. Using discrete gears instead of continuous scaling means that the set of waveform diagrams to be drawn changes from an endless, unpredictable range into a finite set determined by the gear information.
The total number of gears in the gear information set is preset. For example, the range over which the width of the waveform diagram can change may be divided into 15 gears, i.e. 15 values of y. Let the width of the waveform diagram be w: the minimum y corresponds to the widest waveform diagram and the maximum y to the narrowest, so y changing from large to small corresponds to w changing from small to large.
In a specific embodiment provided by the application, continuing the above example, suppose the time interval y represented by one unit of size ranges from 1 to 100, the width of the waveform diagram takes the values 10, 20, 30, ..., 100, and the waveform diagram is divided into 10 gears. In gear 1, y corresponds to 1-10 and w corresponds to 100; in gear 2, y corresponds to 11-20 and w corresponds to 90; and so on, until in gear 10, y corresponds to 91-100 and w corresponds to 10.
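Such a division can be represented directly as a width gear comparison table. The sketch below encodes the 10-gear example just described; the data-structure layout and the name GEAR_INFO_SET are assumptions made for illustration.

```typescript
// Sketch of the 10-gear example above: each gear covers a range of y values
// (seconds per PX) and maps to one fixed waveform diagram width w.
interface GearInfo {
  gear: number;   // gear number, 1..10
  minY: number;   // smallest y (seconds per PX) covered by this gear
  maxY: number;   // largest y covered by this gear
  width: number;  // waveform diagram width w drawn for this gear
}

const GEAR_INFO_SET: GearInfo[] = Array.from({ length: 10 }, (_, i) => ({
  gear: i + 1,
  minY: i * 10 + 1,    // gear 1 covers y = 1..10, gear 2 covers y = 11..20, ...
  maxY: (i + 1) * 10,
  width: 100 - i * 10, // gear 1 draws w = 100, gear 2 draws w = 90, ..., gear 10 draws w = 10
}));
```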
The target gear information specifically refers to the gear information corresponding to the current waveform diagram; the waveform diagram of the audio to be processed is determined according to the target gear information. In practical application, the target gear information is determined in two cases: one is the initial gear information, and the other is the adjusted gear information after an adjustment.
Specifically, when the waveform diagram of the audio to be processed is generated for the first time, determining target gear information of the waveform diagram corresponding to the audio to be processed in a preset gear information set includes:
and determining the initial gear information as target gear information of the waveform diagram corresponding to the audio to be processed in a preset gear information set.
The initial gear information refers to the default gear used when the waveform diagram of the audio to be processed is generated for the first time. In general, an appropriate initial gear is set for the waveform diagram of the audio to be processed according to the width of the screen, so that the waveform diagram exactly fits within the screen.
The initial gear information may be preset fixed gear information. For example, if the gear information set has 12 gears, the initial gear information may be set to gear 6; that is, when the waveform diagram is generated for the first time, it is generated according to the gear information of gear 6.
In practical application, the waveform diagram may be further adjusted, in which case the adjusted gear information is determined. Specifically, determining target gear information of the waveform diagram corresponding to the audio to be processed in a preset gear information set includes:
receiving a waveform diagram adjustment instruction aiming at a waveform diagram corresponding to the audio to be processed;
and determining target gear information in a preset gear information set according to the waveform diagram adjustment instruction.
The waveform diagram adjustment instruction is an adjustment instruction for zooming the waveform diagram. During zooming, the waveform diagram width finally determined by the waveform diagram adjustment instruction is used to determine the target gear information; for example, if adjusting the waveform diagram moves the y value from the initial gear 5 to gear 8, the target gear information is gear 8.
Specifically, determining target gear information in a preset gear information set according to the waveform diagram adjustment instruction includes:
acquiring a waveform diagram target width carried in the waveform diagram adjustment instruction;
and determining target gear information corresponding to the waveform diagram target width according to a preset width gear comparison table.
In the process of adjusting the waveform diagram according to the waveform diagram adjustment instruction, the final target width of the waveform diagram is determined, and the target gear information corresponding to the current target width is looked up in the width gear comparison table of the waveform diagram. For example, if the target width of the waveform diagram is 50 and the corresponding gear is gear 5, then gear 5 is the target gear information.
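A possible lookup over such a width gear comparison table is sketched below; it reuses the GEAR_INFO_SET structure assumed earlier, and the "closest configured width" rule is an assumption rather than something prescribed by the application.

```typescript
// Sketch: find the target gear information for the waveform diagram target
// width carried in the waveform diagram adjustment instruction.
function widthToGear(targetWidth: number): GearInfo {
  // Pick the gear whose configured width is closest to the requested width.
  let best = GEAR_INFO_SET[0];
  for (const gear of GEAR_INFO_SET) {
    if (Math.abs(gear.width - targetWidth) < Math.abs(best.width - targetWidth)) {
      best = gear;
    }
  }
  return best;
}

// Usage: a zoom gesture that settles on a target width of 50 resolves to the
// gear whose configured width is 50.
const targetGear = widthToGear(50);
```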
In an actual application, after determining the target gear information of the waveform diagram corresponding to the audio to be processed in the preset gear information set, the method further includes:
and inquiring whether a target waveform diagram corresponding to the target gear information exists in the waveform diagram buffer area.
After the target gear information of the waveform diagram corresponding to the audio to be processed is determined in the preset gear information set, the waveform diagram cache area needs to be queried to check whether a target waveform diagram corresponding to the target gear information has already been cached, i.e. whether the corresponding waveform diagram has already been generated for the target gear information.
In practical application, whenever the waveform diagram of the audio to be processed has been adjusted before and a corresponding waveform diagram has been generated, that waveform diagram and the corresponding target gear information need to be cached together in the waveform diagram cache area, which is used for caching waveform diagrams and target gear information. The waveform diagram cache area can hold each item of gear information and its corresponding waveform diagram, so that when the same target gear information is reached later, the waveform diagram can be obtained and used directly without rendering a duplicate. For example, suppose the preset gear information set contains 6 gears, numbered 1-6, the initial gear information is gear 3, and the corresponding waveform diagram is P3; gear 3 and waveform diagram P3 are cached in the waveform diagram cache area. If the waveform diagram is then reduced to gear 2 and the corresponding waveform diagram P2 is generated, gear 2 and waveform diagram P2 are cached in the waveform diagram cache area, and so on, until each gear and its corresponding waveform diagram are cached. When the waveform diagram is adjusted again, the corresponding waveform diagram can be obtained directly from the waveform diagram cache area according to the gear information, without repeated rendering.
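The waveform diagram cache area can be as simple as a map keyed by gear number, as in the hedged sketch below; the class and method names are illustrative assumptions, not the application's own data structure.

```typescript
// Sketch of a waveform diagram cache area: gear number -> rendered waveform diagram.
class WaveformCache {
  private cache = new Map<number, HTMLCanvasElement>();

  // Query whether a target waveform diagram exists for the target gear information.
  get(gear: number): HTMLCanvasElement | undefined {
    return this.cache.get(gear);
  }

  // Cache the target waveform diagram together with its gear information.
  put(gear: number, waveform: HTMLCanvasElement): void {
    this.cache.set(gear, waveform);
  }
}
```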
Based thereon, the method further comprises:
and under the condition that the waveform diagram buffer area inquires a target waveform diagram corresponding to the target gear information, acquiring the target waveform diagram.
If the target waveform diagram corresponding to the target gear information is searched in the waveform diagram cache area, the target waveform diagram can be directly obtained in the waveform diagram cache.
In a specific embodiment of the present application, taking the above example as an example, taking the waveform diagram of the audio a drawn for the first time as the 5 th gear, determining that the target gear information is 5 th gear, determining whether the waveform diagram buffer area has the waveform diagram corresponding to 5 th gear at this time, if so, directly obtaining, if not, performing the subsequent processing steps.
In another specific embodiment provided by the application, taking the example of receiving the waveform diagram adjustment instruction as an example, after adjustment, the width of the waveform diagram is narrowed, the corresponding y value is enlarged, the corresponding gear of the adjusted waveform diagram is determined to be 7 gears after the adjustment of the width gear comparison table, the 7 gears are the target gear information, at this time, whether the waveform diagram buffer area has the waveform diagram corresponding to the 7 gears is judged, if yes, the waveform diagram buffer area is directly obtained, and if not, the subsequent processing step is carried out.
Step 106: in the case that the target waveform diagram corresponding to the target gear information is not found in the waveform diagram cache area, generating and caching the target waveform diagram according to the audio waveform information and the target gear information.
As determined above, if the target waveform diagram is found in the waveform diagram cache area it can be obtained directly; if it is not found, the corresponding target waveform diagram needs to be generated. In practical application, the target waveform diagram is generated according to the audio waveform information and the target gear information.
It should be noted that after the target waveform diagram is generated, the target waveform diagram and the target gear information are cached together in the waveform diagram cache area, so that when this gear information is selected again later, the corresponding target waveform diagram can be obtained directly; this saves drawing overhead and keeps the visual presentation continuous and smooth.
Specifically, generating the target waveform diagram according to the audio waveform information and the target gear information includes:
determining a waveform diagram target width corresponding to the target gear information according to a preset width gear comparison table;
resampling the audio waveform information according to the waveform diagram target width to generate target waveform information;
and drawing and generating a target waveform diagram based on the target waveform information.
In practical application, the waveform diagram target width corresponding to the target gear information can be determined according to the preset width gear comparison table, and the audio waveform information is resampled based on that target width; for example, if the target width contains 100 unit sizes (PX), the audio waveform information is resampled so that exactly the points needed for those 100 PX are drawn, which avoids the problem of pixel blurring. Resampling yields the target waveform information, which is then used to draw the target waveform diagram: a canvas whose width equals the waveform diagram target width is created, and lines are drawn on the canvas according to the target waveform information, giving the target waveform diagram corresponding to the target gear information.
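Putting the assumed resampling helper from above together with canvas drawing gives the following hedged sketch of generating a target waveform diagram; the drawing style (one vertical min/max line per pixel column, fixed height) is an illustrative choice, not a requirement of the application.

```typescript
// Sketch: generate the target waveform diagram for one gear by resampling the
// audio waveform information to one point per PX of the target width and
// drawing one vertical line per pixel column on a canvas of that width.
function drawTargetWaveform(
  samples: Float32Array,  // decoded audio waveform information
  targetWidth: number,    // waveform diagram target width for the target gear
  height = 120            // illustrative fixed height in PX
): HTMLCanvasElement {
  const points = resampleWaveform(samples, targetWidth); // one min/max pair per PX
  const canvas = document.createElement("canvas");
  canvas.width = targetWidth;
  canvas.height = height;
  const ctx = canvas.getContext("2d")!;
  const mid = height / 2;
  ctx.beginPath();
  for (let x = 0; x < points.length; x++) {
    // Amplitudes are assumed to lie in [-1, 1]; scale them to the canvas height.
    ctx.moveTo(x + 0.5, mid - points[x].max * mid);
    ctx.lineTo(x + 0.5, mid - points[x].min * mid);
  }
  ctx.stroke();
  return canvas;
}
```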
In practical application, the method further comprises:
and displaying the target waveform diagram on an audio display interface of the browser.
In practical application, whether the target waveform diagram was newly drawn or obtained from the waveform diagram cache area, it needs to be displayed on the audio display interface of the browser and presented to the user, so that the user can intuitively see the change of the target waveform diagram, which improves the user experience.
The data processing method provided by the application is applied to a browser and comprises the following steps: acquiring audio waveform information corresponding to audio to be processed, wherein the audio waveform information is used for generating a waveform diagram corresponding to the audio to be processed; determining target gear information of the waveform diagram corresponding to the audio to be processed in a preset gear information set, wherein the gear information is grade information set according to the width of the waveform diagram; and, in the case that a target waveform diagram corresponding to the target gear information is not found in a waveform diagram buffer area, generating and buffering the target waveform diagram according to the audio waveform information and the target gear information. In the data processing method provided by the embodiments of the application, the continuous zooming process of the waveform diagram is simulated by the gear information set, which confines the possible renderings of the waveform diagram to a finite set, keeps the waveform diagram smooth during zooming, and preserves its clarity.
Further, the generated waveform diagram is cached in the waveform diagram cache area, so that when the corresponding gear is selected again it does not need to be redrawn; this reduces the drawing cost of generating waveform diagrams, makes switching between waveform diagrams continuous and smooth, and improves the user experience.
The application of the data processing method provided by the present application to an audio playing scene is taken as an example in the following description with reference to fig. 2. Fig. 2 shows a process flow chart of a data processing method applied to an audio playing scene according to an embodiment of the present application, which specifically includes the following steps:
step 202: forwarding the audio to be processed uploaded by the user to a server, so that the server transcodes the audio to be processed to generate audio waveform information.
In a specific embodiment of the present application, taking audio V as the audio to be processed: the browser receives audio V uploaded by the user and forwards it to server S; server S transcodes audio V, extracts the audio waveform data, compresses it, generates the binary audio waveform information Iv, and stores Iv on server S.
Step 204: downloading the audio waveform information from the server.
In one embodiment of the present application, following the above example, the audio waveform information Iv is downloaded from server S.
Step 206: and determining initial gear information in a preset gear information set.
In a specific embodiment of the present application, following the above example, the preset gear information set contains 10 gears, of which gear 5 is the initial gear, and the gear information I5 of gear 5 is obtained.
Step 208: generating an initial waveform diagram of the audio to be processed according to the audio waveform information and the initial gear information, and caching the initial waveform diagram and the initial gear information into a waveform diagram cache area.
In a specific embodiment of the present application, following the above example, the initial waveform diagram P5 of audio V is generated according to the audio waveform information Iv and the gear information I5 (fig. 3a shows a schematic diagram of the initial waveform diagram provided by an embodiment of the present application), and the initial waveform diagram P5 is cached in the waveform diagram cache area.
Step 210: and receiving a waveform diagram adjustment instruction for the waveform diagram corresponding to the audio to be processed.
In an embodiment of the present application, following the above example, the user's scaling operation on the waveform diagram is monitored; that is, a waveform diagram adjustment instruction issued by the user on waveform diagram P5 is received.
Step 212: and determining target gear information in a preset gear information set according to the waveform diagram adjustment instruction.
In a specific embodiment of the present application, following the above example, the waveform diagram adjustment instruction enlarges the waveform diagram and the corresponding gear changes to gear 4, so the gear information I4 of gear 4 is obtained.
Step 214: and inquiring whether a target waveform diagram corresponding to the target gear information exists in the waveform diagram buffer area, if so, executing step 216, and if not, executing step 218.
In one embodiment of the present application, following the above example, whether the waveform diagram P4 corresponding to the gear information I4 is cached in the waveform diagram cache area is queried; if so, step 216 is executed, and if not, step 218 is executed.
Step 216: and acquiring the target waveform diagram.
In an embodiment of the present application, if the waveform diagram P4 is found in the waveform diagram cache area, the waveform diagram P4 is obtained from the waveform diagram cache area.
Step 218: generating a target waveform diagram according to the audio waveform information and the target gear information, and caching the target waveform diagram and the target gear information into a waveform diagram cache area.
In a specific embodiment of the present application, if the waveform diagram P4 is not found in the waveform diagram cache area, the target waveform diagram P4 is generated according to the audio waveform information Iv and the target gear information I4 (fig. 3b shows a schematic diagram of the target waveform diagram provided by an embodiment of the present application), and the waveform diagram P4 is cached in the waveform diagram cache area.
Step 220: and displaying the target waveform diagram on an audio display interface of the browser.
In a specific embodiment of the present application, following the above example, the waveform diagram P4 is displayed on the audio display interface of the browser.
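Tying the steps of this scene together, the following sketch shows how a browser-side zoom handler might combine the helpers assumed above (gear lookup, cache query, drawing, display); it illustrates the flow of steps 210-220 under those assumptions and is not the application's own implementation.

```typescript
// Sketch of the overall flow: on each zoom gesture, map the requested width to
// a gear, reuse a cached waveform diagram if one exists, otherwise draw and
// cache it, then show the result on the audio display interface.
const waveformCache = new WaveformCache();

function onWaveformZoom(
  samples: Float32Array,   // audio waveform information (e.g. Iv)
  requestedWidth: number,  // width implied by the zoom gesture
  container: HTMLElement   // audio display interface of the browser
): void {
  const gearInfo = widthToGear(requestedWidth);             // target gear information
  let waveform = waveformCache.get(gearInfo.gear);          // query the cache area
  if (!waveform) {
    waveform = drawTargetWaveform(samples, gearInfo.width); // generate the target waveform diagram
    waveformCache.put(gearInfo.gear, waveform);             // and cache it
  }
  container.replaceChildren(waveform);                      // display the target waveform diagram
}
```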
The data processing method provided by the application is applied to a browser and comprises the following steps: acquiring audio waveform information corresponding to audio to be processed, wherein the audio waveform information is used for generating a waveform diagram corresponding to the audio to be processed; determining target gear information of the waveform diagram corresponding to the audio to be processed in a preset gear information set, wherein the gear information is grade information set according to the width of the waveform diagram; and, in the case that a target waveform diagram corresponding to the target gear information is not found in a waveform diagram buffer area, generating and buffering the target waveform diagram according to the audio waveform information and the target gear information. In the data processing method provided by the embodiments of the application, the continuous zooming process of the waveform diagram is simulated by the gear information set, which confines the possible renderings of the waveform diagram to a finite set, keeps the waveform diagram smooth during zooming, and preserves its clarity.
Further, the generated waveform diagram is cached in the waveform diagram cache area, so that when the corresponding gear is selected again it does not need to be redrawn; this reduces the drawing cost of generating waveform diagrams, makes switching between waveform diagrams continuous and smooth, and improves the user experience.
Corresponding to the above-mentioned data processing method embodiment, the present application further provides an embodiment of a data processing apparatus, and fig. 4 shows a schematic structural diagram of a data processing apparatus according to an embodiment of the present application. As shown in fig. 4, the apparatus includes:
an obtaining module 402, configured to obtain audio waveform information corresponding to audio to be processed, where the audio waveform information is used to generate a waveform diagram corresponding to the audio to be processed;
a determining module 404, configured to determine target gear information of a waveform diagram corresponding to the audio to be processed in a preset gear information set, where the gear information refers to level information set according to a width of the waveform diagram;
the generating module 406 is configured to generate and cache the target waveform diagram according to the audio waveform information and the target gear information when the waveform diagram cache area does not query the target waveform diagram corresponding to the target gear information.
Optionally, the determining module 404 is further configured to:
and determining the initial gear information as target gear information of the waveform diagram corresponding to the audio to be processed in a preset gear information set.
Optionally, the determining module 404 is further configured to:
receiving a waveform diagram adjustment instruction aiming at a waveform diagram corresponding to the audio to be processed;
and determining target gear information in a preset gear information set according to the waveform diagram adjustment instruction.
Optionally, the determining module 404 is further configured to:
acquiring a waveform diagram target width carried in the waveform diagram adjustment instruction;
and determining target gear information corresponding to the waveform diagram target width according to a preset width gear comparison table.
Optionally, the apparatus further includes:
and the inquiring module is configured to inquire whether the target waveform diagram corresponding to the target gear information exists in the waveform diagram buffer area.
Optionally, the apparatus further includes:
the waveform diagram acquisition module is configured to acquire the target waveform diagram in the case that the target waveform diagram corresponding to the target gear information is found in the waveform diagram buffer area.
Optionally, the obtaining module 402 is further configured to:
receiving audio to be processed uploaded by a user;
forwarding the audio to be processed to a server so that the server transcodes the audio to be processed to generate audio waveform information;
downloading the audio waveform information from the server.
Optionally, the generating module 406 is further configured to:
determining a waveform diagram target width corresponding to the target gear information according to a preset width gear comparison table;
resampling the audio waveform information according to the waveform diagram target width to generate target waveform information;
and drawing and generating a target waveform diagram based on the target waveform information.
Optionally, the apparatus further includes:
and the display module is configured to display the target waveform graph on an audio display interface of the browser.
The data processing apparatus provided by the application is applied to a browser and is configured to: acquire audio waveform information corresponding to audio to be processed, wherein the audio waveform information is used for generating a waveform diagram corresponding to the audio to be processed; determine target gear information of the waveform diagram corresponding to the audio to be processed in a preset gear information set, wherein the gear information is grade information set according to the width of the waveform diagram; and, in the case that a target waveform diagram corresponding to the target gear information is not found in a waveform diagram buffer area, generate and buffer the target waveform diagram according to the audio waveform information and the target gear information. In the data processing apparatus provided by the embodiments of the application, the continuous zooming process of the waveform diagram is simulated by the gear information set, which confines the possible renderings of the waveform diagram to a finite set, keeps the waveform diagram smooth during zooming, and preserves its clarity.
Further, the generated waveform diagram is cached in the waveform diagram cache area, so that when the corresponding gear is selected again it does not need to be redrawn; this reduces the drawing cost of generating waveform diagrams, makes switching between waveform diagrams continuous and smooth, and improves the user experience.
The above is a schematic solution of a data processing apparatus of the present embodiment. It should be noted that, the technical solution of the data processing apparatus and the technical solution of the data processing method belong to the same conception, and details of the technical solution of the data processing apparatus, which are not described in detail, can be referred to the description of the technical solution of the data processing method.
Fig. 5 illustrates a block diagram of a computing device 500, provided in accordance with an embodiment of the present application. The components of the computing device 500 include, but are not limited to, a memory 510 and a processor 520. Processor 520 is coupled to memory 510 via bus 530 and database 550 is used to hold data.
Computing device 500 also includes access device 540, access device 540 enabling computing device 500 to communicate via one or more networks 560. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the internet. The access device 540 may include one or more of any type of network interface, wired or wireless (e.g., a Network Interface Card (NIC)), such as an IEEE802.11 Wireless Local Area Network (WLAN) wireless interface, a worldwide interoperability for microwave access (Wi-MAX) interface, an ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the application, the above-described components of computing device 500, as well as other components not shown in FIG. 5, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device illustrated in FIG. 5 is for exemplary purposes only and is not intended to limit the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 500 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smart phone), wearable computing device (e.g., smart watch, smart glasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 500 may also be a mobile or stationary server.
Wherein the processor 520, when executing the computer instructions, implements the steps of the data processing method.
The foregoing is a schematic illustration of a computing device of this embodiment. It should be noted that, the technical solution of the computing device and the technical solution of the data processing method belong to the same concept, and details of the technical solution of the computing device, which are not described in detail, can be referred to the description of the technical solution of the data processing method.
An embodiment of the application also provides a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of a data processing method as described above.
The above is an exemplary version of a computer-readable storage medium of the present embodiment. It should be noted that, the technical solution of the storage medium and the technical solution of the data processing method belong to the same concept, and details of the technical solution of the storage medium which are not described in detail can be referred to the description of the technical solution of the data processing method.
The foregoing describes certain embodiments of the present application. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The computer instructions include computer program code that may be in source code form, object code form, executable file or some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
It should be noted that, for the sake of simplicity of description, the foregoing method embodiments are all expressed as a series of combinations of actions, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily all required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The preferred embodiments of the application disclosed above are intended only to assist in the explanation of the application. Alternative embodiments are not intended to be exhaustive or to limit the application to the precise form disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and the practical application, to thereby enable others skilled in the art to best understand and utilize the application. The application is limited only by the claims and the full scope and equivalents thereof.

Claims (12)

1. A data processing method, applied to a browser, comprising:
acquiring audio waveform information corresponding to audio to be processed, wherein the audio waveform information is used for generating a waveform diagram corresponding to the audio to be processed;
determining target gear information of a waveform diagram corresponding to the audio to be processed in a preset gear information set, wherein the gear information is grade information set according to the width of the waveform diagram, and the target gear information comprises initial gear information or gear information determined based on a waveform diagram adjustment instruction;
and in the case that a target waveform diagram corresponding to the target gear information is not found in a waveform diagram buffer area, generating and buffering the target waveform diagram according to the audio waveform information and the target gear information.
2. The method for processing data according to claim 1, wherein determining target gear information of a waveform diagram corresponding to the audio to be processed in a preset gear information set includes:
and determining the initial gear information as target gear information of the waveform diagram corresponding to the audio to be processed in a preset gear information set.
3. The method for processing data according to claim 1, wherein determining target gear information of a waveform diagram corresponding to the audio to be processed in a preset gear information set includes:
receiving a waveform diagram adjustment instruction aiming at a waveform diagram corresponding to the audio to be processed;
and determining target gear information in a preset gear information set according to the waveform diagram adjustment instruction.
4. The data processing method according to claim 3, wherein determining target gear information in a preset set of gear information according to the waveform diagram adjustment instruction includes:
acquiring a waveform diagram target width carried in the waveform diagram adjustment instruction;
and determining target gear information corresponding to the waveform diagram target width according to a preset width gear comparison table.
5. The data processing method according to claim 1, further comprising, after determining target gear information of a waveform diagram corresponding to the audio to be processed in a preset gear information set:
and inquiring whether a target waveform diagram corresponding to the target gear information exists in the waveform diagram buffer area.
6. The data processing method of claim 5, wherein the method further comprises:
and in the case that a target waveform diagram corresponding to the target gear information is found in the waveform diagram buffer area, acquiring the target waveform diagram.
7. The data processing method according to claim 1, wherein acquiring audio waveform information corresponding to audio to be processed includes:
receiving audio to be processed uploaded by a user;
forwarding the audio to be processed to a server so that the server transcodes the audio to be processed to generate audio waveform information;
downloading the audio waveform information from the server.
8. The data processing method according to claim 1, wherein generating the target waveform diagram from the audio waveform information and the target gear information includes:
determining a waveform diagram target width corresponding to the target gear information according to a preset width gear comparison table;
resampling the audio waveform information according to the waveform diagram target width to generate target waveform information;
and drawing and generating a target waveform diagram based on the target waveform information.
9. A data processing method according to any one of claims 1 to 8, wherein the method further comprises:
and displaying the target waveform diagram on an audio display interface of the browser.
10. A data processing apparatus for use in a browser, comprising:
an acquisition module, configured to acquire audio waveform information corresponding to audio to be processed, wherein the audio waveform information is used for generating a waveform diagram corresponding to the audio to be processed;
a determining module, configured to determine target gear information of the waveform diagram corresponding to the audio to be processed in a preset gear information set, wherein the gear information is grade information set according to the width of the waveform diagram, and the target gear information comprises initial gear information or gear information determined based on a waveform diagram adjustment instruction;
a generating module, configured to generate and buffer the target waveform diagram according to the audio waveform information and the target gear information in the case that a target waveform diagram corresponding to the target gear information is not found in a waveform diagram buffer area.
11. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor, when executing the computer instructions, performs the steps of the method of any one of claims 1-9.
12. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1-9.
CN202110955803.2A 2021-08-19 2021-08-19 Data processing method and device Active CN113674750B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110955803.2A CN113674750B (en) 2021-08-19 2021-08-19 Data processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110955803.2A CN113674750B (en) 2021-08-19 2021-08-19 Data processing method and device

Publications (2)

Publication Number Publication Date
CN113674750A CN113674750A (en) 2021-11-19
CN113674750B true CN113674750B (en) 2023-11-03

Family

ID=78544101

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110955803.2A Active CN113674750B (en) 2021-08-19 2021-08-19 Data processing method and device

Country Status (1)

Country Link
CN (1) CN113674750B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0519799A (en) * 1991-07-11 1993-01-29 Nec Corp Voice synthesizing circuit
CN102629470A (en) * 2011-02-02 2012-08-08 Jvc建伍株式会社 Consonant-segment detection apparatus and consonant-segment detection method
CN105679348A (en) * 2016-01-14 2016-06-15 深圳市柯达科电子科技有限公司 Audio and video player and method
CN106328161A (en) * 2016-08-22 2017-01-11 维沃移动通信有限公司 Audio data processing method and mobile terminal

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9728225B2 (en) * 2013-03-12 2017-08-08 Cyberlink Corp. Systems and methods for viewing instant updates of an audio waveform with an applied effect
US9793879B2 (en) * 2014-09-17 2017-10-17 Avnera Corporation Rate convertor
CN112015603A (en) * 2019-05-30 2020-12-01 鸿富锦精密电子(郑州)有限公司 User terminal hardware detection method, device, computer device and storage medium
CN112533041A (en) * 2019-09-19 2021-03-19 百度在线网络技术(北京)有限公司 Video playing method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN113674750A (en) 2021-11-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant