CN113792179A - Recording waveform processing method and device, electronic terminal equipment and storage medium - Google Patents

Recording waveform processing method and device, electronic terminal equipment and storage medium

Info

Publication number
CN113792179A
CN113792179A
Authority
CN
China
Prior art keywords
waveform
memory address
file
target
physical memory
Prior art date
Legal status
Pending
Application number
CN202111098304.2A
Other languages
Chinese (zh)
Inventor
吕鹏
Current Assignee
Spreadtrum Semiconductor Nanjing Co Ltd
Original Assignee
Spreadtrum Semiconductor Nanjing Co Ltd
Priority date
Filing date
Publication date
Application filed by Spreadtrum Semiconductor Nanjing Co Ltd
Priority to CN202111098304.2A
Publication of CN113792179A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/64 Browsing; Visualisation therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10 Address translation
    • G06F12/1009 Address translation using page tables, e.g. page table structures
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10 Digital recording or reproducing
    • G11B20/10009 Improvement or modification of read or write signals
    • G11B20/10018 Improvement or modification of read or write signals analog processing for digital recording or reproduction

Abstract

The embodiments of the application provide a recording waveform processing method and device, an electronic terminal device and a storage medium, relating to the field of audio data processing. The method is applied to an electronic terminal device and comprises the following steps: playing the audio content of a target sound recording file according to a playing instruction for the target sound recording file; while the audio content is played, acquiring, in a user mode, a first virtual memory address mapped by a target waveform file corresponding to the target sound recording file; searching for the physical memory address corresponding to the first virtual memory address, wherein that physical memory address is the same as the physical memory address mapped by a second virtual memory address in a kernel mode; searching the physical memory address for the waveform data of the target waveform file; and displaying the waveform corresponding to the currently played audio content according to the waveform data found. In this way, the waveform corresponding to the recorded content can be displayed quickly even when a long recording is played.

Description

Recording waveform processing method and device, electronic terminal equipment and storage medium
[ technical field ]
The embodiment of the application relates to the field of audio data processing, in particular to a recording waveform processing method and device, electronic terminal equipment and a storage medium.
[ background of the invention ]
Recorder applications (apps) can be used to record and play audio files. When using or processing a sound recording, a user generally relies on audio information presented by the recorder app, such as the volume and frequency of the audio.
In particular, displaying the volume waveform of a recording file is an important auxiliary function that helps a user quickly identify the valuable parts of a recording.
Although many current recorder apps on terminal devices can display a volume amplitude waveform, they have significant limitations. For example, for very long recordings (for example, more than 24 hours), long recordings may not be supported at all because of poor compatibility, or, when a long recording is played, the displayed content loads slowly and stutters.
[ summary of the invention ]
The embodiments of the application provide a recording waveform processing method and device, an electronic terminal device and a storage medium, which alleviate the processing limitations of existing recorder apps and terminal devices, remain well compatible even in long-recording scenarios, and provide good waveform display when a long recording is played.
In a first aspect, an embodiment of the present application provides a recording waveform processing method, which is applied to an electronic terminal device, and the method includes:
playing the audio content of the target sound recording file according to the playing instruction of the target sound recording file;
when the audio content is played, acquiring a first virtual memory address mapped by a target waveform file corresponding to the target sound recording file in a user mode;
searching a physical memory address corresponding to the first virtual memory address, wherein the physical memory address corresponding to the first virtual memory address is the same as a physical memory address mapped by a second virtual memory address in a kernel mode;
searching waveform data of the target waveform file from the physical memory address;
and displaying the waveform corresponding to the currently played audio content according to the searched waveform data.
In the method, when the audio content of the target sound recording file needs to be played, waveform data matching the currently played audio content can be obtained, directly in the user mode, by searching the physical memory address corresponding to the first virtual memory address, and the waveform can then be displayed. When the waveform data is acquired, the physical memory address searched is mapped jointly by the first virtual memory address in the user mode and the second virtual memory address in the kernel mode; because the physical memory address corresponding to the first virtual memory address is the same as the one mapped by the second virtual memory address, reading the waveform data of the target waveform file for the target sound recording file avoids the complex processing otherwise required because user space and kernel space are separate and cannot share data directly. The method therefore enables efficient reading of the waveform data, alleviating slow and stuttering waveform loading, and it also mitigates the memory overflow easily caused by the array-based processing of long recordings and waveform files in the conventional technique. In addition, the method is applicable to electronic terminal devices running various operating system versions, offers better compatibility when handling long recording files and the waveforms corresponding to long recordings, and can run stably on terminal devices with lower performance and fewer hardware resources.
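For illustration only, the following Java sketch shows how the per-play-position lookup described above could look from the user mode, assuming the target waveform file has already been memory-mapped into the process (see the mapping sketch given with the description of fig. 4 below) and that each waveform value occupies 4 bytes; the class name, the hypothetical WaveformView hook and the sampling-interval parameter are assumptions for the sketch, not features defined by the application.

import java.nio.MappedByteBuffer;

final class PlaybackWaveformSketch {
    interface WaveformView { void draw(float amplitude); }   // hypothetical display hook

    /** Called periodically while the audio of the target sound recording file is playing. */
    static void onPlaybackTick(MappedByteBuffer waveBuf, WaveformView view,
                               long playPositionMs, long sampleIntervalMs) {
        int offset = (int) (playPositionMs / sampleIntervalMs) * Float.BYTES;  // S130/S140
        if (offset + Float.BYTES <= waveBuf.limit()) {
            // The read happens entirely in user mode; if the page is not yet resident,
            // the kernel fills it from the waveform file and execution simply continues.
            view.draw(waveBuf.getFloat(offset));                               // S150
        }
    }
}

The byte order assumed here is whatever order the waveform file was written with; no data is copied into an intermediate array before display.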
In one possible implementation manner, the searching for the waveform data of the target waveform file from the physical memory address includes: switching from the user mode to the kernel mode when the waveform data cannot be found from the physical memory address; under the kernel mode, according to a pre-established first mapping relation, determining a physical storage address of the target waveform file mapped by the first virtual memory address, and copying waveform data of the target waveform file from the physical storage address to the physical memory address mapped by the first virtual memory address, wherein the first mapping relation comprises the mapping relation between the physical storage address of the target waveform file and the first virtual memory address; and switching from the kernel mode to the user mode, and searching the physical memory address to obtain the waveform data in the user mode.
With this implementation, even if part of the required waveform data is not yet available, it can be written into the physical memory corresponding to the first virtual memory address with a single copy, so the waveform can be displayed quickly. Unlike the conventional scheme, display does not have to wait until all the waveform data of a complete waveform file has been loaded into a waveform array, and the waveform data does not have to be copied and cached into a specific portion of physical memory reserved for kernel space and then copied again into another portion reserved for user space for step-by-step reading.
In one possible implementation manner, the searching for the physical memory address corresponding to the first virtual memory address includes: when the physical memory address corresponding to the first virtual memory address cannot be found, switching from the user mode to the kernel mode; in the kernel mode, selecting an idle physical memory address and the first virtual memory address to perform address mapping so that the physical memory address is mapped by the first virtual memory address and the second virtual memory address together; the searching the waveform data of the target waveform file from the physical memory address comprises: in the kernel mode, determining a physical storage address of the target waveform file according to a pre-established first mapping relation, and copying waveform data of the target waveform file from the physical storage address to the physical memory address mapped by the first virtual memory address, wherein the first mapping relation comprises the mapping relation between the physical storage address of the target waveform file and the first virtual memory address; and switching from the kernel mode to the user mode, and searching the physical memory address to obtain the waveform data in the user mode.
Through the implementation mode, even if the waveform data is not cached in the kernel space in advance, the waveform corresponding to the currently played audio content can be quickly loaded and displayed.
In one possible implementation manner, the target waveform file is in a binary format.
Storing the target waveform file in binary format reduces the space occupied by the whole file and, when the binary target waveform file is read by the above method, reduces the amount of physical memory occupied during waveform reading.
In one possible implementation manner, the displaying a waveform corresponding to the currently played audio content according to the found waveform data includes: in the user mode, binary data with a set byte length is used as floating-point waveform data, and the waveform data obtained from the physical memory address is subjected to content analysis; and displaying the waveform content determined by analysis according to the playing time point corresponding to the currently played audio content.
Through this implementation, the waveform corresponding to the currently played audio content can be loaded, parsed and displayed quickly, which facilitates real-time waveform drawing in ultra-long recording scenarios (for example, recordings longer than 12 hours, 24 hours, and so on).
In one possible implementation manner, the method further includes: responding to marking operation initiated by a user, and acquiring a waveform marking position corresponding to the marking operation; and positioning and marking the corresponding waveform data at the waveform marking position.
Through this implementation, random access is supported, richer information can be added to the playback process and to the target waveform file, and the recording can be used in more ways.
In one possible implementation manner, the method further includes: responding to noise filtering operation initiated by a user, and acquiring a filtering area corresponding to the noise filtering operation; and deleting the audio data and the waveform data in the filtering area.
By the implementation mode, invalid information can be quickly removed from a large amount of audio data and waveform data, and more valuable recording contents are left.
In a second aspect, an embodiment of the present application provides a recording waveform processing apparatus, which is applied to an electronic terminal device, and the apparatus includes:
the audio processing module is used for playing the audio content of the target recording file according to the playing instruction of the target recording file;
the waveform processing module is used for acquiring a first virtual memory address mapped by a target waveform file corresponding to the target sound recording file in a user mode when the audio content is played;
the waveform processing module is further configured to search for a physical memory address corresponding to the first virtual memory address, where the physical memory address corresponding to the first virtual memory address is the same as a physical memory address mapped by a second virtual memory address in a kernel mode;
the waveform processing module is further configured to search the waveform data of the target waveform file from the physical memory address;
the waveform processing module is further configured to display a waveform corresponding to the currently played audio content according to the found waveform data.
In one possible implementation manner, the waveform processing module is further configured to: switching from the user mode to the kernel mode when the waveform data cannot be found from the physical memory address; under the kernel mode, according to a pre-established first mapping relation, determining a physical storage address of the target waveform file mapped by the first virtual memory address, and copying waveform data of the target waveform file from the physical storage address to the physical memory address mapped by the first virtual memory address, wherein the first mapping relation comprises the mapping relation between the physical storage address of the target waveform file and the first virtual memory address; and switching from the kernel mode to the user mode, and searching the physical memory address to obtain the waveform data in the user mode.
In one possible implementation manner, the waveform processing module is further configured to: when the physical memory address corresponding to the first virtual memory address cannot be found, switching from the user mode to the kernel mode; in the kernel mode, selecting an idle physical memory address and the first virtual memory address to perform address mapping so that the physical memory address is mapped by the first virtual memory address and the second virtual memory address together; in the kernel mode, determining a physical storage address of the target waveform file according to a pre-established first mapping relation, and copying waveform data of the target waveform file from the physical storage address to the physical memory address mapped by the first virtual memory address, wherein the first mapping relation comprises the mapping relation between the physical storage address of the target waveform file and the first virtual memory address; and switching from the kernel mode to the user mode, and searching the physical memory address to obtain the waveform data in the user mode.
In one possible implementation manner, the waveform processing module is further configured to: in the user mode, binary data with a set byte length is used as floating-point waveform data, and the waveform data obtained from the physical memory address is subjected to content analysis; and displaying the waveform content determined by analysis according to the playing time point corresponding to the currently played audio content.
In one possible implementation manner, the waveform processing module is further configured to: responding to marking operation initiated by a user, and acquiring a waveform marking position corresponding to the marking operation; and positioning and marking the corresponding waveform data at the waveform marking position.
In one possible implementation manner, the waveform processing module is further configured to: responding to noise filtering operation initiated by a user, and acquiring a filtering area corresponding to the noise filtering operation; the audio processing module is further configured to delete the audio data in the filtering region, and the waveform processing module is further configured to delete the waveform data in the filtering region.
In a third aspect, an embodiment of the present application provides an electronic terminal device, including:
an audio processing component;
a display component;
at least one processor; and
at least one memory communicatively coupled to the processor, wherein:
the audio processing component is used for playing the audio content of the target sound recording file;
the display component is used for displaying a waveform corresponding to the audio content;
the memory has stored therein program instructions executable by the processor, the program instructions being capable of performing the method of the first aspect when called by the processor.
In a fourth aspect, an embodiment of the present application provides a storage medium, on which a computer program is stored, and the computer program, when executed by a processor, performs the method of the first aspect.
It should be understood that the second to fourth aspects of the embodiment of the present application are consistent with the technical solution of the first aspect of the embodiment of the present application, and beneficial effects obtained by the aspects and the corresponding possible implementation are similar, and are not described again.
[ description of the drawings ]
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly described below. The drawings in the following description show only some embodiments of the present specification; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a recording waveform processing method according to an embodiment of the present application;
FIG. 2 is a flowchart of another recording waveform processing method according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a recording waveform processing method according to another embodiment of the present application;
fig. 4 is a schematic diagram of reading waveform data in an application scenario according to an embodiment of the present application;
FIG. 5 is a partial flowchart of a recording waveform processing method according to an embodiment of the present application;
fig. 6 is another partial flowchart of a recording waveform processing method according to an embodiment of the present application;
FIG. 7 is a flowchart of another portion of a recording waveform processing method according to an embodiment of the present application;
fig. 8 is a functional block diagram of a recording waveform processing apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic terminal device according to an embodiment of the present application.
[ detailed description ]
For better understanding of the technical solutions in the present specification, the following detailed description of the embodiments of the present application is provided with reference to the accompanying drawings.
It should be understood that the described embodiments are only a few embodiments of the present specification, and not all embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without any inventive step are within the scope of the present specification.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the specification. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In the prior art, there are many recorder applications (apps) that can record and play audio files and display a volume amplitude waveform, but all of them have certain limitations, such as poor compatibility, inability to support long recordings, or inability to display the waveform corresponding to the recorded content in real time while a long recording is played.
In some existing recorder apps, waveform loading during playback of a long recording is so slow that the recorded content has already been playing for some time before any waveform appears. Moreover, on low-end device models with little memory, for example terminal devices running the Android Go edition, loading an ultra-long recording exceeding 24 hours occasionally causes a memory-overflow crash. For this reason, many recorder apps on the market limit the recording duration (for example, limiting a single recording to a few minutes, no more than half an hour, or no more than a few hours) in order to stay compatible with poorly configured models.
The inventor has found that, in the prior art, the waveform data collected during recording is converted into ASCII text and stored one waveform value per line, producing a waveform text file. During playback, a waveform array intended to hold all the waveform data of the entire waveform text file is first created; the array is repeatedly expanded as the loaded waveform data is appended to it, and only after all the waveform data has been loaded is waveform data taken from the array at fixed time intervals to draw the recording waveform. The drawbacks of the prior art when playing a long recording stem from at least the following two causes:
first, in the prior art, when a recording is played, a waveform array needs to be created, all waveform data in the entire waveform text file is loaded into the waveform array, and the waveform is drawn and displayed only after all waveform data in the entire waveform text file is loaded into the waveform array. The loading process of the waveform data herein relates to a series of processes such as space allocation, memory capacity expansion, text file reading and the like of the waveform array, and multiple data copies are required in the processes of waveform array addition of the waveform data, traditional read-write file transmission and the like. In the case of a short recording time, for example, the recording time is only several minutes, half an hour, 1 hour, etc., the total time of waiting for time is not obvious to the user, but when recording an ultra-long audio, for example, recording an ultra-long recording time exceeding 12 hours and exceeding 24 hours, the total time of waiting caused by the aforementioned series of time-consuming operations is necessarily amplified, resulting in obvious situations of slow waveform loading and jamming, which is serious in a low-configuration model.
Secondly, taking the Android operating system as an example, to keep the multitasking environment of the device running normally, Android sets an upper limit on the heap size of each application; the actual limit differs between devices and depends on the total available running memory. When an application process that has reached this heap limit tries to allocate more memory, an Out Of Memory error (memory exhaustion or overflow) occurs. Android Go devices usually have relatively little available memory, so when a long recording file is played on such a model and the waveform array holding the waveform data keeps expanding until it contains all the waveform data of the entire waveform text file, the allocated memory easily reaches the application heap limit and the resulting memory-overflow exception crashes the app.
In view of this, the inventors propose the following embodiments as an improvement.
With the recording waveform processing method and device, electronic terminal device and storage medium provided here, even when a long recording has to be processed, the waveform data matching the currently played audio content can be read and displayed quickly, reducing the user's waiting time; off-heap memory can be used sensibly to handle the waveform data of a long recording file, easing memory pressure; and low-memory devices can be supported effectively while still presenting the recording waveform well.
For ease of understanding, some concepts in the embodiments of the present application will be described below.
Kernel: the inner core of the operating system; the kernel exposes the core management calls for the whole computer device. The address space in which the kernel resides may be referred to as kernel space. Everything outside the kernel may be collectively referred to as external management programs, most of which handle the management of, and interfaces to, peripheral devices; the address space occupied by these external management programs and by user processes may be referred to as external space or user space.
To restrict what different programs can access, and to prevent a program from arbitrarily reading the memory of other programs or the data of peripheral devices, execution is divided into modes with different privilege levels: user mode and kernel mode. Code in kernel space can be regarded as running in kernel mode when it executes, and code belonging to a program in user space can be regarded as running in user mode when it executes. Kernel mode and user mode are two execution levels of the operating system.
Referring to fig. 1, fig. 1 is a flowchart illustrating a recording waveform processing method according to an embodiment of the present disclosure.
The method can be applied to electronic terminal equipment. The operating system of the electronic terminal device may be, but is not limited to, iOS, Windows, macOS, Linux, Android, HarmonyOS, and the like.
As shown in fig. 1, the method may include the steps of:
s110: and playing the audio content of the target sound recording file according to the playing instruction of the target sound recording file.
The target sound recording file comprises audio data, and it has a corresponding target waveform file that comprises the waveform data corresponding to the audio data. The target sound recording file and the target waveform file may be files generated and stored by the electronic terminal device during recording, or files obtained from an external device. The two files may be stored at different physical storage locations in a non-volatile memory of the electronic terminal device.
The playing instruction of the target sound recording file may be generated when the electronic terminal device senses that the user triggers the designated function button, or may be received by the electronic terminal device from an external device.
When the electronic terminal equipment receives a playing instruction of the target recording file, the contents of the target recording file and the target waveform file can be read, so that the audio playing and the waveform display are carried out at the same time.
The electronic terminal device can call the audio processing component to play the audio of the target sound recording file, thereby playing its audio content, and can start the designated application process to perform waveform processing, so that a waveform matching the currently played audio content is displayed throughout playback of the recording.
S120: and when the audio content is played, acquiring a first virtual memory address mapped by a target waveform file corresponding to the target sound recording file in a user mode.
The first virtual memory address is a section of logical address of the user space.
In this embodiment of the application, initialization may be performed before the target sound recording file is played for the first time and the target waveform file is read. The initialization process may create, allocate or update a virtual memory address for the target waveform file as the first virtual memory address mapped by the target waveform file, and may further create a mapping relationship between the first virtual memory address and the physical storage address of the target waveform file, recorded as the first mapping relationship. Of course, the initialization process may also be executed when the electronic terminal device receives a play instruction.
Once the first virtual memory address mapped by the target waveform file has been determined through initialization, when the electronic terminal device receives the aforementioned play instruction it can directly obtain that first virtual memory address in the user mode, access it through a memory pointer, and continue with S130.
S130: and searching a physical memory address corresponding to the first virtual memory address, wherein the physical memory address corresponding to the first virtual memory address is the same as a physical memory address mapped by a second virtual memory address in a kernel mode.
In this embodiment, the physical memory address corresponding to the first virtual memory address in S130 is also mapped by the second virtual memory address in the kernel mode, so it can be regarded as shared by user space and kernel space. Consequently, read and write operations that the electronic terminal device performs on this physical memory address in the kernel mode are directly visible from the user mode. The second virtual memory address in the kernel mode is virtual memory used for memory management in the kernel mode.
The above-described S130 is a step performed in the user mode. When the physical memory address corresponding to the first virtual memory address can be found directly from the user mode through S130, S140 may be continuously executed in the user mode to obtain the required waveform data, so as to execute S150 to perform waveform display. If the physical memory address corresponding to the first virtual memory address is not found currently through S130, refer to S131-S132 described later.
S140: and searching the waveform data of the target waveform file from the physical memory address.
If the currently required waveform data is directly available through S140, execution can continue with S150. If the required waveform data is not found when the physical memory address is searched through S140, refer to S141b-S143b described later.
The waveform data to be searched from the physical memory address includes waveform data matched with the currently played audio data. The waveform data searched from the physical memory address may be a part of data pre-read from the target waveform file in the playing stage, or may be a part of data temporarily read.
S150: and displaying the waveform corresponding to the currently played audio content according to the searched waveform data.
The audio data in the target sound recording file and the waveform data in the target waveform file are matched by playing time point, so the part of the waveform data that corresponds to the currently played audio content can be determined and the waveform displayed.
It should be noted that the above method may be executed repeatedly, in a loop, during playback; after S150, the next portion of the waveform data may be read and displayed in the same way.
In the method of S110 to S150, when the audio content of the target sound recording file needs to be played, waveform data matching the currently played audio content can be obtained, directly in the user mode, by searching the physical memory address corresponding to the first virtual memory address, and the waveform can then be displayed. When the waveform data is obtained, the physical memory address searched is mapped jointly by the first virtual memory address in the user mode and the second virtual memory address in the kernel mode, and the physical memory address corresponding to the first virtual memory address is the same as the one mapped by the second virtual memory address. When the waveform data of the target waveform file is read for the target sound recording file, the currently required waveform data is therefore obtained from the shared physical memory address and displayed on that basis, which avoids the cumbersome processing required in the conventional scheme because user space and kernel space are separate and do not share data, such as repeated copying between the two spaces. With this method the waveform data can be read efficiently, alleviating slow and stuttering waveform loading, and the memory-overflow problem easily caused in the prior art when long recordings and waveform files are processed can also be mitigated. In addition, the method is applicable to electronic terminal devices running various operating system versions, offers better compatibility when handling long recording files and the waveforms corresponding to long recordings, can run stably on terminal devices with lower performance and tight hardware resources, and helps draw the corresponding waveform in real time while long audio content is played.
As one implementation, referring to fig. 2, if the physical memory address corresponding to the first virtual memory address is not found while performing S130, that is, if the shared physical memory is not found, the following steps S131-S132 and S141a-S142a are executed. In other words, S130 may include S131-S132, and S140 may include S141a-S142a.
S131: and when the physical memory address corresponding to the first virtual memory address cannot be found, switching from the user mode to the kernel mode.
S132: and in the kernel mode, selecting an idle physical memory address and the first virtual memory address to perform address mapping so that the physical memory address is mapped by the first virtual memory address and the second virtual memory address together.
The free physical memory address refers to a currently unoccupied and unused memory address.
In the kernel mode, the physical memory can be managed according to the virtual memory address of the kernel space, and based on the management mode, the idle physical memory address mapped by the second virtual memory address can be mapped to the first virtual memory address in the user mode in the kernel mode, so that the same physical memory address can be mapped by different virtual memory addresses together.
In this embodiment, if the initialization process does not involve allocating physical memory, then when the target waveform file is read for the first time the mode switch of S131 may be performed and physical memory may be allocated in the kernel mode through S132, so that the first virtual memory address in the user mode and the second virtual memory address in the kernel mode share a commonly mapped physical memory address.
After S132 is executed, since the free physical memory is allocated in the kernel mode and the currently required waveform data is not written in the allocated physical memory, S141a may be directly executed in the kernel mode to load the currently required waveform data into the allocated physical memory.
S141a: And under the kernel mode, determining a physical storage address of the target waveform file according to a pre-established first mapping relation, and copying waveform data of the target waveform file from the physical storage address to the physical memory address mapped by the first virtual memory address.
The first mapping relationship comprises a mapping relationship between a physical storage address of the target waveform file and the first virtual memory address.
The first mapping relationship is a memory-mapped-file relationship. Memory mapping means associating a virtual memory area with an object at an actual physical storage location so that the content of the virtual memory area is initialized from it; this process is called memory mapping. For example, the mapping between the target waveform file stored on disk and the first virtual memory address can be regarded as a memory-mapped file, and based on the memory-mapped file the file content can be read and written in the user mode by operating directly on the virtual memory to which the file is mapped, which is advantageous.
Since the physical memory address allocated in the kernel mode is mapped jointly by the first virtual memory address and the second virtual memory address, once the physical storage address of the target waveform file has been determined from the first mapping relationship, the waveform data of the target waveform file can be written directly, through that relationship, into the jointly mapped physical memory address. When the electronic terminal device switches back to the user mode through the following S142a, it can therefore obtain the required waveform data directly from the physical memory address without copying the same waveform data again.
S142a: And switching from the kernel mode to the user mode, and searching the physical memory address to obtain the waveform data in the user mode.
S150 may be performed after S142a for waveform display.
Through the implementations of S131, S132, S141a, S142a and S150, when the corresponding physical memory address cannot be found to obtain the waveform data, for example when the target sound recording file is played and the target waveform file is read for the first time, the device first switches modes, then finds a free physical memory address in the kernel mode and maps it so that it is shared by the kernel mode and the user mode, determines the physical storage location of the waveform data, and copies the waveform data so that it is loaded directly into the physical memory address corresponding to the first virtual memory address; the waveform data is then obtained from that physical memory address in the user mode for display. In this way, even if no waveform data has been cached in kernel space in advance, the waveform corresponding to the currently played audio content can be loaded and displayed quickly.
As another implementation manner, referring to fig. 3, in the user mode, when the physical memory address corresponding to the first virtual memory address can be found through the foregoing S130, but the required waveform data cannot be found in the process of continuing to execute S140, S141b-S143b may be executed.
That is, S140 may include S141b-S143b.
S141b: And when the waveform data cannot be found from the physical memory address, switching from the user mode to the kernel mode.
When the physical memory address corresponding to the first virtual memory address can be found in the user mode, but the required waveform data cannot be found from the physical memory address in the user mode, S141b may be executed to perform mode switching, S142b is executed to load the required waveform data, and S143b is executed to switch back to the user mode, where the required waveform data is obtained from the physical memory address in the user mode.
S142b: And under the kernel mode, determining the physical storage address of the target waveform file mapped by the first virtual memory address according to a pre-established first mapping relation, and copying the waveform data of the target waveform file from the physical storage address to the physical memory address mapped by the first virtual memory address.
The first mapping relationship comprises a mapping relationship between a physical storage address of the target waveform file and the first virtual memory address.
S143b: And switching from the kernel mode to the user mode, and searching the physical memory address to obtain the waveform data in the user mode.
For the mapping and copying involved in S142b-S143b, refer to S141a-S142a. S150 may be executed after S143b to perform waveform display.
Through the implementations of S130, S141b-S143b and S150, when the required waveform data cannot be found at the physical memory address corresponding to the first virtual memory address, the device switches modes, determines the physical storage location of the waveform data from the pre-established first mapping relationship, and copies the data so that it is loaded directly into the physical memory address corresponding to the first virtual memory address; the waveform data loaded at that address is then obtained in the user mode for display. In this process, even if part of the required waveform data is not yet available, a single copy writes it into the physical memory corresponding to the first virtual memory address, so the waveform can be displayed quickly. There is no need, as in the conventional scheme, to wait until a waveform array has loaded all the waveform data of a complete waveform file, nor to copy and cache the waveform data into a specific portion of physical memory reserved for kernel space and then copy the cached data again into another portion reserved for user space for step-by-step reading.
Illustratively, the operating system of the electronic terminal device may be Android Go, and the running memory (RAM) capacity of the electronic terminal device may be 1 GB or less.
It should be noted that the contents of fig. 1, fig. 2, and fig. 3 may be used in combination in one recording and playing process.
In one application scenario, after recording ends and before the target waveform file is read for the first time for waveform display, the storage path on disk of the target waveform file corresponding to the stored target sound recording file can be obtained, so that the physical storage location of the target waveform file can be determined. A file descriptor fd of the target waveform file in the kernel is obtained through this storage path, and based on the file descriptor fd, the first mapping relationship between the first virtual memory address (e.g., "B" in fig. 4) of the designated application process and the physical storage location (e.g., "A" in fig. 4) of the target waveform file can be established during initialization using a mapping method such as mmap (memory-mapped files). mmap is a memory-mapped-file method that maps a file or other object into the address space of a process, establishing a one-to-one mapping between the file's disk address and a segment of virtual addresses in the process's virtual address space.
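As an illustrative sketch only, Java NIO's FileChannel.map provides a user-space counterpart of the mmap-based mapping described above; the path handling and class names here are assumptions, and the kernel-side details (file descriptor handling, page tables) are not visible at this level.

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

final class WaveformMapping {
    /** Maps the whole target waveform file into the address space of the calling process. */
    static MappedByteBuffer mapWaveformFile(Path waveformPath) throws IOException {
        try (FileChannel channel = FileChannel.open(waveformPath, StandardOpenOption.READ)) {
            // The returned buffer stays valid after the channel is closed; physical pages
            // are only brought in by the kernel when the buffer is actually accessed.
            return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
        }
    }
}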
When the data of the target waveform file is read for the first time, the first virtual memory address is accessed, but the shared physical memory has not yet been determined, so the first virtual memory address must be translated by the MMU (Memory Management Unit). Because no physical memory address corresponding to the first virtual memory address can be found at this point, a page fault is triggered and the device switches from the user mode to the kernel mode. In the kernel mode, physical memory can be allocated by demand paging: free page frames are searched among all the physical memory managed by the kernel, based on the virtual memory addresses of kernel space, so as to find a free physical memory address, and the free physical memory address (e.g., "D" in fig. 4) corresponding to the found second virtual memory address (e.g., "C" in fig. 4) is mapped to the first virtual memory address through the MMU. Then, still in the kernel mode, the portion of waveform data required this time is filled from the physical storage location of the target waveform file into the allocated free physical memory address through the previously established first mapping relationship, using DMA (Direct Memory Access). The device then switches back from the kernel mode to the user mode and obtains the waveform data at the physical memory address for parsing and display. Allocating pages on demand saves the memory occupied by the application process: physical memory is allocated and waveform data is loaded into it only when accessing the first virtual memory address triggers a page fault, so the content of the entire target waveform file does not have to be read into physical memory at once when an ultra-long recording is played.
When waveform data is copied in the kernel mode, it is copied with read-ahead. Read-ahead here does not mean writing all the data of the entire target waveform file into physical memory at once; it means that the waveform data needed the next time, or the next several times, can be written into the physical memory address in advance. For example, on the first read only 4 bytes of waveform data need to be parsed, but in practice, after the physical memory is allocated, more than 4 bytes are written, for example 12 bytes, 40 bytes or more, so that for the next read or several reads the required waveform data can be obtained quickly from the allocated physical memory for display. The read-ahead portion of the waveform data can be managed by the kernel; the application process only has to fetch, in the user mode, the portion of waveform data to be displayed this time, which improves data-processing efficiency and avoids performing a physical memory allocation and a disk read every time a little waveform data is read. When the application process has finished parsing and displaying the read-ahead waveform data in the user mode and continues to access the allocated physical memory, a page fault can be triggered again and the kernel mode is entered again to copy new waveform data to the allocated physical memory address, so that after switching back to the user mode the electronic terminal device can continue to obtain new waveform data for waveform display.
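The read-ahead above is performed by the kernel and is transparent to the application. Purely as an assumed alternative, and not the mechanism of this application, an app that wanted to bound how much of the file can become resident could map only a sliding window of the waveform file instead of the whole file:

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

final class SlidingWindowMapping {
    private static final long WINDOW_BYTES = 64 * 1024;  // illustrative window size

    /** Maps only WINDOW_BYTES of the file starting at startOffset. */
    static MappedByteBuffer mapWindow(FileChannel channel, long startOffset) throws IOException {
        long length = Math.min(WINDOW_BYTES, channel.size() - startOffset);
        return channel.map(FileChannel.MapMode.READ_ONLY, startOffset, length);
    }
}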
When the target waveform file is not being read for the first time but new waveform data has to be obtained from it for display, the currently required waveform data may not be present at the allocated physical memory address. For example, once the portion of waveform data read last time has been displayed and the audio data for the next time point is about to be played, a new batch of waveform data is needed; because the required waveform data is not found at the physical memory address corresponding to the first virtual memory address, a page fault is triggered again. In that case, only the new waveform data needs to be written, in the kernel mode, into the previously allocated physical memory address, after which the device can switch straight back to the user mode to parse and display the newly written waveform data.
Optionally, the target waveform file in the embodiment of the present application may be in a binary format.
In one application scenario, when the electronic terminal device records sound, audio is sampled at a set time interval to obtain raw audio data and volume amplitude information. The set time interval may be, but is not limited to, 100 milliseconds or 200 milliseconds. During recording, mutually corresponding raw audio data and volume amplitude information are acquired continuously and stored separately, so that when recording ends two files are obtained: a target sound recording file based on the raw audio data and a target waveform file based on the volume amplitude information. Both files can be in binary format.
Taking the waveform processing as an example, the volume amplitude information can be expressed as a signal-to-noise ratio and, for storage, converted into binary floating-point data in decibels (dB), so that a target waveform file is obtained in which all the waveform data is in binary format.
Optionally, in this embodiment of the application each waveform value may be a 4-byte floating-point number; for example, the binary representation 00111110001000000000000000000000 ≈ 0.1562. In this example the 32-bit floating-point value follows the IEEE 754 specification: the highest bit is the sign bit, the next 8 bits are the exponent, and the remaining 23 bits are the significand. The waveform data in a target waveform file consists of consecutive binary floating-point values, each 4 bytes long.
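The quoted bit pattern can be checked directly, and the same 4-byte representation can be produced while recording; the file name below is illustrative, and the big-endian byte order of DataOutputStream is an assumption about how the waveform file is written.

import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

final class WaveformSampleSketch {
    public static void main(String[] args) throws IOException {
        // 0 | 01111100 | 01000000000000000000000 per IEEE 754: 1.25 * 2^-3 = 0.15625 (≈ 0.1562)
        int bits = 0b00111110_00100000_00000000_00000000;
        System.out.println(Float.intBitsToFloat(bits));   // prints 0.15625

        // Append one 4-byte sample per captured volume amplitude.
        try (DataOutputStream out =
                 new DataOutputStream(new FileOutputStream("target.waveform", true))) {
            out.writeFloat(0.15625f);
        }
    }
}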
Compared with the conventional scheme, in which waveform data is stored as an ASCII (American Standard Code for Information Interchange) text file, storing the waveform data in binary format reduces the space occupied by the entire target waveform file and, when the binary target waveform file is read with the method described above, reduces the physical memory used during waveform reading, which facilitates fast reading and parsing.
As an embodiment, as shown in fig. 5, the S150 may include:
s151: and under the user mode, taking binary data with a set byte length as floating-point waveform data, and analyzing the content of the waveform data obtained from the physical memory address.
Taking a set byte length of 4 bytes as an example, parsing the content of the waveform data obtained from the physical memory address in 4-byte units allows efficient parsing.
S152: and displaying the waveform content determined by analysis according to the playing time point corresponding to the currently played audio content.
The playing time point of the currently played audio content and the display time point of the waveform content may be set to coincide exactly, or to match within an allowed error delay; a display delay within the allowed error delay can be regarded as imperceptible to the user.
In one application scenario, part of the waveform data of the target waveform file has already been loaded into the physical memory address corresponding to the first virtual memory address, while another part has not yet been loaded from disk. The part already in physical memory can be parsed 4 bytes at a time, so that a waveform matching the currently played audio content is obtained and displayed. Parsing and displaying can be seen as converting binary floating-point data into a volume amplitude waveform and synchronizing it with the playback time of the audio stream, so that the waveform shown on screen stays in step with the audio being played.
Through the implementation of S151-S152, the waveform corresponding to the currently played audio content can be loaded, parsed and displayed quickly, which helps achieve real-time waveform drawing in ultra-long recording scenarios (for example, recordings longer than 12 hours, 24 hours, and so on). Moreover, processing the waveform as 4-byte binary floating-point data makes it easier to show small volume changes at specific time points and gives better detail.
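A sketch of S151-S152 under stated assumptions: the mapped region holds consecutive 4-byte samples in the byte order they were written with, one sample per fixed sampling interval (for example the 100 ms mentioned earlier), and the drawing itself is left to the caller; the method and parameter names are illustrative.

import java.nio.MappedByteBuffer;

final class WaveformRenderSketch {
    /** Returns the samples ending at the current playback position, for the UI to draw. */
    static float[] samplesForDisplay(MappedByteBuffer waveBuf,
                                     long playPositionMs,
                                     long sampleIntervalMs,
                                     int samplesOnScreen) {
        int totalSamples = waveBuf.limit() / Float.BYTES;
        int current = (int) Math.min(playPositionMs / sampleIntervalMs, totalSamples - 1L);
        int first = Math.max(0, current - samplesOnScreen + 1);
        float[] window = new float[current - first + 1];
        for (int i = 0; i < window.length; i++) {
            // Absolute get: 4 bytes per sample, no extra copy into a growing array.
            window[i] = waveBuf.getFloat((first + i) * Float.BYTES);
        }
        return window;
    }
}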
Optionally, as shown in fig. 6, the recording waveform processing method provided in the embodiment of the present application may further include:
s161: and responding to marking operation initiated by a user, and acquiring a waveform marking position corresponding to the marking operation.
S162: and positioning and marking the corresponding waveform data at the waveform marking position.
In one application scenario, a user can perform a marking operation on the operation interface of the electronic terminal device while a recording is playing. From the user's marking operation, the device can obtain the waveform marking position and the audio content that actually corresponds to it, and can place a positioning mark on the waveform data at that position, so that the marked waveform and its corresponding audio content can later be found quickly from the mark.
The implementation of S161-S162 supports random access, allows richer information to be attached to the playback process and to the target waveform file, and enables the recording to be used in more ways. In practice, the waveform data may be marked with a numeric mark, and the playing time point can be located precisely through that mark, adding richer information to the audio.
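As an illustrative sketch only, a numeric mark could be persisted as a (label, sample index) record in a separate mark file; the application does not prescribe a mark format, so the record layout and names below are assumptions.

import java.io.IOException;
import java.io.RandomAccessFile;

final class WaveformMarkSketch {
    /** Appends one mark record: a numeric label plus the index of the marked 4-byte sample. */
    static void addMark(RandomAccessFile markFile, int markNumber, long sampleIndex)
            throws IOException {
        markFile.seek(markFile.length());
        markFile.writeInt(markNumber);
        markFile.writeLong(sampleIndex);
    }
}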
Table 1 below gives the read-write processing speed of several data transfer methods. Four file read-write modes were compared, and each mode read and wrote the same 40 MB file.
TABLE 1
[Table 1 is provided as an image in the original publication; it compares the read-write processing speed of the four file read-write modes for the 40 MB file.]
It should be noted that both the "normal input stream" and the "buffered input stream" in Table 1 rely on an array: a waveform array must be created and repeatedly expanded, and all waveform data of the entire waveform file must be loaded into it, so a memory-overflow condition easily occurs on low-end terminal devices with little available memory. Therefore, to avoid memory overflow as far as possible while still reading data randomly without the array processing of the conventional scheme, only a memory-mapped file or the RandomAccessFile class can be chosen to provide random access. RandomAccessFile is a feature-rich file-access class in the Java input/output stream system; it offers many methods for accessing file contents and supports accessing data at any position in a file. As Table 1 shows, reads and writes through a memory-mapped file are faster, so when the recording waveform processing method provided by the embodiment of the present application is used for random positioning and marking during playback, processing is quicker, and the method also shows better compatibility and stability on devices with lower performance configurations and less available memory.
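The two usable options from Table 1 can be sketched in Java as follows. Both read a 4-byte value at an arbitrary offset without loading the whole file into an array; the mapped variant shares pages with the kernel's page cache and so avoids the extra user-space copy made by the RandomAccessFile read. This is a sketch of the general technique, not the embodiment's exact code.

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Sketch of the two random-access options compared in Table 1.
public class RandomAccessComparison {

    // RandomAccessFile: seek, then copy the bytes from the kernel into user space.
    static float readWithRandomAccessFile(String path, long offset) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
            raf.seek(offset);
            return raf.readFloat();
        }
    }

    // Memory-mapped file: the mapped page is shared with the page cache, no extra copy.
    static float readWithMappedFile(String path, long offset) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r");
             FileChannel channel = raf.getChannel()) {
            MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            return buffer.getFloat((int) offset);
        }
    }
}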
Optionally, as shown in fig. 7, the recording waveform processing method provided in the embodiment of the present application may further include:
S171: in response to a noise filtering operation initiated by a user, acquiring a filtering area corresponding to the noise filtering operation.
S172: deleting the audio data and the waveform data in the filtering area.
In an application scenario, the user can perform a noise filtering operation on the operation interface of the electronic terminal device while a recording is playing. From the user's noise filtering operation, the electronic terminal device can acquire the corresponding filtering area and then delete the audio data and waveform data within it, leaving only the audio data and waveform data that interest the user.
Through the implementation of S171-S172, invalid information can be quickly removed from a large amount of audio data and waveform data, leaving the more valuable recording content. It will be appreciated that, on the same principle, a segment of audio data and waveform data from a valid, valuable region may instead be selected and the selected content stored separately.
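A minimal sketch of S171-S172 for a single file, assuming the caller has already translated the on-screen filtering area into a byte range, is shown below; in the embodiment the same operation would be applied to both the audio file and the target waveform file, and the class and method names are illustrative.

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch of S171-S172: write a copy of a file that omits the byte range
// corresponding to the filtering area.
public class RegionFilter {

    public static void deleteRegion(Path source, Path target, long regionStart, long regionEnd)
            throws IOException {
        try (FileChannel in = FileChannel.open(source, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(target,
                     StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                     StandardOpenOption.TRUNCATE_EXISTING)) {
            copyRange(in, out, 0, regionStart);       // bytes before the filtering area
            copyRange(in, out, regionEnd, in.size()); // bytes after the filtering area
        }
    }

    // transferTo may move fewer bytes than requested, so loop until the range is copied.
    private static void copyRange(FileChannel in, FileChannel out, long start, long end)
            throws IOException {
        long position = start;
        while (position < end) {
            position += in.transferTo(position, end - position, out);
        }
    }
}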
The recording waveform processing method provided by the embodiment of the present application abandons storing waveform data in an array: the waveform data are read directly from the physical storage location of the waveform file into the physical memory address mapped by the first virtual memory address of the application process, and this direct file access avoids the time consumed by the array copying process. It also saves the heap memory the array would need to hold the waveform data; by making reasonable use of off-heap memory, the memory occupied by the application process is reduced and the memory-overflow problem is addressed. In addition, reading the target waveform file through the memory-mapped file facility provided by the Java nio library lets the data required for random access be reached quickly during recording playback; this is faster than RandomAccessFile-based random file access, greatly reduces the time spent copying data, and makes it convenient for the user to attach richer information to the target waveform file. If the target waveform file representing the volume amplitude information is stored in binary format during recording, the file takes less space, is easier to read and parse, and allows the waveform pattern to be drawn in real time during recording and playback. The scheme runs stably even on Android Go devices with low performance configurations and little available memory, supports multiple operating systems, and can process very long recordings and their corresponding waveforms.
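On the recording side, the binary target waveform file mentioned above could be produced by appending one 4-byte float of volume amplitude per block of captured samples, which is what keeps the file compact and trivially parseable in fixed-size steps. The amplitude computation and the class name below are assumptions made for illustration only.

import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

// Sketch of writing the binary target waveform file during recording:
// one 4-byte binary float of volume amplitude per block of PCM samples.
public class WaveformWriter implements AutoCloseable {
    private final DataOutputStream out;

    public WaveformWriter(String waveformPath) throws IOException {
        this.out = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(waveformPath)));
    }

    // Append one waveform point computed from a block of recorded PCM samples.
    public void appendAmplitude(short[] pcmBlock) throws IOException {
        long sum = 0;
        for (short s : pcmBlock) {
            sum += Math.abs((int) s);
        }
        float amplitude = pcmBlock.length == 0 ? 0f : (float) sum / pcmBlock.length;
        out.writeFloat(amplitude); // big-endian 4-byte float, matching the read side
    }

    @Override
    public void close() throws IOException {
        out.close();
    }
}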
The foregoing description describes certain embodiments of this specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Referring to fig. 8, fig. 8 is a functional structure block diagram of a recording waveform processing apparatus according to an embodiment of the present application. The device can be arranged in the electronic terminal equipment.
As shown in fig. 8, the recording waveform processing apparatus may include: an audio processing module 210 and a waveform processing module 220.
The audio processing module 210 is configured to play the audio content of the target sound recording file according to a play instruction for the target sound recording file.
The waveform processing module 220 is configured to, when the audio content is played, obtain, in a user mode, a first virtual memory address mapped by a target waveform file corresponding to the target sound recording file.
The waveform processing module 220 is further configured to search for a physical memory address corresponding to the first virtual memory address, where the physical memory address corresponding to the first virtual memory address is the same as a physical memory address mapped by the second virtual memory address in the kernel mode.
The waveform processing module 220 is further configured to search the waveform data of the target waveform file from the physical memory address.
The waveform processing module 220 is further configured to display a waveform corresponding to the currently played audio content according to the found waveform data.
Optionally, the waveform processing module 220 is further configured to: when the waveform data cannot be found at the physical memory address, switch from the user mode to the kernel mode; in the kernel mode, determine, according to a pre-established first mapping relation, the physical storage address of the target waveform file mapped by the first virtual memory address, and copy the waveform data of the target waveform file from the physical storage address to the physical memory address mapped by the first virtual memory address, where the first mapping relation includes the mapping relation between the physical storage address of the target waveform file and the first virtual memory address; and switch from the kernel mode to the user mode, and in the user mode search the physical memory address to obtain the waveform data.
Optionally, the waveform processing module 220 is further configured to: when the physical memory address corresponding to the first virtual memory address cannot be found, switch from the user mode to the kernel mode; in the kernel mode, select an idle physical memory address and map it to the first virtual memory address, so that the physical memory address is mapped by the first virtual memory address and the second virtual memory address together; and further configured to: in the kernel mode, determine the physical storage address of the target waveform file according to a pre-established first mapping relation, and copy the waveform data of the target waveform file from the physical storage address to the physical memory address mapped by the first virtual memory address, where the first mapping relation includes the mapping relation between the physical storage address of the target waveform file and the first virtual memory address; and switch from the kernel mode to the user mode, and in the user mode search the physical memory address to obtain the waveform data.
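The user-mode/kernel-mode fallback the module descriptions refer to is, in practice, the operating system's page-fault path: touching an address that is mapped but not yet resident causes the kernel to copy the data from the file's physical storage address into a physical page shared by both mappings. When the mapping is created through java.nio, residency can also be checked or hinted explicitly, as in this brief sketch (the helper name is illustrative):

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch of the lazy-loading behaviour behind the mode switch: isLoaded() and
// load() are best-effort residency hints provided by java.nio.
public class ResidencyHint {

    static MappedByteBuffer mapAndPrefault(Path waveformFile) throws IOException {
        try (FileChannel channel = FileChannel.open(waveformFile, StandardOpenOption.READ)) {
            MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            if (!buffer.isLoaded()) {
                buffer.load(); // ask the kernel to bring the file's pages into physical memory
            }
            return buffer; // the mapping stays valid after the channel is closed
        }
    }
}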
The target waveform file may be in a binary format.
Optionally, the waveform processing module 220 is further configured to: in the user mode, binary data with a set byte length is used as floating-point waveform data, and the waveform data obtained from the physical memory address is subjected to content analysis; and displaying the waveform content determined by analysis according to the playing time point corresponding to the currently played audio content.
Optionally, the waveform processing module 220 is further configured to: responding to marking operation initiated by a user, and acquiring a waveform marking position corresponding to the marking operation; and positioning and marking the corresponding waveform data at the waveform marking position.
Optionally, the waveform processing module 220 is further configured to: responding to noise filtering operation initiated by a user, and acquiring a filtering area corresponding to the noise filtering operation; the audio processing module 210 is further configured to delete the audio data in the filtering area, and the waveform processing module 220 is further configured to delete the waveform data in the filtering area.
The recording waveform processing apparatus provided in the embodiment shown in fig. 8 may be configured to execute the technical solutions of the method embodiments shown in fig. 1 to fig. 7 in this specification, and regarding other details of the recording waveform processing apparatus and the electronic terminal device, reference may be made to the contents related to the recording waveform processing method and the electronic terminal device in the embodiments, and reference may be made to the relevant descriptions in the method embodiments for the same implementation principles and technical effects, which will not be described herein again.
In this embodiment of the application, the electronic terminal device may be, but is not limited to, an intelligent electronic device such as a smartphone, a tablet computer, a notebook computer, or a smart wearable device.
For example, please refer to fig. 9, fig. 9 is a schematic diagram of an electronic terminal device 300 according to an embodiment of the present application, and fig. 9 illustrates a structural schematic diagram of the electronic terminal device 300 by taking a smart phone as an example.
As shown in fig. 9, the electronic terminal device 300 may include: an audio processing component 310, a display component 320, a processor 330, a memory 340, a power module 350, a communication module 360, a speaker 311, a receiver 312, a microphone 313, and a headset interface 314.
The audio processing component 310, the display component 320, the memory 340, the power module 350, the communication module 360, the speaker 311, the receiver 312, the microphone 313, and the headset interface 314 may each be in direct or indirect communication with the processor 330.
Memory 340 may be used to store computer-executable program code, which includes instructions. The memory 340 may include a program storage area and a data storage area. The program storage area may store an operating system, application programs required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data created during use of the electronic terminal device 300 (such as audio data or a phonebook), and the like. In addition, the memory 340 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or universal flash storage (UFS). The processor 330 performs the various functional applications and data processing of the electronic terminal device 300 by executing instructions stored in the memory 340 and/or instructions stored in a memory provided within the processor; for example, the processor 330 may implement the recording waveform processing method provided by the embodiments of the present application by executing program instructions stored in the memory 340.
Processor 330 may include one or more processing units, such as: the Processor 330 may include an Application Processor (AP), a modem Processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband Processor, and/or a Neural-Network Processing Unit (NPU), among others. The different processing units may be separate devices or may be integrated into one or more processors. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
An internal memory may also be provided in the processor 330 for storing instructions and data. In some embodiments, the memory in the processor 330 is a cache that holds instructions or data the processor 330 has just used or recycled. If the processor 330 needs those instructions or data again, it can fetch them directly from this cache, which avoids repeated accesses, reduces the waiting time of the processor 330, and thus improves system efficiency.
In some embodiments, the processor 330 may include one or more interfaces for communicatively coupling various components within the electronic terminal device 300. For example, the interfaces may include an inter-integrated circuit (I2C) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
The electronic terminal device 300 may implement audio functions, such as audio playing, recording, etc., through the audio processing component 310, the speaker 311, the receiver 312, the microphone 313, the earphone interface 314, and the application processor, etc.
The audio processing component 310 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio processing component 310 may also be used to encode and decode audio signals. In some embodiments, the audio processing component 310 may be disposed in the processor 330, or some functional modules of the audio processing component 310 may be disposed in the processor 330.
The speaker 311, also called a "horn", is used to convert an audio electrical signal into an acoustic signal. The electronic terminal device 300 can listen to music, play a recording, or listen to a hands-free call through the speaker 311.
A receiver 312, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic terminal device 300 receives a call or voice information, it can receive voice by placing the receiver 312 close to the human ear.
The microphone 313, also called "microphone", is used to convert a sound signal into an electrical signal. The electronic terminal device 300 may be provided with one or more microphones 313 for collecting sound signals, reducing noise, identifying sound sources, implementing directional recording functions, and the like.
The headset interface 314 is used to connect headphones. It may be a USB interface, a 3.5 mm Open Mobile Terminal Platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The electronic terminal device 300 can implement display functions through the GPU, the display component 320, and the application processor. The GPU is an image processing microprocessor coupled to the display component 320 and the application processor for performing mathematical and geometric calculations for graphics rendering. Processor 330 may include one or more GPUs that execute program instructions to generate or alter display information.
The display component 320 is used to display images, videos, and the like, for example waveform diagrams and waveform animations. The display component 320 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic terminal device 300 may include 1 or N display components 320, where N is a positive integer greater than 1.
In the embodiment of the present application, the audio processing component 310 is configured to play the audio content of the target sound recording file. The display component 320 is configured to display a waveform corresponding to the audio content. The memory 340 stores program instructions executable by the processor 330, and the program instructions when called by the processor 330 can execute the recorded waveform processing method according to the foregoing embodiment.
The power module 350 may be used to receive input from a battery and/or a charging module, to power the processor 330, the memory 340, the display assembly 320, the communication module 360, and the like. The power module 350 may also be used to monitor parameters such as battery capacity, battery cycle number, battery state of health (leakage, impedance), etc.
The communication module 360 may provide wired or wireless communication functions for the electronic terminal device 300. The wireless communication function of the electronic terminal device 300 may be implemented by an antenna, a communication module 360, a modem processor, a baseband processor, and the like. The antenna is used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic terminal device 300 may be used to cover a single or multiple communication bands. The mobile communication module in the communication module 360 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied on the electronic terminal device 300. In some embodiments, at least part of the functional modules of the mobile communication module may be provided in the processor 330. In some embodiments, at least some of the functional modules of the mobile communication module may be disposed in the same device as at least some of the modules of the processor 330.
It is to be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation to the electronic terminal device 300. In other embodiments of the present application, the electronic terminal device 300 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware. For example, the electronic terminal device 300 may further include a camera, a motor, a key, and the like.
The electronic terminal device 300 may be connected to an external memory card, such as a Micro SD card, to extend the storage capability of the electronic terminal device 300. The external memory card communicates with the processor 330 through an external memory interface to implement a data storage function. For example, a sound recording file and a waveform file are stored in an external memory card.
In addition to the foregoing embodiments, the present application provides a storage medium having a computer program stored thereon, where the computer program is executed by a processor to execute the recording waveform processing method of the foregoing embodiments. The storage medium may be a non-transitory computer readable storage medium.
The storage media described above may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
In the description of embodiments of the invention, reference to the description of the terms "embodiment," "example," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the specification. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature.
In the several embodiments provided in the present specification, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present description may be integrated into one processing unit, or each unit may exist alone physically, or two or more functional units may be integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute some steps of the methods described in the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (10)

1. A recording waveform processing method is applied to an electronic terminal device, and comprises the following steps:
playing the audio content of the target sound recording file according to the playing instruction of the target sound recording file;
when the audio content is played, acquiring a first virtual memory address mapped by a target waveform file corresponding to the target sound recording file in a user mode;
searching a physical memory address corresponding to the first virtual memory address, wherein the physical memory address corresponding to the first virtual memory address is the same as a physical memory address mapped by a second virtual memory address in a kernel mode;
searching waveform data of the target waveform file from the physical memory address;
and displaying the waveform corresponding to the currently played audio content according to the searched waveform data.
2. The method of claim 1, wherein the searching the waveform data of the target waveform file from the physical memory address comprises:
switching from the user mode to the kernel mode when the waveform data cannot be found from the physical memory address;
under the kernel mode, according to a pre-established first mapping relation, determining a physical storage address of the target waveform file mapped by the first virtual memory address, and copying waveform data of the target waveform file from the physical storage address to the physical memory address mapped by the first virtual memory address, wherein the first mapping relation comprises the mapping relation between the physical storage address of the target waveform file and the first virtual memory address;
and switching from the kernel mode to the user mode, and searching the physical memory address to obtain the waveform data in the user mode.
3. The method of claim 1, wherein the searching for the physical memory address corresponding to the first virtual memory address comprises:
when the physical memory address corresponding to the first virtual memory address cannot be found, switching from the user mode to the kernel mode;
in the kernel mode, selecting an idle physical memory address and the first virtual memory address to perform address mapping so that the physical memory address is mapped by the first virtual memory address and the second virtual memory address together;
the searching the waveform data of the target waveform file from the physical memory address comprises:
in the kernel mode, determining a physical storage address of the target waveform file according to a pre-established first mapping relation, and copying waveform data of the target waveform file from the physical storage address to the physical memory address mapped by the first virtual memory address, wherein the first mapping relation comprises the mapping relation between the physical storage address of the target waveform file and the first virtual memory address;
and switching from the kernel mode to the user mode, and searching the physical memory address to obtain the waveform data in the user mode.
4. The method of claim 1, wherein the target waveform file is in binary format.
5. The method according to claim 4, wherein the displaying the waveform corresponding to the currently played audio content according to the found waveform data comprises:
in the user mode, binary data with a set byte length is used as floating-point waveform data, and the waveform data obtained from the physical memory address is subjected to content analysis;
and displaying the waveform content determined by analysis according to the playing time point corresponding to the currently played audio content.
6. The method according to any one of claims 1-5, further comprising:
responding to marking operation initiated by a user, and acquiring a waveform marking position corresponding to the marking operation;
and positioning and marking the corresponding waveform data at the waveform marking position.
7. The method according to any one of claims 1-5, further comprising:
responding to noise filtering operation initiated by a user, and acquiring a filtering area corresponding to the noise filtering operation;
and deleting the audio data and the waveform data in the filtering area.
8. A recording waveform processing apparatus, applied to an electronic terminal device, the apparatus comprising:
the audio processing module is used for playing the audio content of the target recording file according to the playing instruction of the target recording file;
the waveform processing module is used for acquiring a first virtual memory address mapped by a target waveform file corresponding to the target sound recording file in a user mode when the audio content is played;
the waveform processing module is further configured to search for a physical memory address corresponding to the first virtual memory address, where the physical memory address corresponding to the first virtual memory address is the same as a physical memory address mapped by a second virtual memory address in a kernel mode;
the waveform processing module is further configured to search the waveform data of the target waveform file from the physical memory address;
the waveform processing module is further configured to display a waveform corresponding to the currently played audio content according to the found waveform data.
9. An electronic terminal device, comprising:
an audio processing component;
a display component;
at least one processor; and
at least one memory communicatively coupled to the processor, wherein:
the audio processing component is used for playing the audio content of the target sound recording file;
the display component is used for displaying a waveform corresponding to the audio content;
stored in the memory are program instructions executable by the processor, which when called by the processor are capable of performing the method of any one of claims 1 to 7.
10. A storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, performs the method of any one of claims 1 to 7.
CN202111098304.2A 2021-09-18 2021-09-18 Recording waveform processing method and device, electronic terminal equipment and storage medium Pending CN113792179A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111098304.2A CN113792179A (en) 2021-09-18 2021-09-18 Recording waveform processing method and device, electronic terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111098304.2A CN113792179A (en) 2021-09-18 2021-09-18 Recording waveform processing method and device, electronic terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113792179A true CN113792179A (en) 2021-12-14

Family

ID=78879069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111098304.2A Pending CN113792179A (en) 2021-09-18 2021-09-18 Recording waveform processing method and device, electronic terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113792179A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114579017A (en) * 2022-02-10 2022-06-03 优视科技(中国)有限公司 Method and device for displaying audio

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination