CN115022442B - Audio fault time positioning method, electronic equipment and storage medium - Google Patents


Info

Publication number: CN115022442B
Application number: CN202111540680.2A
Authority: CN (China)
Other versions: CN115022442A (Chinese)
Prior art keywords: audio, file, PCM, audio data, PCM file
Inventor: 赵俊杰 (Zhao Junjie)
Assignee (original and current): Honor Device Co Ltd
Legal status: Active (granted)


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 1/00: Substation equipment, e.g. for use by subscribers
    • H04M 1/24: Arrangements for testing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24: Querying
    • G06F 16/245: Query processing
    • G06F 16/2457: Query processing with adaptation to user needs
    • G06F 16/24573: Query processing with adaptation to user needs using data annotations, e.g. user-defined metadata
    • G06F 16/2458: Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F 16/2474: Sequence data queries, e.g. querying versioned data
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 29/00: Monitoring arrangements; Testing arrangements


Abstract

The present application provides an audio fault time positioning method, an electronic device, and a storage medium. The method comprises the following steps: a first electronic device stores audio data in a PCM file while playing the audio data, and the attribute values of the PCM file include the file creation time. A second electronic device parses the PCM file stored on the first electronic device, determines the relative time at which a first audio fault occurs within the PCM file, and determines the absolute time of the fault from the relative time and the file creation time. The first electronic device may be, for example, a mobile phone, and the second electronic device a personal computer. Maintenance or research-and-development personnel can thus retrieve the log of the first electronic device for the specific point in time at which the audio fault occurred, and use that log to locate the fault quickly and effectively, speeding up fault resolution and improving the user experience.

Description

Audio fault time positioning method, electronic equipment and storage medium
Technical Field
The present application relates to the field of intelligent terminals, and in particular to an audio fault time positioning method, an electronic device, and a storage medium.
Background
During audio playback, an intelligent terminal (such as a mobile phone) may exhibit faults such as POP sounds or noise. A POP sound is the popping noise produced by transient impacts on the audio device during operations such as the instant of powering on or off, before the supply stabilizes. Noise may be electrical hum, metallic ringing, background noise, and the like.
To facilitate audio fault positioning, the audio data produced at each processing stage of the playback pipeline is generally written to a PCM (Pulse Code Modulation) file for storage. A PCM file stores the binary sequence obtained directly from the analog audio signal through analog-to-digital conversion. However, when maintenance or research-and-development personnel analyze such a PCM file to locate an audio fault, they cannot determine the specific point in time at which the fault occurred, and therefore cannot position the fault quickly and effectively.
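Because a raw PCM file is headerless, the only temporal information inside it is the position of the samples themselves. A minimal sketch (not from the patent; the 48 kHz, stereo, 16-bit capture parameters are assumptions that must match how the file was recorded) of mapping a byte offset in such a file to a relative time:

```python
# Sketch (not from the patent): derive a fault's relative time from its
# sample position in a headerless PCM file. Parameters are assumptions.
SAMPLE_RATE = 48000      # Hz (assumed)
CHANNELS = 2             # stereo (assumed)
BYTES_PER_SAMPLE = 2     # 16-bit PCM (assumed)

def offset_to_seconds(byte_offset: int) -> float:
    """Convert a byte offset within the PCM stream to seconds from file start."""
    bytes_per_second = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE
    return byte_offset / bytes_per_second
```

With these parameters, for instance, an anomaly found 960000 bytes into the file began 5 seconds after the file was created.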
Disclosure of Invention
In order to solve the technical problems, the application provides an audio fault time positioning method, electronic equipment and a storage medium.
In a first aspect, the present application provides an audio fault time positioning method. The method comprises the following steps: a second electronic device (such as a personal computer) acquires a PCM file stored on a first electronic device (such as a mobile phone), where the PCM file contains audio data that the first electronic device stored while playing it, and the attribute values of the PCM file include the file creation time; the second electronic device parses the PCM file to determine the relative time at which a first audio fault occurs within it; the second electronic device then determines the absolute time of the first audio fault from the relative time and the file creation time. Maintenance or research-and-development personnel can thus retrieve the log of the first electronic device for the specific point in time at which the fault occurred, and use that log to locate the fault quickly and effectively, speeding up fault resolution and improving the user experience.
In one application scenario, a first electronic device is an electronic device that has an audio failure reported by a user. In another application scenario, the first electronic device is an electronic device that needs quality detection.
Illustratively, the attribute value of the "creation time" attribute of the PCM file stores its file creation time.
According to the first aspect, the relative time spans from the i-th second to the j-th second, and the file creation time is t0. The second electronic device determining the absolute time of the first audio fault based on the relative time and the file creation time comprises: the second electronic device determines the absolute time of the first audio fault as (t0 + i seconds) to (t0 + j seconds).
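A minimal sketch of this computation (not from the patent; the concrete creation time below is a hypothetical example):

```python
from datetime import datetime, timedelta

def absolute_fault_window(creation_time: datetime, i: int, j: int):
    """Map a fault spanning seconds i..j of the recording to the absolute
    (wall-clock) window, given the PCM file's creation time t0."""
    return (creation_time + timedelta(seconds=i),
            creation_time + timedelta(seconds=j))

t0 = datetime(2021, 12, 15, 10, 30, 0)   # hypothetical file creation time
start, end = absolute_fault_window(t0, 3, 5)
# the fault occurred between 10:30:03 and 10:30:05 on 2021-12-15
```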
According to the first aspect, or any implementation of the first aspect, the file name of the PCM file includes the file creation time. In this way, the creation time of the PCM file can be read directly from its name.
According to the first aspect, or any implementation of the first aspect, the audio fault includes at least one of: a POP sound fault and a noise fault.
According to the first aspect, or any implementation of the first aspect, after the second electronic device determines the absolute time at which the audio fault occurs, the method further includes: the second electronic device acquires the log of the first electronic device corresponding to that absolute time; and the second electronic device locates the audio fault in combination with the log.
In a second aspect, the present application provides an audio data storage method. The method comprises the following steps: while the electronic device is playing audio data, the electronic device acquires first audio data; the electronic device processes the first audio data to obtain second audio data, the second audio data being used for audio playback; the electronic device determines a first PCM file corresponding to the second audio data and writes the second audio data into it, wherein the attribute values of the first PCM file include the file creation time. In this way, the PCM file that stores audio data during playback carries its creation time, so the specific point in time of an audio fault can be determined from the relative time of the fault within the PCM file and the file's creation time. Maintenance or research-and-development personnel can then retrieve the device log for that point in time and locate the audio fault quickly and effectively in combination with the log.
Wherein the first PCM file corresponding to the second audio data refers to the first PCM file for storing the second audio data. The electronic device is a mobile phone, for example.
The processing that the electronic device performs on the first audio data may be simply reading it, or one or more of resampling, mixing, sound-effect processing, and the like. The first and second audio data may therefore be the same or different, and each may be the audio data of any stage of the device's audio playback pipeline.
That the second audio data is used for audio playback means either that the electronic device plays the second audio data directly, or that it plays audio data obtained by further processing the second audio data.
For example, the first audio data may be audio data issued by a media playing application, or audio data that has already been resampled and mixed.
Likewise, as an example, the second audio data may be audio data after resampling and/or mixing, or audio data after sound-effect processing.
According to a second aspect, the audio data storage method further comprises: the electronic device determines a second PCM file corresponding to the first audio data and writes the first audio data to the second PCM file. Wherein the second PCM file corresponding to the first audio data refers to the second PCM file for storing the first audio data. Thus, the audio data in any link involved in the audio playing process of the electronic equipment are written into the corresponding PCM file, and further, the specific time point of occurrence of the audio fault in each link can be determined.
According to a second aspect, or any implementation of the second aspect above, the file name of the PCM file includes a file creation time. In this way, the file creation time of the PCM file is highly readable.
According to the second aspect, or any implementation of the second aspect, the electronic device determining a first PCM file corresponding to the second audio data includes: the electronic device determines, according to the source of the second audio data, whether a first PCM file corresponding to the second audio data already exists; if not, the electronic device creates a new PCM file as the first PCM file corresponding to the second audio data.
Similarly, the electronic device determining a second PCM file corresponding to the first audio data includes: the electronic device determines, according to the source of the first audio data, whether a second PCM file corresponding to the first audio data already exists; if not, the electronic device creates a new PCM file as the second PCM file corresponding to the first audio data.
According to the second aspect, or any implementation of the second aspect, the electronic device creating a new PCM file as the first PCM file corresponding to the second audio data includes: the electronic device determines a first character string representing the source of the second audio data; the electronic device acquires the current system time and splices the first character string with it to obtain a second character string; the electronic device then creates, based on the second character string, a new PCM file as the first PCM file corresponding to the second audio data. In this way, the electronic device adds the file creation time to the PCM file name, making the creation time easy to read from the name.
Similarly, the electronic device creates a new PCM file as a second PCM file corresponding to the first audio data, including: the electronic equipment determines a first character string; wherein the first character string is used for representing a source of the first audio data; the electronic equipment acquires the current time of the system, and splices the first character string and the current time of the system to obtain a second character string; the electronic device creates a new PCM file as a second PCM file corresponding to the first audio data based on the second string.
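The splicing described above can be sketched as follows (not from the patent; the source string "effect_out" and the timestamp format are assumptions):

```python
from datetime import datetime

def make_pcm_filename(source: str, now: datetime) -> str:
    """Splice the source string (the first character string) with the current
    system time to form the second character string used as the file name."""
    return f"{source}_{now.strftime('%Y%m%d_%H%M%S')}.pcm"

# hypothetical example: a file for sound-effect output created at 10:30:00
name = make_pcm_filename("effect_out", datetime(2021, 12, 15, 10, 30, 0))
# -> "effect_out_20211215_103000.pcm"
```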
According to the second aspect, or any implementation of the second aspect, the audio data storage method further includes: the electronic device acquires the name of the second audio data and the ID corresponding to that name. The electronic device determining, according to the source of the second audio data, whether a first PCM file corresponding to the second audio data exists includes: the electronic device determines whether a target PCM file exists whose file name contains the string obtained by splicing the second audio data name with its corresponding ID. The electronic device determining the first character string includes: the electronic device splices the second audio data name with its corresponding ID to obtain the first character string.
Similarly, the audio data storage method further includes: the electronic device acquires the name of the first audio data and the ID corresponding to that name. The electronic device determining, according to the source of the first audio data, whether a second PCM file corresponding to the first audio data exists includes: the electronic device determines whether a target PCM file exists whose file name contains the string obtained by splicing the first audio data name with its corresponding ID. The electronic device determining the first character string includes: the electronic device splices the first audio data name with its corresponding ID to obtain the first character string.
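A sketch of the existence check and creation flow (not from the patent; the directory layout, the `<name>_<id>` prefix convention, and the timestamp suffix are assumptions):

```python
import os
from datetime import datetime

def find_or_create_pcm_file(directory: str, name: str, stream_id: int) -> str:
    """Reuse the PCM file whose name starts with the spliced '<name>_<id>'
    string if one exists; otherwise create a new, time-stamped one."""
    prefix = f"{name}_{stream_id}"                      # spliced first string
    for fname in sorted(os.listdir(directory)):
        if fname.startswith(prefix) and fname.endswith(".pcm"):
            return os.path.join(directory, fname)       # target file exists
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")    # creation time in name
    path = os.path.join(directory, f"{prefix}_{stamp}.pcm")
    open(path, "wb").close()                            # create the new file
    return path
```

Calling it a second time with the same name and ID returns the same file instead of creating another one.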
According to a second aspect, or any implementation of the second aspect above, the source of the first audio data or the second audio data comprises at least one of: audio data issued by the media playing application, audio data after resampling and mixing processing, and audio data after sound effect processing.
According to the second aspect, or any implementation of the second aspect, before the electronic device plays the audio data, the method further includes: the electronic device performs ROOT processing on itself in response to a received first operation; and the electronic device enables the PCM file storage function in its audio permissions in response to a received second operation. Audio data is thus saved during playback only after the electronic device has been rooted and its PCM file storage function enabled, allowing research-and-development or after-sales personnel to locate audio faults based on the PCM files stored on the device.
In a third aspect, the present application provides an audio data storage method. The method comprises the following steps: while the electronic device is playing audio data, the electronic device acquires first audio data issued by a media playing application; the electronic device determines a first PCM file corresponding to the first audio data and writes the first audio data into it, wherein the attribute values of the first PCM file include its creation time; the electronic device resamples and/or mixes the first audio data to obtain second audio data; the electronic device determines a second PCM file corresponding to the second audio data and writes the second audio data into it, wherein the attribute values of the second PCM file include its creation time; the electronic device performs sound-effect processing on the second audio data to obtain third audio data; the electronic device determines a third PCM file corresponding to the third audio data and writes the third audio data into it, wherein the attribute values of the third PCM file include its creation time. In this way, the PCM file of every stage of the playback pipeline carries its creation time, so the specific point in time of an audio fault can be determined from the relative time of the fault within a PCM file and that file's creation time, and maintenance or research-and-development personnel can retrieve the device log for that point in time and locate the fault quickly and effectively.
In a fourth aspect, the present application provides an electronic device. The electronic device includes: one or more processors; a memory; and one or more computer programs stored in the memory which, when executed by the one or more processors, cause the electronic device to perform the audio fault time positioning method of the first aspect or any implementation of the first aspect.
Any implementation manner of the fourth aspect and any implementation manner of the fourth aspect corresponds to any implementation manner of the first aspect and any implementation manner of the first aspect, respectively. Technical effects corresponding to any implementation manner of the fourth aspect may be referred to the technical effects corresponding to any implementation manner of the first aspect, and are not described herein.
In a fifth aspect, the present application provides an electronic device. The electronic device includes: one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored on the memory, which when executed by the one or more processors, cause the electronic device to perform the audio data storage method of any of the second aspect and the second aspect, or cause the electronic device to perform the audio data storage method of any of the third aspect and the third aspect.
Any implementation manner of the fifth aspect and any implementation manner of the fifth aspect corresponds to any implementation manner of the second aspect and any implementation manner of the second aspect, or corresponds to any implementation manner of the third aspect and any implementation manner of the third aspect, respectively. Technical effects corresponding to any implementation manner of the fifth aspect may be referred to technical effects corresponding to any implementation manner of the second aspect or the second aspect, or technical effects corresponding to any implementation manner of the third aspect or the third aspect may be referred to, which are not repeated here.
In a sixth aspect, the present application provides a computer-readable storage medium comprising a computer program which, when run on an electronic device, causes the electronic device to perform the audio fault time positioning method of the first aspect or any implementation of the first aspect, or the audio data storage method of the second aspect or any implementation of the second aspect, or the audio data storage method of the third aspect or any implementation of the third aspect.
Any implementation of the sixth aspect corresponds to the first, second, or third aspect or any implementation thereof, respectively. The technical effects corresponding to any implementation of the sixth aspect may be found in those of the corresponding implementation of the first, second, or third aspect, and are not repeated here.
Drawings
FIG. 1 is a schematic illustration of an exemplary application scenario;
fig. 2 is a schematic diagram of a hardware structure of an exemplary electronic device;
FIG. 3 is a schematic diagram of a software architecture of an exemplary electronic device;
fig. 4 is a schematic diagram of module interaction provided in an embodiment of the present application;
fig. 5 is a flowchart illustrating the storage of audio data in a PCM file according to an embodiment of the present application;
fig. 6 is a flowchart of an audio failure time positioning method according to an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating an exemplary PCM file parsing;
FIG. 8 is another exemplary PCM file parsing scheme;
fig. 9 is a schematic view of an exemplary device.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may represent: A exists alone, A and B exist together, or B exists alone.
The terms first and second and the like in the description and in the claims of embodiments of the present application are used for distinguishing between different objects and not necessarily for describing a particular sequential order of objects. For example, the first target object and the second target object, etc., are used to distinguish between different target objects, and are not used to describe a particular order of target objects.
In the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" means two or more. For example, the plurality of processing units refers to two or more processing units; the plurality of systems means two or more systems.
In one application scenario, faults such as POP sounds or noise may occur when a user plays audio on an electronic device (such as a mobile phone). The user can then hand the faulty device over to research-and-development or after-sales personnel for fault analysis and handling, for example through a device exchange. As shown in fig. 1, a developer or after-sales person may root the faulty mobile phone using rooting software. After ROOT privileges are obtained, the developer or after-sales person may enable the PCM file storage function in the audio permissions, referring to fig. 1.
When the PCM file storage function of the mobile phone is enabled, the audio data processed at each stage is stored in its corresponding PCM file while the phone plays audio. During playback, the audio player reads the audio data issued by the media playing application and sends it to the resampling and mixing module; the resampling and mixing module resamples and/or mixes the received audio data and sends the result to the sound effect processing module; and the sound effect processing module applies sound effects to the received audio data and sends the result to the audio hardware abstraction layer for playback.
For example, during playback the audio data issued by the media playing application is stored in PCM file 1, named, say, "audio data issued by the application". The resampled and mixed audio data is then stored in PCM file 2, named, say, "audio data after resampling and mixing". Finally, the audio data after sound effect processing is stored in PCM file 3, named, say, "audio data after sound effect processing".
Therefore, after research-and-development or after-sales personnel reproduce the audio fault, they can analyze it based on the audio data recorded in the PCM files stored on the phone. For example, they may parse the PCM files with specialized software to locate the fault, such as confirming which stage failed and caused the audio anomaly.
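As a rough illustration of such analysis (not from the patent; the 16-bit little-endian sample format and the jump threshold are assumptions), a transient POP can be flagged by scanning for abrupt sample-to-sample jumps:

```python
import struct

def find_pop_sample_indices(pcm_bytes: bytes, threshold: int = 20000) -> list:
    """Flag sample indices where consecutive 16-bit little-endian PCM samples
    jump by more than `threshold`: a crude proxy for a POP transient."""
    n = len(pcm_bytes) // 2
    samples = struct.unpack(f"<{n}h", pcm_bytes[:n * 2])
    return [i for i in range(1, n)
            if abs(samples[i] - samples[i - 1]) > threshold]
```

Dividing a flagged index by the sample rate gives the relative time of the suspected fault within the file, which the method above then converts to an absolute time using the file creation time.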
In another possible application scenario, faults such as POP sounds or noise may occur when a user plays audio on an electronic device (such as a mobile phone). The user can then enable the PCM file storage function in the phone's audio permissions, so that the audio data of each processing stage is stored while the abnormal playback occurs. When the user later hands the phone to research-and-development or after-sales personnel, for example through a device exchange, they can analyze the audio fault directly from the stored PCM files without reproducing it. In this scenario the PCM file storage function can be enabled or disabled under ordinary user permissions, so the user can decide for themselves whether to enable it. To save storage space, the PCM file storage function in the phone's audio permissions is disabled by default.
However, research-and-development or after-sales personnel cannot directly determine from the stored PCM files the specific point in time at which the audio anomaly occurred, and therefore cannot locate the fault quickly and effectively, resolve the user's problem promptly, or return the phone to the user as soon as possible.
Fig. 2 shows a schematic structural diagram of the electronic device 100. Alternatively, the electronic device 100 may be a terminal, which may also be referred to as a terminal device, and the terminal may be a cellular phone (cellular phone) or a tablet computer (pad), which is not limited in this application. It should be noted that the schematic structural diagram of the electronic device 100 may be applied to the mobile phone in fig. 1. It should be understood that the electronic device 100 shown in fig. 2 is only one example of an electronic device, and that the electronic device 100 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in fig. 2 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The electronic device 100 may include: processor 110, external memory interface 120, internal memory 121, universal serial bus (universal serial bus, USB) interface 130, charge management module 140, power management module 141, battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headset interface 170D, sensor module 180, keys 190, motor 191, indicator 192, camera 193, display 194, and subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor, a gyroscope sensor, a barometric sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be the nerve center and command center of the electronic device 100. The controller can generate operation control signals according to the instruction operation codes and timing signals to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the processor 110 may include one or more interfaces, such as a PCM interface, a universal serial bus (universal serial bus, USB) interface, or the like. PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface to implement a function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication. The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices, etc.
The charge management module 140 is configured to receive a charge input from a charger. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142. The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The display screen 194 is used to display images, videos, and the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like. The camera 193 is used to capture still images or video. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: dynamic picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121, so that the electronic device 100 implements the audio data storage method in the embodiments of the present application. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 100 may listen to music, or to hands-free conversations, through the speaker 170A.
The receiver 170B, also referred to as an "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 100 is answering a telephone call or a voice message, voice can be heard by placing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak with the mouth close to the microphone 170C to input a sound signal to the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the electronic device 100 may be provided with three, four, or more microphones 170C to enable sound signal collection, noise reduction, sound source identification, directional recording, and the like.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor is used for sensing a pressure signal and can convert the pressure signal into an electric signal. In some embodiments, the pressure sensor may be provided on the display screen 194. In some embodiments, touch operations that act on the same touch location, but at different touch operation strengths, may correspond to different operation instructions.
The keys 190 include a power-on key, a volume key, etc. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In this embodiment, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
Fig. 3 is a software configuration block diagram of the electronic device 100 according to the embodiment of the present application.
The layered architecture of the electronic device 100 divides the software into several layers, each with a distinct role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, from top to bottom, an application layer, an application framework layer, a hardware abstraction layer (hardware abstraction layer, HAL), and a kernel layer, respectively.
The application layer may include a series of application packages.
As shown in fig. 3, the application package may include a media play application. The media playing application may be any application capable of playing audio.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 3, the application framework layer may include an audio player, a resampling and mixing module, a sound effect processing module, a PCM file storage module, and the like.
The audio player is used for reading audio data to be played by the media playing application. The audio data may be an undecoded audio file (e.g., MP3 file) or a decoded audio file (e.g., PCM file). For example, the audio Player may be a Media Player, audioTrack, or the like.
The resampling and mixing module is used for carrying out resampling operation and/or mixing operation on audio data to be played by the media playing application. The resampling and mixing module can also be composed of a resampling module and a mixing module, wherein the resampling module resamples the audio data, and the mixing module mixes at least two paths of audio data.
The sound effect processing module is used for performing sound effect processing on the resampled and mixed audio data. For example, the sound effect processing module may add dolby sound effects, subwoofer sound effects, and the like to the audio data.
The PCM file storage module is used for writing the audio data into the PCM file, such as the audio data read by the audio player, the audio data processed by the resampling and mixing module and the audio data processed by the sound effect processing module are respectively written into the corresponding PCM file.
The HAL layer is an interface layer between the operating system kernel and the hardware circuitry. HAL layers include, but are not limited to: an audio hardware abstraction layer (audio HAL). Wherein the audio HAL is used for processing the audio data, for example, noise reduction, directional enhancement, etc. of the audio data.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver. Wherein the audio driver is used to drive audio playback hardware such as speakers, headphones, etc.
It will be appreciated that the layers and components contained in the layers in the software structure shown in fig. 3 do not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer layers than shown, and more or fewer components may be included in each layer, as the present application is not limited.
It will be appreciated that the electronic device, in order to implement the audio data storage method of the present application, includes corresponding hardware and/or software modules that perform the various functions. The steps of an algorithm for each example described in connection with the embodiments disclosed herein may be embodied in hardware or a combination of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application in conjunction with the embodiments, but such implementation is not to be considered as outside the scope of this application.
The embodiment of the application provides an audio data storage method. Specifically, when the PCM file storage function of an electronic device (such as a mobile phone) is turned on, the audio data processed in each link is stored in a corresponding PCM file while the electronic device plays the audio data. The attribute values of each PCM file include the file creation time, which may be expressed in the form of a time stamp. Illustratively, the file creation time is stored in the attribute value of the "creation time" attribute of each PCM file. Further illustratively, the file name of each PCM file includes the file creation time, for example by adding the creation time stamp to the file name. The creation time stamp refers to the time stamp corresponding to the creation time of the PCM file, for example "1104212941397", which represents November 4, 21:29:41.397.
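The creation time stamp in the example above appears to encode month, day, hour, minute, second, and millisecond, with the year omitted. As an illustrative sketch only (the exact stamp layout is an assumption inferred from the example "1104212941397"), such a stamp could be produced as follows:

```python
from datetime import datetime

def creation_timestamp(dt: datetime) -> str:
    """Format a creation time as an MMDDHHMMSSmmm string (assumed layout:
    month, day, hour, minute, second, millisecond; year omitted)."""
    return dt.strftime("%m%d%H%M%S") + f"{dt.microsecond // 1000:03d}"

# November 4, 21:29:41.397 -> "1104212941397"
stamp = creation_timestamp(datetime(2021, 11, 4, 21, 29, 41, 397000))
```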
Thus, based on the created time stamp of the PCM file and the relative time (i.e., i-th to j-th seconds (i < j)) at which the audio fault (e.g., POP sound, noise, etc.) occurs in the PCM file, a specific time (also referred to as absolute time) at which the audio fault occurs can be calculated. Furthermore, the research personnel or after-sales personnel can combine the electronic equipment log in the specific time to quickly and effectively locate the audio fault so as to quickly solve the problem of the user and return the mobile phone to the user as soon as possible.
In the following, an electronic device is taken as an example of a mobile phone, and a flow of writing the audio data processed in each link into a PCM file in a process of playing the audio data by a media playing application (such as a music playing application) is explained.
Fig. 4 is a schematic diagram showing interaction between the modules of the electronic device. Referring to fig. 4, an audio data playing flow corresponding to the audio data storage method provided in the present application specifically includes:
s401, responding to the received user operation, the media playing application creates an audio player.
The user operation refers to an operation triggered by the user to instruct the media playing application to play audio. By way of example, the user operation may be a tap on the play button in the display interface of the media playing application.
In response to the received user operation, the media playing application creates a matching type of audio player in the application framework layer according to the file type of the audio to be played. For example, if the audio to be played is an undecoded audio file, such as an MP3, WAV, etc., the audio Player created in the application framework layer by the Media playing application is a Media Player. Still further exemplary, if the audio to be played is a decoded audio file, which may be a PCM file, for example, the audio player created in the application framework layer by the media playing application is AudioTrack. If the audio player created by the media playing application is AudioTrack, the media playing application also needs to specify playing parameters of the audio data, such as sampling rate, channel number, bit width, etc.
It should be noted that when the audio playing of the media playing application is completed, or when the media playing application exits, the audio player created by the media playing application at this time may be destroyed. When the media playing application needs to play the audio again in response to the received user operation, the audio player is re-created.
The audio players created by different media playing applications, or the audio players created by the same media playing application at different moments, can be distinguished by Identity (ID). Multiple audio players created by different media playback applications may exist simultaneously. Optionally, the IDs of the audio players are numbered sequentially according to the creation time. Illustratively, when the media playing application needs to create an audio player, the media playing application determines the ID of the audio player it is to create according to the currently cached audio player ID, and creates an audio player according to the ID. For example, if the currently cached audio player IDs are 1 through n, the media playing application determines the ID of the audio player it is to create as n+1. For another example, if there is no cached audio player ID currently, the media playback application may determine the ID of the audio player it is to create as 1.
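The ID allocation rule described above (the next ID is n+1 when IDs 1 through n are cached, and 1 when no ID is cached) can be sketched as follows; the function name is illustrative, not taken from the patent:

```python
def next_player_id(cached_ids):
    """Return the ID for a newly created audio player: one past the largest
    cached audio player ID, or 1 when no IDs are cached."""
    return max(cached_ids) + 1 if cached_ids else 1
```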
S402, the audio player reads the audio data to be played of the media playing application, and calls the PCM file storage module to write the audio data to be played into the PCM file 1, wherein the PCM file 1 is marked as a PCM file for recording the audio data issued by the application.
After the media playing application creates the audio player, the media playing application may send the storage address of the audio file corresponding to the audio data to be played, for example a URL (Uniform Resource Locator) or a data storage path within the media playing application, to the audio player. The audio player reads the audio data to be played of the media playing application according to the storage address, and calls the PCM file storage module to write the audio stream it reads into the PCM file 1.
The audio player may call the PCM file storage module via a file write call function (e.g., fileWrite function) to complete writing of audio data to the PCM file. Exemplary parameters of the file write function may include, but are not limited to: audio data name, ID, and audio data to be stored.
In this embodiment, when the audio player invokes the PCM file storage module to perform the operation of writing audio data into the PCM file, the parameters of the file write function may include: the audio data name, which is "record the audio data issued by the application", such as PCM_FROM_APP; the ID, which is the ID of the audio player; and the audio data to be stored, which is the audio data to be played of the media playing application read by the audio player. Here, "record the audio data issued by the application" is merely an exemplary expression for storing audio data in the link where the audio player reads the audio data issued by the application, which is not limited in this embodiment.
In this embodiment, the PCM file storage module may perform a data writing process as shown in fig. 5 in response to a call of the audio player. Referring to fig. 5, the process of storing audio data in an audio data link issued by an audio player by using the PCM file storage module provided in this embodiment specifically includes:
s501, the PCM file storage module acquires the name and ID of the audio data in the file writing calling function.
When the PCM file storage module is called by the audio player, the name of the audio data acquired by the PCM file storage module is 'recording the audio data (such as PCM_FROM_APP) issued by the application', and the ID acquired by the PCM file storage module is the ID of the audio player.
S502, the PCM file storage module splices the audio data name and the ID to determine a first character string.
Assuming that the ID of the audio player is 1, the PCM file storage module splices the audio data name "PCM_FROM_APP" and the ID "1" to obtain the first character string. The first character string is used to represent the source of the audio data to be stored. The splicing character is not specifically limited in this embodiment and may be, for example, "_" or "&". Illustratively, the spliced string may be "PCM_FROM_APP_1".
S503, the PCM file storage module judges whether the PCM file containing the first character string in the file name is created, if not, S504 is executed, if so, S506 is executed.
In this embodiment, the file name of the PCM file includes a creation time stamp of the PCM file in addition to the splice string including the audio data name and the ID. Wherein the splice string of audio data name and ID is used to indicate the source of audio data stored in the PCM file; the creation time stamp is used to indicate the creation time of the PCM file.
For example, the PCM file created in the link where the audio player reads the audio data issued by the application may have the file name: PCM_FROM_APP_1_1104212941397.pcm. "PCM_FROM_APP_1" is the spliced character string of the audio data name and the ID, and is used to indicate that the audio data stored in the PCM file is the audio data issued by the media playing application and that the corresponding audio player ID is 1; "1104212941397" is the creation time stamp of the PCM file, and is used to indicate the creation time of the PCM file, specifically November 4, 21:29:41.397. Here, "1104212941397" is merely an exemplary expression of the time stamp, and the expression form of the time stamp is not limited in this embodiment.
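Conversely, given such a file name, the source string and the creation time stamp can be recovered by splitting at the last separator. A minimal sketch (hypothetical helper, assuming the "name_ID_timestamp.pcm" layout described above):

```python
def parse_pcm_filename(name: str):
    """Split a name like 'PCM_FROM_APP_1_1104212941397.pcm' into the source
    string (audio data name plus player ID) and the creation time stamp."""
    stem = name.rsplit(".", 1)[0]            # drop the .pcm suffix
    source, timestamp = stem.rsplit("_", 1)  # the time stamp is the last field
    return source, timestamp
```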
The PCM file storage module determines whether a PCM file whose file name contains the first string has been created in the system, for example whether a PCM file whose file name contains the string "PCM_FROM_APP_1" has been created. For example, if a PCM file with the file name "PCM_FROM_APP_1_1104212941397.pcm" already exists in the system, the PCM file storage module determines that a PCM file whose file name includes the first string "PCM_FROM_APP_1" has been created in the system. At this time, the PCM file storage module may directly write the audio data to be stored by the audio player into the PCM file with the file name "PCM_FROM_APP_1_1104212941397.pcm".
If the PCM file storage module determines that no PCM file whose file name contains the first string has been created in the system, for example no PCM file whose file name contains the string "PCM_FROM_APP_1" has been created, the PCM file storage module needs to create such a PCM file at this time.
S504, the PCM file storage module acquires the current time of the system, and splices the first character string and the current time of the system to obtain a second character string.
When the PCM file storage module needs to create a PCM file with a file name containing a first character string, the current time of the system is acquired, and the current time and the first character string are spliced, for example, a timestamp corresponding to the current time is spliced on the first character string, and then a second character string is obtained.
If the PCM file storage module determines that no PCM file whose file name contains the first character string "PCM_FROM_APP_1" has been created in the system, the PCM file storage module acquires the current time of the system and splices the time stamp "1104221001258" corresponding to the current time onto the first character string to obtain the second character string. The splicing character is not specifically limited in this embodiment and may be, for example, "_" or "&". Illustratively, the second string after splicing the time stamp may be "PCM_FROM_APP_1_1104221001258".
S505, the PCM file storage module creates a PCM file according to the second character string.
Illustratively, the PCM file storage module concatenates the PCM file suffix after the second string to determine a file name and creates the PCM file from the file name.
The PCM file storage module splices the PCM file suffix ".pcm" after the second character string to obtain the file name of the PCM file. For example, if the string after splicing the time stamp is "PCM_FROM_APP_1_1104221001258", the PCM file storage module splices the suffix ".pcm" after the second string to obtain the file name "PCM_FROM_APP_1_1104221001258.pcm" of the PCM file.
After determining the file name of the PCM file, the PCM file storage module creates the PCM file according to the file name. Illustratively, after the PCM file is created, a PCM file with the file name "PCM_FROM_APP_1_1104221001258.pcm" exists in the system, that is, a PCM file whose file name includes the spliced string "PCM_FROM_APP_1" has been created in the system. Furthermore, the PCM file storage module may write the audio data to be stored into this PCM file, that is, into the PCM file whose file name contains the string "PCM_FROM_APP_1".
S506, the PCM file storage module writes the audio data to be stored into the corresponding PCM file.
In this step, the PCM file in which the audio data is written refers to a PCM file in which a splice string of the audio data name and the ID is included in the file name. If the PCM file storage module judges that the PCM file exists in the system, the audio data to be stored is directly written into the PCM file. If the PCM file storage module judges that the PCM file does not exist in the system, the audio data to be stored is written into the PCM file after the corresponding PCM file is newly built.
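The get-or-create behavior of S501 to S506 can be sketched as follows. This is a simplified illustration only (the actual module runs inside the Android audio framework); the class and method names and the plain directory scan are assumptions of the sketch:

```python
import os
import time

class PcmFileStore:
    """Sketch of the PCM file storage module (S501-S506): one PCM file per
    (audio data name, ID) pair, created on first write with a creation
    time stamp spliced into its name."""

    def __init__(self, directory: str):
        self.directory = directory

    def file_write(self, data_name: str, player_id: int, audio_bytes: bytes) -> str:
        first = f"{data_name}_{player_id}"          # S502: splice name and ID
        # S503: look for an existing file whose name contains the first string
        for entry in os.listdir(self.directory):
            if entry.startswith(first + "_"):
                path = os.path.join(self.directory, entry)
                break
        else:
            # S504/S505: splice the current-time stamp and create the file
            now = time.time()
            stamp = time.strftime("%m%d%H%M%S") + f"{int(now * 1000) % 1000:03d}"
            path = os.path.join(self.directory, f"{first}_{stamp}.pcm")
        # S506: append the audio data to the corresponding PCM file
        with open(path, "ab") as f:
            f.write(audio_bytes)
        return os.path.basename(path)
```

A second call with the same audio data name and ID appends to the file created by the first call, so one PCM file accumulates the whole stream for that link.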
It should be noted that each time any media playing application creates an audio player, the PCM file storage module creates a PCM file corresponding to that audio player, so as to store the audio data issued by the media playing application and read by the audio player. For example, in the process of playing audio by the media playing application, the file name of the PCM file created in the link where the audio player reads the audio data issued by the application may be recorded as "PCM_FROM_APP_&lt;ID&gt;_&lt;creation timestamp&gt;.pcm".
An electronic device (such as a personal computer) may export, from the mobile phone according to the storage address of the PCM files, the PCM files whose file names include the string "PCM_FROM_APP", that is, the PCM files created in the link where the audio player reads the audio data issued by the application. The electronic device then analyzes these PCM files with professional software to determine whether an audio fault, such as a POP sound or noise, exists in any of them.
If any one of the PCM files has an audio fault, the electronic device can acquire not only the relative time (i.e. the ith second to the jth second (i < j)) of the audio fault in the PCM file, but also the creation time of the PCM file according to the file name of the PCM file. Thus, the electronic device can calculate the absolute time of occurrence of the audio fault according to the creation time of the PCM file and the relative time of occurrence of the audio fault in the PCM file. That is, based on the PCM file creation time, the relative time of the occurrence of the audio fault in the PCM file is accumulated, so as to obtain the absolute time of the occurrence of the audio fault.
Illustratively, the time stamp corresponding to the creation time of the PCM file is "1104221001258", and the relative time of occurrence of the audio fault in the PCM file is the 3rd to the 5th second; the absolute time of occurrence of the audio fault is then the period from time stamp "1104221004258" to time stamp "1104221006258", that is, from November 4, 22:10:04.258 to November 4, 22:10:06.258. Furthermore, the electronic device can export the mobile phone log of the corresponding time period according to the absolute time of occurrence of the audio fault, so that developers or after-sales personnel can, by combining the mobile phone log, quickly and effectively locate the mobile phone audio fault in the link where the audio player reads the audio data issued by the application, and solve the user's problem as early as possible.
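The arithmetic above (creation time stamp plus relative fault interval) can be sketched as follows. Since the MMDDHHMMSSmmm stamp omits the year, the year must be supplied separately, which is an assumption of this sketch:

```python
from datetime import datetime, timedelta

def fault_window(creation_stamp: str, start_s: int, end_s: int, year: int = 2021):
    """Add the relative fault interval (start_s..end_s seconds) to an
    MMDDHHMMSSmmm creation time stamp; return the absolute fault window."""
    base = datetime.strptime(str(year) + creation_stamp, "%Y%m%d%H%M%S%f")
    return base + timedelta(seconds=start_s), base + timedelta(seconds=end_s)

# Creation time 22:10:01.258 on November 4, fault in seconds 3..5
start, end = fault_window("1104221001258", 3, 5)
```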
S403, the audio player sends the read audio data to be played of the media playing application to the resampling and mixing module.
The present embodiment does not limit the order of S402 and S403.
S404, the resampling and mixing module performs resampling and/or mixing processing on the received audio data, and calls the PCM file storage module to write the audio data output by the resampling and mixing module into the PCM file 2, wherein the PCM file 2 is marked as a PCM file for recording the audio data subjected to the resampling and mixing processing.
In response to a received user operation, the media playing application creates an audio player, and the thread management service in the application framework layer of the system creates an audio playing thread for playing the audio data. The audio player, the resampling and mixing module, and the sound effect processing module all run on the audio playing thread.
It should be noted that the audio playing thread, when created, is bound to an OUTPUT of the audio HAL. After the audio playing thread is bound to a certain OUTPUT of the audio HAL, the sampling rate of the audio data allowed to be played on the audio playing thread is also determined, namely the sampling rate matching that OUTPUT. For example, if the number of the audio HAL OUTPUT bound to the audio playing thread is 29, only audio data with a sampling rate of 48000 is allowed to be played on the audio playing thread, and audio data with other sampling rates is not. After the resampling and mixing module and the sound effect processing module are running on the audio playing thread, they can acquire the number of the audio HAL OUTPUT bound to the audio playing thread.
In this step, the resampling and mixing module performs corresponding processing on the received audio data according to the actual situation, which may be resampling only, mixing only, both resampling and mixing, or no processing at all.
For example, if the resampling and mixing module receives only the audio data sent by one audio player, the resampling and mixing module determines whether to need to resample the audio data according to the sampling rate of the audio data. If the sampling rate of the audio data is not consistent with the sampling rate allowed by the audio playing thread, the resampling and mixing module needs to resample the audio data, otherwise, the resampling process of the audio thread is not needed.
Also, for example, if the resampling and mixing module receives audio data sent by at least two audio players (audio players created by different media playing applications respectively) that are simultaneously present, the resampling and mixing module needs to determine whether to perform resampling processing on the audio data according to whether the sampling rate of the audio data matches the sampling rate allowed by the audio playing thread, and also needs to perform mixing processing on the audio data sent by the at least two audio players.
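The decision logic described above — resample any stream whose sampling rate does not match the rate bound to the audio playing thread, then mix when streams from more than one audio player are present — can be sketched as follows. The stream representation and function names are illustrative assumptions, not the framework's actual API:

```python
def process_streams(streams, thread_rate, resample, mix):
    """Resample each stream whose sampling rate differs from the rate bound
    to the audio playing thread, then mix when more than one stream exists."""
    out = [resample(s, thread_rate) if s["rate"] != thread_rate else s
           for s in streams]
    return mix(out) if len(out) > 1 else out[0]
```

With a single stream already at the thread's rate, the data passes through unchanged, mirroring the "no processing" case in the text.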
The resampling and mixing module calls the PCM file storage module to write the audio data output by the resampling and mixing module into the PCM file 2.
It should be noted that, because the resampling and mixing module processes the received audio data according to the actual situation, the audio data output by the resampling and mixing module may be audio data after resampling and/or mixing processing, or may be audio data passed through by the resampling and mixing module without any processing.
Similarly, the resampling and mixing module may call the PCM file storage module via a file write call function (e.g., fileWrite function) to complete writing of audio data to the PCM file. Exemplary parameters of the file write function may include, but are not limited to: audio data name, ID, and audio data to be stored.
In this embodiment, when the resampling and mixing module invokes the PCM file storage module to write the audio data into the PCM file, the parameters of the file write function may include: the audio data name, which indicates "recording the audio data after resampling and mixing processing", such as PCM_AFTER_MIX; the ID, which is the number of the audio HAL OUTPUT bound by the audio playback thread; and the audio data to be stored, which is the audio data output by the resampling and mixing module. Here, "recording the audio data after resampling and mixing processing" is merely an exemplary expression for storing audio data in the audio resampling and mixing link, which is not limited in this embodiment.
In this embodiment, the PCM file storage module may execute the data writing process shown in fig. 5 in response to the call of the resampling and mixing module. Referring to fig. 5, the process of storing audio data in the audio resampling and mixing link by the PCM file storage module provided in this embodiment still includes steps as described in S501 to S506, which are not described herein again.
When the PCM file storage module is called by the resampling and mixing module, the audio data name acquired by the PCM file storage module indicates the audio data after resampling and mixing processing (such as PCM_AFTER_MIX), and the ID acquired by the PCM file storage module is the number of the audio HAL OUTPUT bound by the audio playback thread.
Assuming that the number of the audio HAL OUTPUT bound by the audio playback thread is 29, the PCM file storage module concatenates the audio data name and the ID, obtaining a concatenation of the audio data name "PCM_AFTER_MIX" and the ID "29". The present embodiment does not specifically limit the joining character, which may be, for example, "_" or "&". Illustratively, the concatenated string may be "PCM_AFTER_MIX_29".
For example, the PCM file created in the audio resampling and mixing link may have the file name: PCM_AFTER_MIX_29_1104212942348.pcm. "PCM_AFTER_MIX_29" is the concatenated string of the audio data name and the ID, and indicates that the audio data stored in the PCM file is the audio data output by the resampling and mixing module, and that the number of the audio HAL OUTPUT bound by the audio playback thread is 29; "1104212942348" is the creation timestamp of the PCM file, indicating its creation time, namely November 4, 21:29:42.348.
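The file-name layout in this example can be reproduced with a short sketch. The helper below is illustrative (the function name and the lowercase ".pcm" suffix are assumptions, and the year passed to `datetime` is arbitrary since the file name stores no year), but the MMDDHHMMSSmmm timestamp layout matches "1104212942348" above:

```python
from datetime import datetime

def pcm_file_name(data_name, output_id, created):
    """Compose '<name>_<ID>_<MMDDHHMMSSmmm>.pcm' from a creation time."""
    stamp = created.strftime("%m%d%H%M%S") + f"{created.microsecond // 1000:03d}"
    return f"{data_name}_{output_id}_{stamp}.pcm"
```

For example, `pcm_file_name("PCM_AFTER_MIX", 29, datetime(2021, 11, 4, 21, 29, 42, 348000))` yields "PCM_AFTER_MIX_29_1104212942348.pcm".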
The PCM file storage module determines whether a PCM file whose file name contains the string has already been created in the system, e.g., whether a PCM file whose file name contains the first string "PCM_AFTER_MIX_29" has been created. For example, if a PCM file with the file name "PCM_AFTER_MIX_29_1104212942348.pcm" already exists in the system, the PCM file storage module determines that a PCM file whose file name contains the string "PCM_AFTER_MIX_29" has been created in the system. In this case, the PCM file storage module may directly write the audio data to be stored into the PCM file with the file name "PCM_AFTER_MIX_29_1104212942348.pcm".
If the PCM file storage module determines that no PCM file whose file name contains the first string has been created in the system, e.g., no PCM file whose file name contains the first string "PCM_AFTER_MIX_29" has been created, it needs to create such a PCM file. To do so, the PCM file storage module obtains the current system time and splices it onto the first string "PCM_AFTER_MIX_29"; for example, after splicing the timestamp "1104221003144" corresponding to the current time onto the first string, a second string is obtained. The PCM file storage module may then create the PCM file from the second string. The present embodiment does not specifically limit the joining character, which may be, for example, "_" or "&". Illustratively, the second string after splicing the timestamp may be "PCM_AFTER_MIX_29_1104221003144".
The PCM file storage module then appends the PCM file suffix ".pcm" to the second string to obtain the file name of the PCM file. For example, if the second string after splicing the timestamp is "PCM_AFTER_MIX_29_1104221003144", appending the suffix ".pcm" yields the file name "PCM_AFTER_MIX_29_1104221003144.pcm".
After determining the file name of the PCM file, the PCM file storage module creates the PCM file according to that file name. At this point, a PCM file with the file name "PCM_AFTER_MIX_29_1104221003144.pcm" exists in the system, i.e., a PCM file whose file name contains the first string "PCM_AFTER_MIX_29" has been created in the system. The PCM file storage module may then write the audio data to be stored into this PCM file, i.e., into the PCM file whose file name contains the first string "PCM_AFTER_MIX_29".
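The create-or-append decision described above (steps S501 to S506) reduces to a prefix match on existing file names. A minimal sketch under assumed names, with the file-system scan replaced by a plain list so the logic stands alone:

```python
def target_pcm_file(existing_names, first_string, timestamp):
    """Pick the PCM file to write to.

    Returns (file_name, created): an existing file whose name begins with
    first_string, or a newly composed name carrying the current timestamp.
    """
    for name in existing_names:
        if name.startswith(first_string + "_") and name.endswith(".pcm"):
            return name, False              # file already created: append
    return f"{first_string}_{timestamp}.pcm", True   # create a new file
```

A real implementation would list the storage directory instead of taking `existing_names` as a parameter and would open the chosen file in append mode.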
It should be noted that an audio playback thread is destroyed after it completes audio playback. Thus, multiple audio playback threads created at different times may be bound to the same OUTPUT of the audio HAL. Furthermore, for different audio playback threads bound to the same OUTPUT of the audio HAL, when the resampling and mixing module runs on any one of them, the audio data output by the resampling and mixing module is stored in the same PCM file. For example, assuming that a music thread and a game thread running simultaneously are both bound to the same OUTPUT of the audio HAL, the audio data output by the resampling and mixing module running on the music thread and the audio data output by the resampling and mixing module running on the game thread are stored in the same PCM file 2. The audio data name included in the file name of this PCM file indicates "recording the audio data after resampling and mixing processing", such as "PCM_AFTER_MIX"; the ID included in the file name of this PCM file is the number of the same OUTPUT of the audio HAL to which the audio playback threads are bound. For example, during the playing of audio by a media playing application, the file name of the PCM file created in the audio resampling and mixing link may be recorded as "PCM_AFTER_MIX_<ID>_<creation timestamp>.pcm".
Because the audio data storage method provided in this application is applied to scenarios in which maintenance personnel or research personnel perform audio fault positioning on a mobile phone, there is usually a situation in which multiple audio playback threads are successively bound to the same OUTPUT of the audio HAL.
It is assumed that the first audio playback thread and the second audio playback thread bound to the same OUTPUT of the audio HAL are created sequentially, e.g. the second audio playback thread is created after the first audio playback thread is destroyed. For example, after the first audio playing thread is destroyed, the PCM file storage module may write null data into the PCM file 2 corresponding to the OUTPUT according to an audio data processing period matched with the OUTPUT bound by the first audio playing thread, until the second audio playing thread is bound with the OUTPUT after being created, so as to ensure accuracy of relative time of the audio data stored in the PCM file 2.
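Writing null data at the OUTPUT's own rate keeps file offsets proportional to wall-clock time. The sketch below illustrates how much silence covers a given gap; the parameter names are assumptions, and 16-bit stereo at 48 kHz is used only as an example:

```python
def silence_for_gap(gap_ms, sample_rate, channels, bytes_per_sample):
    """Zero-valued PCM bytes spanning gap_ms of wall time, so that the
    relative time of audio data written later in the file stays accurate."""
    frames = sample_rate * gap_ms // 1000          # frames covering the gap
    return b"\x00" * (frames * channels * bytes_per_sample)
```

For instance, a 500 ms gap at 48 kHz, 2 channels, 2 bytes per sample needs 24 000 frames, i.e. 96 000 bytes of silence appended to PCM file 2.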
As another example, to distinguish the PCM files 2 corresponding to different audio playback threads bound to the same OUTPUT, the ID in the file name of PCM file 2 may be set to a concatenation of the OUTPUT number and an order number, e.g., 29_1, where the order number identifies how many times the OUTPUT has been bound so far. In this case, when the resampling and mixing module calls the PCM file storage module to write the audio data into the PCM file, the parameters of the file write function may include: the audio data name, which indicates "recording the audio data after resampling and mixing processing", such as PCM_AFTER_MIX; the ID, which is the concatenation of the number and the order number of the audio HAL OUTPUT bound by the audio playback thread; and the audio data to be stored, which is the audio data output by the resampling and mixing module. The manner in which the PCM file storage module stores the resampled and mixed audio data in this scenario is similar to that described above, and is not repeated here.
As a further example, to distinguish the PCM files 2 corresponding to different audio playback threads bound to the same OUTPUT, the corresponding PCM file 2 may also be exported to a personal computer after the previous audio playback thread is destroyed. In this case, once the next audio playback thread is bound to the OUTPUT and the resampling and mixing module calls the PCM file storage module to write audio data into a PCM file, the PCM file storage module creates a new PCM file 2.
An electronic device (e.g., a personal computer) may, according to the storage address of the PCM files, export from the mobile phone the PCM files whose file names include the string "PCM_AFTER_MIX", i.e., the PCM files created in the audio resampling and mixing link. The electronic device analyzes these PCM files with professional software to determine whether an audio fault, such as pop noise or other noise, exists in any of them. If an audio fault exists in any PCM file, the electronic device can obtain not only the relative time of the audio fault within the PCM file (i.e., the i-th to j-th seconds, with i < j) but also, from the file name of the PCM file, its creation time. The electronic device can thus calculate the absolute time at which the audio fault occurred from the creation time of the PCM file and the relative time of the audio fault within it: taking the PCM file creation time as the base, the relative time of the audio fault within the PCM file is added to obtain the absolute time of the fault. Furthermore, the electronic device can export the mobile phone log for the corresponding time period according to the absolute time of the audio fault, so that research personnel or after-sales personnel, combining the log, can quickly and effectively locate the mobile phone audio fault in the audio resampling and mixing link and resolve the user's problem as soon as possible.
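The accumulation described above (creation timestamp plus relative offset) can be sketched as below. The year must be supplied externally because the MMDDHHMMSSmmm file-name timestamp does not record it, and the function name is an assumption:

```python
from datetime import datetime, timedelta

def fault_window(creation_stamp, year, start_s, end_s):
    """Absolute start/end of a fault heard from second start_s to second
    end_s of a PCM file whose name carries the 'MMDDHHMMSSmmm' timestamp."""
    created = datetime.strptime(f"{year}{creation_stamp}", "%Y%m%d%H%M%S%f")
    return created + timedelta(seconds=start_s), created + timedelta(seconds=end_s)
```

For example, a fault in seconds 3 to 5 of a file created at "1104212942348" (November 4, 21:29:42.348) occurred between 21:29:45.348 and 21:29:47.348 absolute time.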
S405, the resampling and mixing module sends the output audio data to the sound effect processing module.
The present embodiment does not limit the timing of S404 and S405.
S406, the sound effect processing module performs sound effect processing on the received audio data, and calls the PCM file storage module to write the audio data subjected to the sound effect processing into the PCM file 3, wherein the PCM file 3 is marked as a PCM file for recording the audio data subjected to the sound effect processing.
The operation of the sound effect processing module for performing sound effect processing on the received audio data may refer to the sound effect processing operation in the prior art, which is not described herein.
The sound effect processing module calls the PCM file storage module to write the audio data output by the sound effect processing module into the PCM file 3. Similarly, the sound effect processing module may call the PCM file storage module through a file write call function (such as a FileWrite function) to complete the operation of writing the audio data into the PCM file. Exemplary parameters of the file write function may include, but are not limited to: the audio data name, the ID, and the audio data to be stored.
In this embodiment, when the sound effect processing module invokes the PCM file storage module to write the audio data into the PCM file, the parameters of the file write function may include: the audio data name, which indicates "recording the audio data after sound effect processing", such as PCM_AFTER_EFFECT_HANDLE; the ID, which is the number of the audio HAL OUTPUT bound by the audio playback thread; and the audio data to be stored, which is the audio data output by the sound effect processing module. Here, "recording the audio data after sound effect processing" is merely an exemplary expression for storing audio data in the audio sound effect processing link, which is not limited in this embodiment.
In this embodiment, the PCM file storage module may perform a data writing process as shown in fig. 5 in response to a call from the sound effect processing module. Referring to fig. 5, the process of storing audio data in the audio sound processing link by the PCM file storage module provided in this embodiment still includes steps as described in S501 to S506, which are not described herein again.
When the PCM file storage module is called by the sound effect processing module, the audio data name acquired by the PCM file storage module indicates the audio data after sound effect processing (such as PCM_AFTER_EFFECT_HANDLE), and the ID acquired by the PCM file storage module is the number of the audio HAL OUTPUT bound by the audio playback thread.
Assuming that the number of the audio HAL OUTPUT bound by the audio playback thread is 29, the PCM file storage module concatenates the audio data name and the ID, obtaining a concatenation of the audio data name "PCM_AFTER_EFFECT_HANDLE" and the ID "29". The present embodiment does not specifically limit the joining character, which may be, for example, "_" or "&". Illustratively, the concatenated string may be "PCM_AFTER_EFFECT_HANDLE_29".
Illustratively, the PCM file created in the audio sound effect processing link may have the file name: PCM_AFTER_EFFECT_HANDLE_29_1104212943125.pcm. "PCM_AFTER_EFFECT_HANDLE_29" is the concatenated string of the audio data name and the ID, and indicates that the audio data stored in the PCM file is the audio data output by the sound effect processing module, and that the number of the audio HAL OUTPUT bound by the audio playback thread is 29; "1104212943125" is the creation timestamp of the PCM file, indicating its creation time, namely November 4, 21:29:43.125.
The PCM file storage module determines whether a PCM file whose file name contains the string has already been created in the system, e.g., whether a PCM file whose file name contains the first string "PCM_AFTER_EFFECT_HANDLE_29" has been created. For example, if a PCM file with the file name "PCM_AFTER_EFFECT_HANDLE_29_1104212943125.pcm" already exists in the system, the PCM file storage module determines that a PCM file whose file name contains the first string "PCM_AFTER_EFFECT_HANDLE_29" has been created in the system. In this case, the PCM file storage module may directly write the audio data to be stored into the PCM file with the file name "PCM_AFTER_EFFECT_HANDLE_29_1104212943125.pcm".
If the PCM file storage module determines that no PCM file whose file name contains the first string has been created in the system, e.g., no PCM file whose file name contains the first string "PCM_AFTER_EFFECT_HANDLE_29" has been created, it needs to create such a PCM file. To do so, the PCM file storage module obtains the current system time and splices it onto the first string "PCM_AFTER_EFFECT_HANDLE_29"; for example, after splicing the timestamp "1104221004169" corresponding to the current time onto the first string, a second string is obtained. The PCM file storage module may then create the PCM file from the second string. The present embodiment does not specifically limit the joining character, which may be, for example, "_" or "&". Illustratively, the second string after splicing the timestamp may be "PCM_AFTER_EFFECT_HANDLE_29_1104221004169".
The PCM file storage module then appends the PCM file suffix ".pcm" to the second string to obtain the file name of the PCM file. For example, if the second string after splicing the timestamp is "PCM_AFTER_EFFECT_HANDLE_29_1104221004169", appending the suffix ".pcm" yields the file name "PCM_AFTER_EFFECT_HANDLE_29_1104221004169.pcm".
After determining the file name of the PCM file, the PCM file storage module creates the PCM file according to that file name. At this point, a PCM file with the file name "PCM_AFTER_EFFECT_HANDLE_29_1104221004169.pcm" exists in the system, i.e., a PCM file whose file name contains the first string "PCM_AFTER_EFFECT_HANDLE_29" has been created in the system. The PCM file storage module may then write the audio data to be stored into this PCM file, i.e., into the PCM file whose file name contains the first string "PCM_AFTER_EFFECT_HANDLE_29".
It should be noted that an audio playback thread is destroyed after it completes audio playback. Thus, multiple audio playback threads created at different times may be bound to the same OUTPUT of the audio HAL. Furthermore, for different audio playback threads bound to the same OUTPUT of the audio HAL, when the sound effect processing module runs on any one of them, the audio data output by the sound effect processing module is stored in the same PCM file. For example, assuming that a music thread and a game thread running simultaneously are both bound to the same OUTPUT of the audio HAL, the audio data output by the sound effect processing module running on the music thread and the audio data output by the sound effect processing module running on the game thread are stored in the same PCM file 3. The audio data name included in the file name of this PCM file indicates "recording the audio data after sound effect processing", such as "PCM_AFTER_EFFECT_HANDLE"; the ID included in the file name of this PCM file is the number of the same OUTPUT of the audio HAL to which the audio playback threads are bound. For example, during the playing of audio by a media playing application, the file name of the PCM file created in the audio sound effect processing link may be recorded as "PCM_AFTER_EFFECT_HANDLE_<ID>_<creation timestamp>.pcm".
Because the audio data storage method provided in this application is applied to scenarios in which maintenance personnel or research personnel perform audio fault positioning on a mobile phone, there is usually a situation in which multiple audio playback threads are successively bound to the same OUTPUT of the audio HAL.
It is assumed that the first audio playback thread and the second audio playback thread bound to the same OUTPUT of the audio HAL are created sequentially, e.g. the second audio playback thread is created after the first audio playback thread is destroyed. For example, after the first audio playing thread is destroyed, the PCM file storage module may write null data into the PCM file 3 corresponding to the OUTPUT according to an audio data processing period matched with the OUTPUT bound by the first audio playing thread, until the second audio playing thread is bound with the OUTPUT after being created, so as to ensure accuracy of relative time of the audio data stored in the PCM file 3.
As another example, to distinguish the PCM files 3 corresponding to different audio playback threads bound to the same OUTPUT, the ID in the file name of PCM file 3 may be set to a concatenation of the OUTPUT number and an order number, e.g., 29_1, where the order number identifies how many times the OUTPUT has been bound so far. In this case, when the sound effect processing module calls the PCM file storage module to write the audio data into the PCM file, the parameters of the file write function may include: the audio data name, which indicates "recording the audio data after sound effect processing", such as PCM_AFTER_EFFECT_HANDLE; the ID, which is the concatenation of the number and the order number of the audio HAL OUTPUT bound by the audio playback thread; and the audio data to be stored, which is the audio data output by the sound effect processing module. The manner in which the PCM file storage module stores the sound-effect-processed audio data in this scenario is similar to that described above, and is not repeated here.
As a further example, to distinguish the PCM files 3 corresponding to different audio playback threads bound to the same OUTPUT, the corresponding PCM file 3 may also be exported to a personal computer after the previous audio playback thread is destroyed. In this case, once the next audio playback thread is bound to the OUTPUT and the sound effect processing module calls the PCM file storage module to write audio data into a PCM file, the PCM file storage module creates a new PCM file 3.
An electronic device, such as a personal computer, may, according to the storage address of the PCM files, export from the mobile phone the PCM files whose file names include the string "PCM_AFTER_EFFECT_HANDLE", i.e., the PCM files created in the audio sound effect processing link. The electronic device analyzes these PCM files with professional software to determine whether an audio fault, such as pop noise or other noise, exists in any of them. If an audio fault exists in any PCM file, the electronic device can obtain not only the relative time of the audio fault within the PCM file (i.e., the i-th to j-th seconds, with i < j) but also, from the file name of the PCM file, its creation time. The electronic device can thus calculate the absolute time at which the audio fault occurred from the creation time of the PCM file and the relative time of the audio fault within it: taking the PCM file creation time as the base, the relative time of the audio fault within the PCM file is added to obtain the absolute time of the fault. Furthermore, the electronic device can export the mobile phone log for the corresponding time period according to the absolute time of the audio fault, so that research personnel or after-sales personnel, combining the log, can quickly and effectively locate the mobile phone audio fault in the audio sound effect processing link and resolve the user's problem as soon as possible.
S407, the sound effect processing module sends the sound effect processed audio data to the audio HAL.
The present embodiment does not limit the timing of S406 and S407.
S408, the audio HAL invokes an audio driver in the kernel layer.
S409, the audio driver calls a loudspeaker to play the audio.
The audio HAL calls the audio driver, the audio driver calls the speaker, and when the speaker plays the audio in response to the call of the audio driver, corresponding processing is performed, and the specific processing process may refer to the technical scheme in the embodiment of the prior art, which is not described in detail in this application.
In this step, the audio driver may also call other audio output devices, such as a receiver, an earphone, and so on, to play audio, which is not limited in this embodiment.
In the above embodiment, when the mobile phone plays audio data, the audio data processed in each link is stored in the corresponding PCM file, specifically: the audio data issued by the media playing application is saved to PCM file 1, marked as the PCM file "recording the audio data issued by the application"; the resampled and mixed audio data is saved to PCM file 2, marked as the PCM file "recording the audio data after resampling and mixing processing"; and the sound-effect-processed audio data is saved to PCM file 3, marked as the PCM file "recording the audio data after sound effect processing". The file name of each of these PCM files includes a string representing its creation timestamp. Thus, the specific time at which an audio fault (e.g., pop noise, other noise, etc.) occurred can be calculated from the creation timestamp of the PCM file and the relative time (i.e., the i-th to j-th seconds, with i < j) of the audio fault within the PCM file.
Based on the above embodiments, as an alternative implementation manner, the resampling and mixing module may be divided into a resampling module and a mixing module. The resampling module and the mixing module are operated on the audio playing thread, and the serial numbers of the audio HAL OUTPUT bound by the audio playing thread can be obtained.
The resampling module is used for resampling audio data, and specifically processes the received audio data according to the actual situation: when the sampling rate of the received audio data does not match the sampling rate allowed by the audio playback thread, the resampling module resamples the received audio data; when the sampling rate of the received audio data matches the sampling rate allowed by the audio playback thread, the resampling module passes it through unchanged (transparent transmission).
The audio mixing module is used for mixing at least two channels of audio data, and specifically processes the received audio data according to the actual situation: when at least two channels of audio data from different audio players are received, the audio mixing module mixes the received channels; when only one channel of audio data is received, the audio mixing module passes it through unchanged (transparent transmission).
The audio data playing process may include: in response to a received user operation, the media playing application creates an audio player; the audio player reads the audio data to be played of the media playing application, and calls the PCM file storage module to write the audio data to be played into the PCM file 1, where PCM file 1 is marked as the PCM file "recording the audio data issued by the application"; the audio player sends the read audio data to the resampling module; the resampling module processes the received audio data according to the actual situation, and calls the PCM file storage module to write the audio data output by the resampling module into the PCM file 21, where PCM file 21 is marked as the PCM file "recording the resampled audio data"; the resampling module sends its output audio data to the audio mixing module; the audio mixing module processes the received audio data according to the actual situation, and calls the PCM file storage module to write the audio data output by the audio mixing module into the PCM file 22, where PCM file 22 is marked as the PCM file "recording the audio data after mixing processing"; the audio mixing module sends its output audio data to the sound effect processing module; the sound effect processing module performs sound effect processing on the received audio data, and calls the PCM file storage module to write the sound-effect-processed audio data into the PCM file 3, where PCM file 3 is marked as the PCM file "recording the audio data after sound effect processing"; the sound effect processing module sends the sound-effect-processed audio data to the audio HAL; the audio HAL calls the audio driver in the kernel layer; and the audio driver calls the loudspeaker to play the audio.
Thus, in this embodiment, the links through which the mobile phone plays audio data may be divided into: the link in which the audio player reads the audio data issued by the application, the audio resampling link, the audio mixing link, and the audio sound effect processing link. That is, the audio resampling link and the audio mixing link are two independent audio processing links. When the mobile phone plays audio data, the audio data processed in each link is stored in the corresponding PCM file, specifically: the audio data issued by the media playing application is saved to PCM file 1, marked as the PCM file "recording the audio data issued by the application"; the resampled audio data is saved to PCM file 21, marked as the PCM file "recording the resampled audio data"; the mixed audio data is saved to PCM file 22, marked as the PCM file "recording the audio data after mixing processing"; and the sound-effect-processed audio data is saved to PCM file 3, marked as the PCM file "recording the audio data after sound effect processing". The file name of each of these PCM files includes a string representing its creation timestamp. Thus, the specific time at which an audio fault (e.g., pop noise, other noise, etc.) occurred can be calculated from the creation timestamp of the PCM file and the relative time (i.e., the i-th to j-th seconds, with i < j) of the audio fault within the PCM file.
The operation of the PCM file storage module to be called to write audio data into the PCM file may be described in detail in the foregoing embodiments, and will not be described herein. Other parts of this embodiment, which are not explained in detail, can also be referred to the foregoing examples, and are not described in detail herein.
The embodiment of the application also provides an audio fault time positioning method. Specifically, an electronic device (such as a personal computer) can determine the specific point in time at which an audio fault occurred from the relative time of the audio fault within the PCM file and the creation timestamp of the PCM file, so that researchers and developers can quickly and effectively locate the audio fault by combining the device log at that specific point in time, which speeds up the resolution of audio faults and improves the user experience.
Referring to fig. 6, the flow of the audio fault time positioning method provided in the present application specifically includes:
s601, the electronic device acquires the PCM file.
The PCM file is a PCM file stored in an electronic device (such as a mobile phone) in which an audio fault has occurred, and it stores audio data related to audio playing of that electronic device.
Further, the attribute value of the PCM file includes a file creation time.
Illustratively, the file creation time is stored in the attribute value of the "creation time" attribute of the PCM file.
As another example, the file name of the PCM file includes the file creation time.
Wherein the file creation time may be represented in the form of a time stamp, e.g. "1104212941397". Here, "1104212941397" is merely an exemplary expression of the time stamp, and the expression form of the time stamp is not limited in this embodiment.
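Reading the examples in this embodiment, the stamp "1104212941397" appears to encode month, day, hour, minute, second, and millisecond as MMDDHHMMSSmmm (here, 21:29:41.397 on November 4). Below is a hypothetical decoder under that assumption; the year is not encoded in the stamp, so it must be supplied separately:

```python
from datetime import datetime

def parse_creation_stamp(stamp, year=2021):
    """Decode an MMDDHHMMSSmmm creation time stamp such as '1104212941397'.

    The MMDDHHMMSSmmm layout is inferred from the examples in this
    embodiment; the `year` parameter is a hypothetical addition, since
    the stamp itself carries no year.
    """
    month, day = int(stamp[0:2]), int(stamp[2:4])
    hour, minute, second = int(stamp[4:6]), int(stamp[6:8]), int(stamp[8:10])
    millis = int(stamp[10:13])
    return datetime(year, month, day, hour, minute, second, millis * 1000)
```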
In an application scenario, in a process of playing audio data, the audio data processed in each link is stored in a corresponding PCM file by an electronic device (such as a mobile phone). For example, audio data issued by the media playing application is saved to PCM file 1, and PCM file 1 is marked as a PCM file of "recording audio data issued by the application"; the audio data subjected to resampling and mixing processing is stored in a PCM file 2, and the PCM file 2 is marked as a PCM file for recording the audio data subjected to resampling and mixing processing; the audio data with sound effect processing is stored in the PCM file 3, and the PCM file 3 is marked as a PCM file of "audio data with sound effect processing".
Each PCM file carries its creation time stamp; for example, the time stamp is added to its file name.
For example, the PCM file created in the link in which the audio player reads the audio data issued by the application may have the file name pcm_from_app_1_1104212941397.pcm. Here, "pcm_from_app_1" is a character string spliced from the audio data name and an ID, indicating that the audio data stored in the PCM file is the audio data issued by the media playing application and that the corresponding audio player ID is 1; "1104212941397" is the creation time stamp of the PCM file, indicating that the PCM file was created at 21:29:41.397 on November 4. Here, "1104212941397" is merely an exemplary expression of the time stamp, and the expression form of the time stamp is not limited in this embodiment.
As another example, the PCM file created in the audio resampling and mixing link may have the file name pcm_after_mix_29_1104212942348.pcm. Here, "pcm_after_mix_29" is a character string spliced from the audio data name and an ID, indicating that the audio data stored in the PCM file is the audio data output by the resampling and mixing module and that the number of the audio HAL output bound to the audio playing thread is 29; "1104212942348" is the creation time stamp of the PCM file, indicating that the PCM file was created at 21:29:42.348 on November 4.
As a further example, the PCM file created in the audio sound effect processing link may have the file name pcm_after_effect_handle_29_1104212943125.pcm. Here, "pcm_after_effect_handle_29" is a character string spliced from the audio data name and an ID, indicating that the audio data stored in the PCM file is the audio data output by the sound effect processing module and that the number of the audio HAL output bound to the audio playing thread is 29; "1104212943125" is the creation time stamp of the PCM file, indicating that the PCM file was created at 21:29:43.125 on November 4.
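The three example file names above share the shape tag_id_stamp.pcm. A small parser written against those examples only; the actual naming rule on a real device may differ:

```python
import re

# Matches the example names above: <tag>_<id>_<13-digit stamp>.pcm
_NAME_RE = re.compile(r"^(?P<tag>[a-z_]+)_(?P<sid>\d+)_(?P<stamp>\d{13})\.pcm$",
                      re.IGNORECASE)

def split_pcm_filename(name):
    """Split a tagged PCM file name into (tag, stream id, creation stamp)."""
    m = _NAME_RE.match(name)
    if m is None:
        raise ValueError("not a tagged PCM file name: %s" % name)
    return m.group("tag"), int(m.group("sid")), m.group("stamp")
```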
From the storage address of the PCM file in the mobile phone and the audio data name in the PCM file name, the electronic device can identify the PCM file created in each link of the mobile phone's playing of the audio data.
S602, the electronic device parses the PCM file and determines whether an audio fault exists in the PCM file; if yes, S603 is executed, and if not, the flow ends.
The electronic device parses the PCM file with a professional analysis tool (such as a PCM audio analysis assistant) and determines whether an audio fault, such as a POP sound or noise, exists in the PCM file. If no audio fault exists in the PCM file, the processing flow for the PCM file ends.
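The embodiment relies on a professional analysis tool for this step. As a simplified, hypothetical stand-in, the sketch below flags abrupt sample-to-sample amplitude jumps in mono 16-bit little-endian PCM, the kind of step a POP sound or waveform truncation tends to produce; the threshold value is an arbitrary assumption.

```python
import struct

def find_discontinuities(pcm_bytes, sample_rate, threshold=20000):
    """Return relative times (seconds from file start) of abrupt jumps.

    A crude stand-in for a real analysis tool: a POP sound or truncation
    often appears as a large step between adjacent 16-bit samples.
    """
    n = len(pcm_bytes) // 2
    samples = struct.unpack("<%dh" % n, pcm_bytes[: n * 2])
    return [
        i / sample_rate
        for i in range(1, n)
        if abs(samples[i] - samples[i - 1]) > threshold
    ]
```

The times returned are relative to the start of the PCM file, which is exactly the "relative time" that S603 combines with the file creation time.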
S603, the electronic device acquires the creation time of the PCM file and determines the relative time of the audio fault in the PCM file.
The electronic device determines the creation time of the PCM file according to the creation time stamp carried by the PCM file. Illustratively, when the creation time stamp of the PCM file is "1104212941397", the creation time of the PCM file is 21:29:41.397 on November 4.
Under the condition that the audio fault exists in the PCM file, the electronic equipment can directly determine the relative time of the audio fault in the PCM file according to the analysis result of the PCM file.
Referring to the PCM file parsing example shown in fig. 7, a POP sound, embodied as a truncation of the waveform, appears between the 3rd and 4th seconds of the PCM file. That is, the relative time of the audio fault in the PCM file is the 3rd to 4th seconds.
Referring to the PCM file parsing example shown in fig. 8, noise (a metallic sound), embodied as a large variation in the amplitude of the waveform, appears between 2.475 and 3.44 seconds of the PCM file. That is, the relative time of the audio fault in the PCM file is 2.475-3.44 seconds.
S604, the electronic device determines the absolute time of occurrence of the audio fault according to the creation time of the PCM file and the relative time of the audio fault in the PCM file.
By superimposing the relative time of the audio fault in the PCM file on the creation time of the PCM file, the electronic device can calculate the absolute time at which the audio fault occurred.
Referring to the PCM file parsing example shown in fig. 7, the relative time of the POP sound in the PCM file is the 3rd to 4th seconds. Suppose the creation time of the PCM file is 21:29:41.397 on November 4. Superimposing the relative time of the POP sound (the 3rd to 4th seconds) on the creation time of the PCM file, the absolute time at which the POP sound occurred is calculated to be from 21:29:44.397 to 21:29:45.397 on November 4.
Referring to the PCM file parsing example shown in fig. 8, the relative time of the metallic sound in the PCM file is 2.475-3.44 seconds. Suppose the creation time of the PCM file is 21:29:41.397 on November 4. Superimposing the relative time of the metallic sound (2.475-3.44 seconds) on the creation time of the PCM file, the absolute time at which the metallic sound occurred is calculated to be from 21:29:43.872 to 21:29:44.837 on November 4.
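The superposition in S604 is plain time arithmetic; a minimal sketch reproducing the two worked examples above (the function name is an assumption for illustration):

```python
from datetime import datetime, timedelta

def absolute_fault_window(creation_time, rel_start_s, rel_end_s):
    """Superimpose the fault's relative time on the file's creation time."""
    return (creation_time + timedelta(seconds=rel_start_s),
            creation_time + timedelta(seconds=rel_end_s))
```

With the fig. 7 numbers (creation time 21:29:41.397, relative time 3-4 seconds) this yields 21:29:44.397 to 21:29:45.397, matching the text above.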
For each PCM file carrying a creation time stamp, the electronic device can analyze it as described in steps S601 to S604 to determine whether an audio fault exists in the PCM file. Furthermore, when the electronic device determines that an audio fault exists in the PCM file, it can determine the specific time point at which the audio fault occurred according to the relative time of the audio fault in the PCM file and the creation time stamp of the PCM file.
Therefore, the electronic device can analyze the PCM files created in each processing link while the mobile phone plays the audio data, such as the PCM file marked as "recording the audio data issued by the application", the PCM file marked as "audio data after resampling and mixing processing", and the PCM file marked as "audio data after sound effect processing". The file name of each of these PCM files includes a file creation time stamp. Whichever processing link the audio fault occurs in while the mobile phone plays audio data, the electronic device can determine the specific time point of the fault by the method described above.
After determining the absolute time at which the audio fault occurred, the electronic device can obtain the mobile phone log for the corresponding time period according to that absolute time, that is, according to the specific time point at which the audio fault occurred, and locate the audio fault of the mobile phone by combining the mobile phone log. This speeds up the localization of the audio fault of the mobile phone so that it can be resolved as early as possible.
In one example, fig. 9 shows a schematic block diagram of an apparatus 900 of an embodiment of the present application. The apparatus 900 may include a processor 901 and transceiver/transceiver pins 902, and optionally a memory 903.
The various components of apparatus 900 are coupled together by a bus 904, wherein bus 904 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are referred to in the figures as bus 904.
Optionally, the memory 903 may be used to store instructions in the audio data storage method embodiments or the audio time to failure localization method embodiments described above. The processor 901 is operable to execute instructions in the memory 903 and control the receive pin to receive signals and the transmit pin to transmit signals.
The apparatus 900 may be an electronic device or a chip of an electronic device in the above-described audio data storage method embodiment or audio time-to-failure positioning method embodiment.
All relevant contents of each step involved in the above audio data storage method embodiment or the audio fault time positioning method embodiment may be referred to the functional description of the corresponding functional module, which is not described herein again.
The steps performed by the terminal 100 in the audio data storage method provided in the embodiment of the present application may also be performed by a chip system included in the terminal 100, where the chip system may include a processor and a Bluetooth chip. The chip system may be coupled to a memory such that the chip system, when running, invokes a computer program stored in the memory, implementing the steps performed by the terminal 100 described above. The processor in the chip system can be an application processor or a non-application processor.
The present embodiment also provides a computer storage medium having stored therein computer instructions that, when executed on an electronic device, cause the electronic device to execute the above-described related method steps to implement the audio data storage method or the audio time-to-failure positioning method in the above-described embodiments.
The present embodiment also provides a computer program product which, when run on a computer, causes the computer to perform the above-described related steps to implement the audio data storage method or the audio time-to-failure localization method in the above-described embodiments.
In addition, embodiments of the present application also provide an apparatus, which may be specifically a chip, a component, or a module, and may include a processor and a memory connected to each other; the memory is configured to store computer-executable instructions, and when the device is running, the processor may execute the computer-executable instructions stored in the memory, so that the chip executes the audio data storage method or the audio fault time positioning method in the above method embodiments.
The electronic device (such as a mobile phone), the computer storage medium, the computer program product or the chip provided in this embodiment are used to execute the corresponding method provided above, so that the beneficial effects thereof can be referred to the beneficial effects in the audio data storage method provided above, and will not be described herein.
The electronic device (such as a personal computer), the computer storage medium, the computer program product or the chip provided in this embodiment are used to execute the corresponding method provided above, so the beneficial effects thereof can be referred to the beneficial effects in the audio failure time positioning method provided above, and will not be described herein.
It will be appreciated by those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The above embodiments are merely for illustrating the technical solution of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the corresponding technical solutions from the scope of the technical solutions of the embodiments of the present application.

Claims (17)

1. An audio failure time localization method, comprising:
acquiring a Pulse Code Modulation (PCM) file stored in electronic equipment; the PCM file comprises audio data stored by the electronic equipment in the process of playing the audio data, and the attribute value of the PCM file comprises file creation time;
determining the relative time at which an audio fault occurs in the PCM file by parsing the PCM file;
determining an absolute time at which the audio fault occurs according to the relative time and the file creation time; the absolute time is used for acquiring a corresponding electronic equipment log so as to combine the electronic equipment log to locate audio faults.
2. The method of claim 1, wherein the relative time is from the ith second to the jth second; the file creation time is t 0;
wherein said determining an absolute time at which said audio failure occurred based on said relative time and said file creation time comprises:
the absolute time at which the audio failure occurs is determined as (time t0 + i seconds) to (time t0 + j seconds).
3. The method of claim 1, wherein the file creation time is included in a file name of the PCM file.
4. The method of claim 1, wherein the audio fault comprises at least one of:
POP sound fault and noise fault.
5. The method of claim 1, further comprising, after said determining an absolute time at which said audio fault occurred:
acquiring an electronic equipment log corresponding to the absolute time of occurrence of the audio fault;
and carrying out audio fault positioning by combining the electronic equipment log.
6. An audio fault time positioning method is characterized by being applied to first electronic equipment and comprising the following steps of:
acquiring first audio data in the process of playing the audio data by the first electronic equipment;
processing the first audio data to obtain second audio data; the second audio data is used for audio playing;
determining a first PCM file corresponding to the second audio data;
writing the second audio data into the first PCM file; wherein, the attribute value of the first PCM file comprises the file creation time of the first PCM file;
transmitting the first PCM file to a second electronic device, so that the second electronic device determines a first relative time when a first audio fault occurs in the first PCM file by analyzing the first PCM file, and determines an absolute time when the first audio fault occurs according to the first relative time and a file creation time of the first PCM file; the absolute time is used for acquiring a corresponding first electronic equipment log so as to combine the first electronic equipment log to locate audio faults.
7. The method of claim 6, further comprising, after the first audio data is acquired:
determining a second PCM file corresponding to the first audio data;
writing the first audio data into the second PCM file; wherein the attribute value of the second PCM file includes a file creation time of the second PCM file;
and sending the second PCM file to the second electronic device, so that the second electronic device determines a second relative time when a second audio fault occurs in the second PCM file by analyzing the second PCM file, and determines the absolute time when the second audio fault occurs according to the second relative time and the file creation time of the second PCM file.
8. The method according to claim 6 or 7, wherein the file creation time is included in a file name of the PCM file.
9. The method of claim 8, wherein the determining the first PCM file corresponding to the second audio data comprises:
judging whether a first PCM file corresponding to the second audio data exists or not according to the source of the second audio data;
If not, a new PCM file is created as a first PCM file corresponding to the second audio data.
10. The method of claim 9, wherein creating a new PCM file comprises:
determining a first character string; wherein the first string is used to represent the source of the second audio data;
acquiring the current time of a system, and splicing the first character string and the current time of the system to obtain a second character string;
and creating a new PCM file according to the second character string.
11. The method as recited in claim 10, further comprising:
acquiring a second audio data name and an ID corresponding to the second audio data name;
the determining, according to the source of the second audio data, whether the first PCM file corresponding to the second audio data exists includes:
judging whether a target PCM file exists, wherein the file name of the target PCM file comprises a splicing character string of the second audio data name and an ID corresponding to the second audio data name;
the determining the first character string includes:
and splicing the second audio data name and the ID corresponding to the second audio data name to obtain the first character string.
12. The method of claim 6, wherein the source of the first audio data or the second audio data comprises at least one of:
audio data issued by the media playing application, audio data after resampling and mixing processing, and audio data after sound effect processing.
13. The method of claim 6 or 7, further comprising, prior to the electronic device playing the audio data:
responsive to the received first operation, performing ROOT processing on the electronic equipment;
and responding to the received second operation, and starting a PCM file storage function in the audio authority of the electronic equipment.
14. An audio fault time positioning method is characterized by being applied to first electronic equipment and comprising the following steps of:
acquiring first audio data issued by a media playing application in the process of playing the audio data by the first electronic equipment;
determining a first PCM file corresponding to the first audio data, and writing the first audio data into the first PCM file; wherein, the attribute value of the first PCM file comprises the file creation time of the first PCM file;
resampling and/or mixing the first audio data to obtain second audio data;
Determining a second PCM file corresponding to the second audio data, and writing the second audio data into the second PCM file; wherein the attribute value of the second PCM file includes a file creation time of the second PCM file;
performing sound effect processing on the second audio data to obtain third audio data;
determining a third PCM file corresponding to the third audio data, and writing the third audio data into the third PCM file; wherein, the attribute value of the third PCM file comprises the file creation time of the third PCM file;
transmitting a target PCM file to a second electronic device, so that the second electronic device determines the relative time of occurrence of a target audio fault in the target PCM file by analyzing the target PCM file, and determines the absolute time of occurrence of the target audio fault according to the relative time and the file creation time of the target PCM file; wherein the target PCM file is one of the first PCM file, the second PCM file, and the third PCM file; the absolute time is used for acquiring a corresponding first electronic equipment log so as to combine the first electronic equipment log to locate audio faults.
15. An electronic device, comprising:
one or more processors;
a memory;
and one or more computer programs, wherein the one or more computer programs are stored on the memory, which when executed by the one or more processors, cause the electronic device to perform the audio time-to-failure localization method of any of claims 1-5.
16. An electronic device, comprising:
one or more processors;
a memory;
and one or more computer programs, wherein the one or more computer programs are stored on the memory, which when executed by the one or more processors, cause the electronic device to perform the audio time-of-failure localization method of any of claims 6-13, or to perform the audio time-of-failure localization method of claim 14.
17. A computer readable storage medium comprising a computer program, characterized in that the computer program, when run on an electronic device, causes the electronic device to perform the audio time-of-failure localization method of any of claims 1-5, or to perform the audio time-of-failure localization method of any of claims 6-13, or to perform the audio time-of-failure localization method of claim 14.
CN202111540680.2A 2021-12-16 2021-12-16 Audio fault time positioning method, electronic equipment and storage medium Active CN115022442B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111540680.2A CN115022442B (en) 2021-12-16 2021-12-16 Audio fault time positioning method, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111540680.2A CN115022442B (en) 2021-12-16 2021-12-16 Audio fault time positioning method, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115022442A CN115022442A (en) 2022-09-06
CN115022442B true CN115022442B (en) 2023-06-09

Family

ID=83064935

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111540680.2A Active CN115022442B (en) 2021-12-16 2021-12-16 Audio fault time positioning method, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115022442B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111078448A (en) * 2019-08-06 2020-04-28 华为技术有限公司 Method for processing audio abnormity and electronic equipment
CN111913867A (en) * 2020-09-07 2020-11-10 京东数字科技控股股份有限公司 Fault feedback method, device, equipment and storage medium
CN113672420A (en) * 2021-08-10 2021-11-19 荣耀终端有限公司 Fault detection method and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107786931B (en) * 2016-08-24 2021-03-23 中国电信股份有限公司 Audio detection method and device
CN106531202B (en) * 2016-11-14 2019-11-22 腾讯音乐娱乐(深圳)有限公司 A kind of audio-frequency processing method and device
CN106803426A (en) * 2016-12-07 2017-06-06 广州视源电子科技股份有限公司 Audio files storage method and system
CN113704014B (en) * 2021-08-24 2022-11-01 荣耀终端有限公司 Log acquisition system, method, electronic device and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111078448A (en) * 2019-08-06 2020-04-28 华为技术有限公司 Method for processing audio abnormity and electronic equipment
CN111913867A (en) * 2020-09-07 2020-11-10 京东数字科技控股股份有限公司 Fault feedback method, device, equipment and storage medium
CN113672420A (en) * 2021-08-10 2021-11-19 荣耀终端有限公司 Fault detection method and electronic equipment

Also Published As

Publication number Publication date
CN115022442A (en) 2022-09-06

Similar Documents

Publication Publication Date Title
EP3629561A1 (en) Data transmission method and system, and bluetooth headphone
CN111078448B (en) Method for processing audio abnormity and electronic equipment
CN109286725B (en) Translation method and terminal
WO2020062159A1 (en) Wireless charging method and electronic device
CN113890932A (en) Audio control method and system and electronic equipment
CN113630910A (en) Method for using cellular communication function and related device
CN112579038A (en) Built-in recording method and device, electronic equipment and storage medium
CN111382418A (en) Application program authority management method and device, storage medium and electronic equipment
CN114996168A (en) Multi-device cooperative test method, test device and readable storage medium
CN113971969A (en) Recording method, device, terminal, medium and product
CN116208704A (en) Sound processing method and device
CN115022442B (en) Audio fault time positioning method, electronic equipment and storage medium
CN113923305B (en) Multi-screen cooperative communication method, system, terminal and storage medium
CN111131019B (en) Multiplexing method and terminal for multiple HTTP channels
CN115531889A (en) Multi-application screen recording method and device
CN113867851A (en) Electronic equipment operation guide information recording method, electronic equipment operation guide information acquisition method and terminal equipment
CN111556406A (en) Audio processing method, audio processing device and earphone
CN110737765A (en) Dialogue data processing method for multi-turn dialogue and related device
EP4167580A1 (en) Audio control method, system, and electronic device
CN114006969B (en) Window starting method and electronic equipment
CN114416011B (en) Terminal, audio control method and storage medium
CN117492689B (en) Audio processing method and electronic equipment
CN117135532B (en) Audio data processing method, device and storage medium
WO2022078085A1 (en) Method and apparatus for measuring synchronization signal block, and mobile terminal
CN117714584A (en) Audio control method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant