CN116013334B - Audio data processing method, electronic device and storage medium - Google Patents
- Publication number
- CN116013334B CN116013334B CN202310042274.6A CN202310042274A CN116013334B CN 116013334 B CN116013334 B CN 116013334B CN 202310042274 A CN202310042274 A CN 202310042274A CN 116013334 B CN116013334 B CN 116013334B
- Authority
- CN
- China
- Prior art keywords
- audio data
- wearable device
- format
- preset
- range
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Telephone Function (AREA)
Abstract
An embodiment of the present application provides an audio data processing method, an electronic device, and a storage medium, relating to the technical field of data processing and applied to a wearable device. The method comprises the following steps: after audio data is received, pre-parsing the data header of the audio data to obtain pre-parsed information; judging whether the pre-parsed information belongs to a preset range that the wearable device can parse; if so, parsing and playing the audio data after a play instruction for the audio data is received; otherwise, generating prompt information indicating that the audio data is abnormal and, after a processing instruction input by the user for the prompt information is received, processing the audio data according to the processing instruction. By applying this embodiment, the audio data played by the wearable device can be guaranteed to be audio data that the wearable device can parse.
Description
Technical Field
The present application relates to the field of data processing technologies, and in particular, to an audio data processing method, an electronic device, and a storage medium.
Background
A wearable device, such as a smart watch or a smart band, can play locally stored audio data, but it can only parse and play audio data whose bit rate, sampling rate, and format fall within specified ranges. If the wearable device stores audio data that it cannot parse and the user directs the device to play that data, the device will parse the data repeatedly because no parsing result can ever be obtained, which can cause the device to freeze or stall during playback and degrade the user experience.
Therefore, a scheme needs to be provided to ensure that the audio data played by the wearable device is audio data that the wearable device can parse, so that the device does not freeze or stall when playing audio data, thereby improving the user experience.
Disclosure of Invention
In view of the above, the present application provides an audio data processing method, an electronic device, and a storage medium, so as to ensure that audio data played by a wearable device is audio data that can be parsed by the wearable device.
In a first aspect, an embodiment of the present application provides an audio data processing method, which is applied to a wearable device, where the method includes:
after receiving audio data, the wearable device pre-parses the data header of the audio data to obtain pre-parsed information, where the pre-parsed information includes: the bit rate of the audio data, the sampling rate of the audio data, and the format of the audio data;
the wearable device judges whether the pre-parsed information belongs to a preset range that the wearable device can parse;
if so, the wearable device parses and plays the audio data after receiving a play instruction for playing the audio data;
otherwise, the wearable device generates prompt information indicating that the audio data is abnormal and, after receiving a processing instruction input by the user for the prompt information, processes the audio data as indicated by the processing instruction.
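The first-aspect steps above can be sketched as follows. This is a hedged illustration, not the patent's implementation: the function names (`on_audio_received`, `in_preset_range`) and the concrete preset ranges are assumptions for demonstration only.

```python
# Illustrative sketch of the claimed flow; the preset ranges below are
# assumed example values, not values taken from the patent.
SUPPORTED_FORMATS = {"mp3", "wav", "aac"}   # assumed preset format range
BIT_RANGE = (32_000, 320_000)               # assumed preset bit range (bit/s)
SAMPLING_RANGE = (8_000, 48_000)            # assumed preset sampling range (Hz)

def in_preset_range(info: dict) -> bool:
    """Judge whether the pre-parsed header info is parsable by the device."""
    return (info["format"] in SUPPORTED_FORMATS
            and BIT_RANGE[0] <= info["bitrate"] <= BIT_RANGE[1]
            and SAMPLING_RANGE[0] <= info["sample_rate"] <= SAMPLING_RANGE[1])

def on_audio_received(info: dict) -> str:
    """Decide, right after reception, how the file will later be handled."""
    if in_preset_range(info):
        return "playable"          # parse and play once a play instruction arrives
    return "prompt_abnormal"       # generate a prompt and await a user instruction
```

The point of the pre-check is that the expensive full parse is never attempted on data the device already knows it cannot decode.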
As can be seen from the above, because the wearable device can only parse and play audio data whose bit rate, sampling rate, and format are within specified ranges, the wearable device pre-parses the data header of received audio data to obtain pre-parsed information including the bit rate, the sampling rate, and the format, and then judges whether that information belongs to the preset range that the device can parse. If it does, the device can parse the audio data; otherwise, it cannot. The scheme provided by this embodiment can therefore determine whether given audio data is audio data that the wearable device can parse.
In addition, if the pre-parsed information belongs to the preset range that the wearable device can parse, the device parses and plays the audio data after receiving a play instruction for it; otherwise, the device generates prompt information indicating that the audio data is abnormal and, after receiving a processing instruction input by the user for the prompt information, processes the audio data according to that instruction. In other words, audio data the device can parse is parsed and played directly, while audio data the device cannot parse is never parsed or played directly but is processed first. The device therefore does not freeze or stall when playing audio data, which improves the user experience.
In one embodiment of the present application, processing the audio data as indicated by the processing instruction, after receiving the processing instruction input by the user for the prompt information, includes:
after receiving a processing instruction, input by the user for the prompt information, that indicates the audio data should be deleted, the wearable device deletes the audio data;
after receiving a processing instruction, input by the user for the prompt information, that indicates the audio data should be format-converted, the wearable device converts the audio data into target audio data, where the format of the target audio data is a target format that the wearable device can parse, that is: the bit rate of the target audio data is within a preset bit range, its sampling rate is within a preset sampling range, and its format is within a preset format range.
As can be seen from the above, in the scheme provided by this embodiment, after receiving a user instruction indicating deletion, the device deletes the audio data that it cannot parse, which prevents the device from repeatedly parsing data for which no parsing result can ever be obtained and thus ensures that the device does not freeze or stall during playback. Alternatively, after receiving a user instruction indicating format conversion, the device converts the audio data into target audio data. Because the target audio data is in a target format that the wearable device can parse, the device can parse and play it after conversion, which likewise avoids repeated parsing and ensures that the device does not freeze or stall during playback.
In one embodiment of the present application, the method further includes:
in the process of converting the audio data into target audio data which can be analyzed by the wearable device, the wearable device judges whether the audio data can be successfully converted into target audio data;
and if the audio data cannot be successfully converted into the target audio data, the wearable device deletes the audio data.
As can be seen from the above, if the audio data cannot be successfully converted into the target audio data, the wearable device still cannot parse it, so the audio data can be deleted. Deleting the audio data prevents the device from repeatedly parsing data for which no parsing result can ever be obtained, and thus ensures that the device does not freeze or stall when playing audio data.
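The two processing instructions, together with the fallback deletion when conversion fails, can be sketched as a small dispatcher. The name `handle_instruction` and the injected `convert` callable are illustrative assumptions; a real device would invoke its own transcoder at that point.

```python
import os

def handle_instruction(path: str, instruction: str, convert) -> str:
    """Hypothetical dispatch for a user's processing instruction.

    `convert` stands in for the device's transcoder: it takes the file
    path and returns True only if conversion to the target format succeeds.
    """
    if instruction == "delete":
        os.remove(path)            # drop the unparsable audio data
        return "deleted"
    if instruction == "convert":
        if convert(path):          # attempt conversion to the target format
            return "converted"
        os.remove(path)            # conversion cannot succeed: delete instead
        return "deleted"
    raise ValueError(f"unknown instruction: {instruction}")
```

Either path leaves the device holding only audio data it can parse, which is what rules out the repeated-parsing freeze described above.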
In one embodiment of the present application, the method further includes:
and if, in the process of pre-parsing the audio data, the data header of the audio data cannot be pre-parsed normally, the wearable device deletes the audio data.
As can be seen from the above, if the data header of the audio data cannot be pre-parsed normally, no pre-parsed information can be obtained, so whether the audio data is audio data that the wearable device can parse cannot be determined according to the embodiments of the present application; the audio data can therefore be deleted.
In one embodiment of the present application, the bit rate in the pre-parsed information is obtained by:
after receiving audio data, the wearable device pre-analyzes a data head of the audio data to obtain the number of channels and the bit depth of the audio data;
the wearable device calculates the product of the sampling rate, the number of channels and the bit depth to obtain the bit rate of the audio data.
As can be seen from the above, because the bit rate of the audio data is determined by its sampling rate, number of channels, and bit depth, the bit rate of the audio data can be obtained once the data header has been pre-parsed to yield the sampling rate, the number of channels, and the bit depth.
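For a concrete header layout, the computation above can be demonstrated on a canonical 44-byte PCM WAV (RIFF) header. The patent itself does not fix a container format, so the field offsets here are an assumption taken from the standard RIFF/WAVE layout, and the function name is illustrative.

```python
import struct

def pre_parse_wav_header(header: bytes) -> dict:
    """Pre-parse a canonical 44-byte PCM WAV header and derive the bit rate."""
    if len(header) < 36 or header[0:4] != b"RIFF" or header[8:12] != b"WAVE":
        # Header cannot be pre-parsed normally; per the embodiment above,
        # the caller may then delete the file.
        raise ValueError("not a RIFF/WAVE header")
    # Little-endian fields of the standard "fmt " chunk:
    channels, sample_rate = struct.unpack_from("<HI", header, 22)
    bit_depth = struct.unpack_from("<H", header, 34)[0]
    # bit rate = sampling rate x number of channels x bit depth
    return {"channels": channels, "sample_rate": sample_rate,
            "bit_depth": bit_depth,
            "bitrate": sample_rate * channels * bit_depth}
```

For 16-bit stereo audio at 44.1 kHz this yields 44100 x 2 x 16 = 1,411,200 bit/s.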
In an embodiment of the present application, judging whether the pre-parsed information belongs to a preset range that the wearable device can parse includes:
the wearable device judges whether the format in the pre-parsed information belongs to a preset format range that the wearable device can parse;
if the format does not belong to the preset format range, the wearable device performs the step of generating prompt information indicating that the audio data is abnormal and, after receiving a processing instruction input by the user for the prompt information, processing the audio data as indicated by the processing instruction;
if the format belongs to the preset format range, the wearable device judges whether the bit rate in the pre-parsed information belongs to a preset bit range that the device can parse and whether the sampling rate belongs to a preset sampling range that the device can parse;
if the bit rate is within the preset bit range and the sampling rate is within the preset sampling range, the wearable device performs the step of parsing and playing the audio data after receiving a play instruction for playing the audio data;
otherwise, the wearable device performs the step of generating prompt information indicating that the audio data is abnormal and, after receiving a processing instruction input by the user for the prompt information, processing the audio data as indicated by the processing instruction.
As can be seen from the above, because the pre-parsed information includes the format, the sampling rate, and the bit rate, it may first be determined whether the format belongs to the preset format range. If it does not, the pre-parsed information does not belong to the preset range that the wearable device can parse. If it does, it may then be determined whether the sampling rate belongs to the preset sampling range and the bit rate to the preset bit range; if both do, the pre-parsed information belongs to the preset range that the wearable device can parse, and otherwise it does not.
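The staged judgement described in this embodiment (format first, then bit rate and sampling rate only for an in-range format) can be sketched as follows. The ranges are passed in explicitly, and all names are illustrative rather than taken from the patent.

```python
def judge_staged(info: dict, format_range, bit_range, sampling_range):
    """Return ("playable", None) or ("abnormal", <first failing check>)."""
    # Step 1: an out-of-range format is decisive on its own.
    if info["format"] not in format_range:
        return ("abnormal", "format")
    # Step 2: only for an in-range format, check bit rate and sampling rate.
    lo, hi = bit_range
    if not lo <= info["bitrate"] <= hi:
        return ("abnormal", "bitrate")
    lo, hi = sampling_range
    if not lo <= info["sample_rate"] <= hi:
        return ("abnormal", "sample_rate")
    return ("playable", None)
```

Checking the format first is the cheap early exit: the numeric ranges need not be consulted at all when the container format alone already rules the file out.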
In a second aspect, an embodiment of the present application provides an electronic device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform the steps of any of the first aspects.
In a third aspect, an embodiment of the present application provides a computer readable storage medium, where the computer readable storage medium includes a stored program, where when the program runs, the program controls a device in which the computer readable storage medium is located to execute the method of any one of the first aspects.
The beneficial effects of the embodiments of the present application are as follows:
An embodiment of the present application provides an audio data processing method applied to a wearable device, the method comprising the following steps: after audio data is received, pre-parsing the data header of the audio data to obtain pre-parsed information, where the pre-parsed information includes the bit rate of the audio data, the sampling rate of the audio data, and the format of the audio data; judging whether the pre-parsed information belongs to a preset range that the wearable device can parse; if so, parsing and playing the audio data after a play instruction for the audio data is received; otherwise, generating prompt information indicating that the audio data is abnormal and, after a processing instruction input by the user for the prompt information is received, processing the audio data as indicated by the processing instruction.
Drawings
Fig. 1 is a schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 2 is a schematic flow chart of audio data processing according to the related art;
Fig. 3 is a flowchart of a first audio data processing method according to an embodiment of the present application;
Fig. 4 is a flowchart of a second audio data processing method according to an embodiment of the present application;
Fig. 5 is a flowchart of a third audio data processing method according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a first audio data processing method according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a second audio data processing method according to an embodiment of the present application;
Fig. 8 is a schematic flow chart of pre-parsing audio data according to an embodiment of the present application;
Fig. 9 is a schematic flow chart of audio data conversion according to an embodiment of the present application;
Fig. 10 is a flowchart of a fourth audio data processing method according to an embodiment of the present application.
Detailed Description
For a better understanding of the technical solution of the present application, the following detailed description of the embodiments of the present application refers to the accompanying drawings.
In order to clearly describe the technical solutions of the embodiments of the present application, words such as "first" and "second" are used in the embodiments to distinguish between identical or similar items that have substantially the same function and effect. For example, a first instruction and a second instruction merely distinguish different user instructions, with no order implied. Those skilled in the art will appreciate that words such as "first" and "second" do not limit quantity or execution order, and that the items they qualify are not necessarily different.
In the present application, the words "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The embodiment of the application can be applied to electronic devices such as tablet computers, personal computers (personal computer, PC), personal digital assistants (personal digital assistant, PDA), smart watches, netbooks, wearable electronic devices, augmented reality (augmented reality, AR) devices, virtual Reality (VR) devices, vehicle-mounted devices, intelligent automobiles, robots, intelligent glasses, intelligent televisions and the like.
As shown in fig. 1, fig. 1 is a schematic diagram of an electronic device according to an embodiment of the present application. The electronic device shown in fig. 1 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (Universal Serial Bus, USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, cameras 1-n 193, displays 1-n 194, subscriber identity module (Subscriber Identity Module, SIM) card interfaces 1-n 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device. In other embodiments of the present application, the electronic device may include more or fewer components than illustrated, some components may be combined or split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (Application Processor, AP), a modem processor (modem), a graphics processor (Graphics Processing Unit, GPU), an image signal processor (Image Signal Processor, ISP), a controller, a video codec, a digital signal processor (Digital Signal Processor, DSP), a baseband processor, and/or a neural-Network Processor (NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The processor 110 may generate operation control signals according to the instruction operation code and the timing signals to complete instruction fetching and instruction execution control.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (Inter-Integrated Circuit, I2C) interface, an integrated circuit built-in audio (Inter-Integrated Circuit Sound, I2S) interface, a pulse code modulation (Pulse Code Modulation, PCM) interface, a universal asynchronous receiver Transmitter (Universal Asynchronous Receiver/Transmitter, UART) interface, a mobile industry processor interface (Mobile Industry Processor Interface, MIPI), a General-Purpose Input/Output (GPIO) interface, and a subscriber identity module (Subscriber Identity Module, SIM) interface.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (Serial Data Line, SDA) and a serial clock line (Serial Clock Line, SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, a charger, a flash, the camera 193, etc., respectively, through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement the touch function of the electronic device.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may contain multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the I2S interface, to implement a function of answering a call through the bluetooth headset.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. The audio module 170 may transmit the acquired downstream audio stream data and upstream audio stream data to an electronic device wirelessly connected to the electronic device through the wireless communication module 160.
In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface to implement a function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through a UART interface, so as to implement a function of obtaining a downstream audio stream through a bluetooth-connected electronic device.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as the display 194 and the camera 193. The MIPI interface includes a camera serial interface (Camera Serial Interface, CSI), a display serial interface (Display Serial Interface, DSI), and the like. In some embodiments, the processor 110 and the camera 193 communicate through a CSI interface to implement the photographing functions of the electronic device. The processor 110 and the display screen 194 communicate via a DSI interface to implement the display functionality of the electronic device.
It should be understood that the connection relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device. In other embodiments of the present application, the electronic device may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The wireless communication function of the electronic device may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G or the like for use on the first electronic device. In some embodiments, the transmission of call data between two electronic devices may be accomplished through the mobile communication module 150, for example, as a called party device, downstream audio stream data from the calling party device may be obtained, and upstream audio stream data may be transmitted to the calling party device.
The wireless communication module 160 may provide solutions for wireless communication applied to the electronic device, including wireless local area network (Wireless Local Area Networks, WLAN), Bluetooth (BT), global navigation satellite system (Global Navigation Satellite System, GNSS), frequency modulation (Frequency Modulation, FM), near field communication (Near Field Communication, NFC), infrared (Infrared, IR), and the like.
In some embodiments, the antenna 1 and the mobile communication module 150 of the electronic device are coupled, and the antenna 2 and the wireless communication module 160 are coupled, so that the electronic device can communicate with the network and other devices through wireless communication technology. In one embodiment of the application, the electronic device may implement a local area network connection with another electronic device through the wireless communication module 160. Wireless communication techniques may include global system for mobile communications (Global System for Mobile Communications, GSM), general packet radio service (General Packet Radio Service, GPRS), code Division multiple access (Code Division Multiple Access, CDMA), wideband code Division multiple access (Wideband Code Division Multiple Access, WCDMA), time Division-synchronous code Division multiple access (Time-Division-Synchronous Code Division Multiple Access, TD-SCDMA), long term evolution (Long Term Evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (Global Positioning System, GPS), a global navigation satellite system (Global Navigation Satellite System, GLONASS), a Beidou satellite navigation system (Beidou Navigation Satellite System, BDS), a Quasi zenith satellite system (Quasi-Zenith Satellite System, QZSS), and/or a satellite based augmentation system (Satellite Based Augmentation System, SBAS), among others.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), an Active-matrix or Active-matrix Organic Light-Emitting Diode (AMOLED), a flexible Light-Emitting Diode (Flex Light-Emitting Diode), a MiniLED, microLED, micro-OLED, a quantum dot Light-Emitting Diode (Quantum dot Light Emitting Diode, QLED), or the like. In some embodiments, the electronic device may include 1 or N display screens 194, N being a positive integer greater than 1.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD (Secure Digital Memory) card, to expand the storage capability of the electronic device. The external memory card communicates with the processor 110 through the external memory interface 120 to implement data storage functions, for example storing files such as music, video, and other audio files in the external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an operating system, and application programs (such as a sound playing function, an image playing function, and a recording function) required for at least one function, etc. The storage data area may store data created during use of the electronic device (e.g., upstream audio data, downstream audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (Universal Flash Storage, UFS), and the like. The processor 110 performs various functional applications of the electronic device and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor 110.
The electronic device may implement a call conflict handling function, etc. through the audio module 170, speaker 170A, receiver 170B, microphone 170C, earphone interface 170D, application processor, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The receiver 170B, also referred to as an "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device answers a call or plays a voice message, the voice transmitted by the caller device may be heard through the receiver 170B.
The microphone 170C, also referred to as a "mike" or "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak with the mouth close to the microphone 170C, inputting a sound signal into the microphone 170C to realize collection of the upstream audio.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. In some embodiments, the manual answer call function may be implemented when the user clicks an answer key on the display screen 194, and the manual hang-up call function may be implemented when the user clicks a hang-up key on the display screen 194.
The touch sensor 180K is also referred to as a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a "touchscreen". The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor 180K may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device at a location different from that of the display screen 194.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device may receive key inputs and generate key signal inputs related to user settings and function controls of the electronic device.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The SIM card interface 195 is used to connect a SIM card. A SIM card may be inserted into the SIM card interface 195 or removed from it to realize contact with and separation from the electronic device. The electronic device may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, and the like. Multiple cards, of the same or different types, may be inserted into the same SIM card interface 195 simultaneously. The SIM card interface 195 may also be compatible with different types of SIM cards and with external memory cards. The electronic device interacts with the network through the SIM card to realize functions such as calling and data communication. In some embodiments, the electronic device employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device and cannot be separated from it.
In order to facilitate understanding of the scheme provided by the embodiment of the present application, an application scenario of the present application is first described.
First, the audio data may be audio data transmitted by the user from a software application in a mobile phone, notebook computer, tablet computer or the like to the wearable device, or audio data downloaded to the wearable device through a network or the cloud. The software application may be watch management software, music software, social software, etc., and the audio data may be music data, voice message data, voice broadcast data, etc.
Referring to fig. 2, which is a flowchart of audio data processing in the related art, the flow may include the following steps: S201-S209.
Step S201: after receiving an audio data transmission instruction input by the user, the software application transmits the audio data to the secondary core of the wearable device.
Specifically, the software application is installed in an electronic device in communication connection with a wearable device, where the electronic device may be a mobile phone, a tablet computer, etc., and the communication connection may be a bluetooth connection, a wireless connection, etc.
Step S202: after the wearable device secondary core receives the audio data sent by the software application, it sends the audio data to the wearable device primary core.
Step S203: the wearable device primary core pre-parses the audio data using a third-party parsing algorithm, and obtains and stores the audio data and the pre-parse information.
Specifically, steps S201 to S203 belong to the process of transmitting audio data from a software application to the wearable device; when the user controls the wearable device to play the audio data, steps S204 to S206 may be executed.
Step S204: after receiving an audio data playing instruction input by the user, the wearable device front-end program sends the playing instruction to the wearable device primary core.
Step S205: after the wearable device primary core receives the playing instruction, it reads the stored audio data and pre-parses the audio data to obtain the pre-parse information.
Step S206: the wearable device primary core sends the pre-parse information and the audio data to the wearable device secondary core.
Step S207: the wearable device secondary core verifies the pre-parse information and the audio data and parses them further, so as to judge whether the audio data can be played normally.
Specifically, if the audio data can be played normally, step S209 is performed, otherwise step S208 is performed.
Step S208: the wearable device secondary core sends abnormality information indicating that the audio data cannot be played to the wearable device front-end program, so that the front-end program, based on the abnormality information, pops up a toast prompt in the wearable device UI to prompt the user that the audio data cannot be played normally.
Step S209: the wearable device secondary core plays the audio data through the smart PA (power amplifier).
Because the wearable device can only parse audio data whose bit rate, sampling rate and format are within a specified range, if the bit rate, sampling rate or format of the audio data sent to the wearable device falls outside that range, the wearable device cannot parse the audio data.
For example: the wearable device can parse audio data in format A using the third-party parsing algorithm corresponding to format A, and can parse audio data in format B using the algorithm corresponding to format B, but the received audio data is in format C and the wearable device has no third-party parsing algorithm corresponding to format C. The wearable device may then parse the audio data with the default algorithm corresponding to format A and fail to obtain parsing information. Because no parsing information is obtained, the wearable device primary core may repeatedly parse the audio data, which can cause the wearable device to crash or freeze when playing the audio data and degrade the user experience.
Alternatively, the external format of the audio data may be format A while its internal format is format B. For example: the actual suffix of the audio data corresponds to format B, but the user has changed the suffix to format A, so that format A is the external format and format B is the internal format of the audio data. After receiving the audio data, the wearable device determines from the external format that the audio data is in format A, while its actual format is format B; it therefore parses the audio data with the third-party parsing algorithm corresponding to format A and fails to obtain parsing information. Because no parsing information is obtained, the wearable device may repeatedly parse the audio data, which can cause the wearable device to crash or freeze when playing the audio data and degrade the user experience.
To address these problems, the front-end program of a software application installed in an electronic device such as a tablet computer or mobile phone could judge, based on the capability of the chip configured in the wearable device, whether the wearable device can parse the audio data; if so, the audio data is sent to the wearable device, and otherwise it is not sent. However, as the variety of software applications grows and more types of audio data are sent to the wearable device, there is no guarantee that the front-end program of every software application can accurately determine the capability of the chip configured in the wearable device.
In addition, the front-end program of a software application can only judge whether the wearable device can parse audio data when that audio data is transmitted from the software application to the wearable device. If the wearable device downloads audio data from a network or the cloud, no front-end program exists to make this judgment, so there is no guarantee that the audio data is audio data that the wearable device can parse.
In order to solve the above-mentioned problems, an embodiment of the present application provides an audio data processing method, referring to fig. 3, fig. 3 is a schematic flow chart of a first audio data processing method provided in the embodiment of the present application, which may include the following steps: step S301 to step S304.
Step S301: after receiving the audio data, pre-parse the data header of the audio data to obtain pre-analysis information.
The pre-analysis information includes: the bit rate of the audio data, the sampling rate of the audio data, and the format of the audio data.
Firstly, the wearable device can receive audio data from software applications in mobile phones, notebook computers, tablet computers and other devices, and can also receive audio data downloaded from a network and a cloud.
Specifically, because the wearable device can only parse audio data whose bit rate, sampling rate and format are within the specified range, after receiving the audio data it first pre-parses the data header of the audio data to obtain pre-analysis information including the bit rate, the sampling rate and the format. After obtaining the pre-analysis information, the wearable device can judge through the following steps whether it can parse the audio data.
In one embodiment of the present application, the audio data may be music data, voice message data, voice broadcast data, etc., such as: after receiving the voice broadcasting data downloaded from the cloud, the wearable device can pre-analyze the data head of the voice broadcasting data to obtain pre-analysis information, or after receiving the music data transmitted from the music software in the mobile device, the wearable device can pre-analyze the data head of the music data to obtain pre-analysis information.
In one embodiment of the present application, the wearable device further includes the following step a in the process of preresolving the audio data.
Step A: and if the data head of the audio data cannot be normally pre-analyzed in the process of pre-analyzing the audio data, deleting the audio data.
For example: the data header of the audio data is damaged, or information in the data header was lost while the audio data was being sent to the wearable device. The wearable device then determines, during pre-parsing, that the data header cannot be pre-parsed normally, so the pre-analysis information cannot be obtained. Since the embodiment of the present application then cannot determine whether the audio data is audio data that the wearable device can parse, the audio data may be deleted.
From the above, if the data header of the audio data cannot be pre-parsed normally, the pre-analysis information cannot be obtained; whether the audio data is audio data that the wearable device can parse then cannot be determined according to the embodiment of the present application, and the audio data may be deleted.
In addition, the header of the audio data may include: the format of the audio data, the sampling rate of the audio data, the bit depth of the audio data, the number of channels of the audio data, the data size of the audio data, etc., so that the sampling rate of the audio data and the format of the audio data can be easily obtained by pre-parsing the data header of the audio data.
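As a concrete illustration of pre-parsing a data header, the sketch below reads the fields listed above from a canonical 44-byte RIFF/WAVE header. The function name and byte offsets are illustrative assumptions based on the standard WAV layout, not code from the patent:

```python
import struct

def preparse_wav_header(data: bytes) -> dict:
    """Pre-parse a canonical 44-byte RIFF/WAVE data header (a sketch).

    Returns the fields named above: format, sampling rate, bit depth,
    number of channels, and data size. Raises ValueError when the
    header cannot be pre-parsed normally, the case in which the text
    says the audio data may be deleted.
    """
    if len(data) < 44 or data[0:4] != b"RIFF" or data[8:12] != b"WAVE":
        raise ValueError("data header cannot be pre-parsed normally")
    # 'fmt ' sub-chunk: codec tag, channel count, sampling rate start at offset 20
    codec, channels, sample_rate = struct.unpack_from("<HHI", data, 20)
    bit_depth = struct.unpack_from("<H", data, 34)[0]   # bits per sample
    data_size = struct.unpack_from("<I", data, 40)[0]   # 'data' chunk size
    return {"format": "WAV", "sample_rate": sample_rate,
            "channels": channels, "bit_depth": bit_depth,
            "data_size": data_size}
```

A real device would dispatch on the container type first (MP3, AAC, WAV, and so on); this sketch covers only the WAV case to show what "pre-parsing the data header" yields.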
In another embodiment of the present application, the bit rate in the above pre-parsed information may be obtained through the following steps B-C.
Step B: after receiving the audio data, pre-parse the data header of the audio data to obtain the number of channels and the bit depth of the audio data.
Specifically, since the data header of the audio data may include the number of channels and the bit depth of the audio data, the number of channels and the bit depth can be obtained by pre-parsing the data header of the audio data.
Step C: and calculating the product of the sampling rate, the number of channels and the bit depth to obtain the bit rate of the audio data.
In one embodiment of the present application, if the sampling rate of the audio data is 44.1 kHz, the number of channels is 2, and the bit depth is 8 bits, the product of the sampling rate, the number of channels and the bit depth may be calculated, giving a bit rate of 705.6 kbit/s.
From the above, since the bit rate of the audio data is related to the sampling rate, the number of channels and the bit depth of the audio data, the bit rate of the audio data can be obtained after the data header of the audio data is pre-parsed to obtain the sampling rate, the number of channels and the bit depth of the audio data.
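The product in step C can be written out directly; the one-line helper below is only an illustration of the calculation, not code from the patent:

```python
def bit_rate_bps(sample_rate_hz: int, channels: int, bit_depth: int) -> int:
    """Bit rate in bits per second = sampling rate x channels x bit depth."""
    return sample_rate_hz * channels * bit_depth

# 44.1 kHz, 2 channels, 8-bit depth -> 705 600 bit/s, i.e. 705.6 kbit/s
```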
Step S302: judging whether the pre-analysis information belongs to a preset range which can be analyzed by the wearable equipment.
Specifically, if the pre-analysis information belongs to the preset range that the wearable device can parse, the wearable device can parse the audio data, so step S303 may be executed; if the pre-analysis information does not belong to the preset range, the wearable device may not be able to parse the audio data, so step S304 may be executed.
In one embodiment of the present application, the step S302 may be implemented by the steps 302A-302B in fig. 4, which are described in detail in the following embodiments, which are not described in detail herein.
Step S303: after receiving a playing instruction for playing the audio data, analyzing and playing the audio data.
Specifically, if the pre-analysis information belongs to a preset range that the wearable device can analyze, it is indicated that the wearable device can analyze the audio data, so that after receiving a play instruction for playing the audio data, the wearable device analyzes and plays the audio data.
In one embodiment of the present application, if the pre-analysis information belongs to the preset range that the wearable device can parse, then after a playing instruction for playing the audio data is received, the wearable device front-end program sends the playing instruction to the wearable device primary core. The primary core parses the audio data to obtain parsing information and sends the parsing information to the wearable device secondary core, and the secondary core further parses the parsing information and plays the audio data.
Step S304: generate prompt information indicating that the audio data is abnormal, and after receiving a processing instruction input by the user for the prompt information, process the audio data as indicated by the processing instruction.
Specifically, if the pre-analysis information does not belong to the preset range that the wearable device can analyze, it is indicated that the wearable device cannot analyze the audio data, so that a prompt message indicating that the audio data is abnormal can be generated to prompt the user that the audio data cannot be analyzed and played.
In addition, after seeing the prompt information, the user can input a processing instruction for the prompt information. After receiving the processing instruction, the wearable device can process the audio data according to it, so that after the processing the wearable device can parse and play the audio data. This avoids the behavior in which the wearable device never obtains a parsing result and repeatedly parses the audio data, and ensures that the wearable device does not crash or freeze when playing the audio data.
In one embodiment of the present application, the step S304 may be implemented through steps 304A-304B in fig. 5, which are described in detail in the following embodiments, which are not described in detail herein.
From the above, because the wearable device can only parse and play audio data whose bit rate, sampling rate and format are within the specified range, after receiving the audio data it pre-parses the data header to obtain pre-analysis information including the bit rate, the sampling rate and the format, and then judges whether the pre-analysis information belongs to the preset range that the wearable device can parse. If so, the wearable device can parse the audio data; otherwise it cannot. According to the scheme provided by the embodiment of the present application, it can therefore be determined whether the audio data is audio data that the wearable device can parse.
In addition, if the pre-analysis information belongs to the preset range that the wearable device can parse, the audio data is parsed and played after a playing instruction for playing it is received; otherwise, prompt information indicating that the audio data is abnormal is generated, and after a processing instruction input by the user for the prompt information is received, the audio data is processed according to the processing instruction. In other words, for audio data that it can parse, the wearable device parses and plays the data directly; for audio data that it cannot parse, the wearable device does not parse or play the data directly, but first processes it. This ensures that the wearable device does not crash or freeze when playing audio data, improving the user experience.
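Steps S301 to S304 can be summarized in a short dispatch sketch. The preset values, the helper name and the string return codes below are illustrative assumptions, not the patent's actual code; a real device would use its own preset ranges and inter-core messaging:

```python
# Illustrative preset range (assumed values, not from the patent).
PRESET = {"formats": {"MP3", "AAC", "WAV"},
          "sample_rates": {44100, 48000}}

def process_received_audio(info):
    """Dispatch on the pre-analysis information obtained in step S301.

    `info` is a dict holding the format and sampling rate read from the
    data header, or None when the header could not be pre-parsed normally.
    """
    if info is None:
        return "delete"                          # step A: header damaged, delete
    if (info["format"] in PRESET["formats"]
            and info["sample_rate"] in PRESET["sample_rates"]):
        return "parse_and_play_on_instruction"   # step S303
    return "prompt_user_for_processing"          # step S304
```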
Referring to fig. 4, fig. 4 is a flowchart of a second audio data processing method according to an embodiment of the present application, and compared with the embodiment shown in fig. 3, the step S302 may include the following steps: S302A-S302B.
Step S302A: judge whether the format in the pre-analysis information belongs to the preset format range that the wearable device can parse.
Specifically, since the pre-analysis information includes the format, the sampling rate and the bit rate, when judging whether the pre-analysis information belongs to the preset range that the wearable device can parse, it may first be judged whether the format in the pre-analysis information belongs to the preset format range that the wearable device can parse. If the format does not belong to the preset format range, the pre-analysis information does not belong to the preset range and step S304 may be executed; otherwise, step S302B may be executed.
In an embodiment of the present application, the formats within the preset format range are all formats that the wearable device can parse. For example, suppose the first preset format range that a first wearable device can parse includes the MP3 (Moving Picture Experts Group Audio Layer III) format, the AAC (Advanced Audio Coding) format and the WAV (WaveForm) format. After the first wearable device pre-parses the received audio data, if the format of the audio data is MP3, i.e., the format belongs to the first preset format range, step S302B may be executed; and if the format of the audio data is FLAC, i.e., the format does not belong to the first preset format range, step S304 may be executed.
Or suppose the second preset format range that a second wearable device can parse includes the WAV format. After the second wearable device pre-parses the received audio data, if the format of the audio data is WAV, i.e., the format belongs to the second preset format range, step S302B may be executed; and if the format of the audio data is CDA, i.e., the format does not belong to the second preset format range, step S304 may be executed.
Step S302B: judging whether the bit rate in the pre-analysis information belongs to a preset bit range which can be analyzed by the wearable equipment and whether the sampling rate belongs to a preset sampling range which can be analyzed by the wearable equipment.
Specifically, if the format in the pre-analysis information belongs to the preset format range that the wearable device can parse, it may further be judged whether the bit rate in the pre-analysis information belongs to the preset bit range and whether the sampling rate belongs to the preset sampling range. If the format belongs to the preset format range, the bit rate belongs to the preset bit range and the sampling rate belongs to the preset sampling range, the pre-analysis information belongs to the preset range that the wearable device can parse, so step S303 may be executed. If the bit rate does not belong to the preset bit range, or the sampling rate does not belong to the preset sampling range, or neither belongs to its respective range, the pre-analysis information does not belong to the preset range, so step S304 may be executed.
In one embodiment of the present application, if the format of the audio data is MP3 and belongs to the preset format range, it may further be judged whether the bit rate of the audio data belongs to the preset bit range and whether the sampling rate belongs to the preset sampling range. When the format of the audio data is MP3, suppose the preset sampling range that the wearable device can parse is 44.1 kHz and 48 kHz, and the preset bit range is a bit rate greater than 288 KB/s. If the bit rate obtained by the wearable device is 300 KB/s and the sampling rate is 48 kHz, i.e., the bit rate is within the preset bit range and the sampling rate is within the preset sampling range, the pre-analysis information is within the preset range and step S303 may be executed. If the bit rate obtained is 200 KB/s and the sampling rate is 44.1 kHz, i.e., the sampling rate is within the preset sampling range but the bit rate is not within the preset bit range, the pre-analysis information is not within the preset range and step S304 may be executed.
From the above, since the pre-analysis information includes the format, the sampling rate and the bit rate, it may first be judged whether the format belongs to the preset format range. If not, it can be determined that the pre-analysis information does not belong to the preset range that the wearable device can parse. If so, it may then be judged whether the sampling rate belongs to the preset sampling range and the bit rate belongs to the preset bit range; if both do, the pre-analysis information belongs to the preset range that the wearable device can parse, and otherwise it does not. In this way it can be determined whether the pre-analysis information belongs to the preset range that the wearable device can parse.
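The two-stage judgment of steps S302A and S302B can be sketched with a per-format preset table. The MP3 entry uses the ranges quoted in the passage above (units as given there); the table structure and function name are assumptions for illustration:

```python
# Per-format presets; the MP3 values follow the example in the text
# (sampling rates 44.1 kHz and 48 kHz, bit rate greater than 288).
PRESETS = {
    "MP3": {"sample_rates": {44100, 48000}, "min_bit_rate": 288},
}

def in_preset_range(fmt, sample_rate, bit_rate):
    # Step S302A: does the format belong to the preset format range?
    preset = PRESETS.get(fmt)
    if preset is None:
        return False                 # format not parseable -> step S304
    # Step S302B: bit rate and sampling rate checks
    return (bit_rate > preset["min_bit_rate"]
            and sample_rate in preset["sample_rates"])
```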
Referring to fig. 5, fig. 5 is a flowchart of a third audio data processing method according to an embodiment of the present application, and compared with the embodiment shown in fig. 3, the step S304 may include the following steps: step S304A-step S304B.
Step S304A: delete the audio data after receiving a processing instruction, input by the user for the prompt information, indicating that the audio data should be deleted.
Specifically, after receiving the processing instruction indicating deletion of the audio data, the wearable device can delete the audio data. Because the audio data that cannot be parsed is deleted, the wearable device will not keep failing to obtain a parsing result and repeatedly parsing the audio data, which ensures that the wearable device does not crash or freeze when playing audio data.
In one embodiment of the present application, the front-end program of the wearable device may display a prompt message indicating that the audio data is abnormal in a UI (User Interface) Interface of the wearable device, where the User may learn that the audio data is abnormal through the prompt message in the UI Interface of the wearable device, and the User may input a processing instruction indicating that the audio data is deleted, and after receiving the processing instruction indicating that the audio data is deleted, the wearable device deletes the audio data.
Step S304B: after receiving a processing instruction, input by the user for the prompt information, indicating format conversion of the audio data, convert the audio data into target audio data.
The format of the target audio data is a target format that the wearable device can parse, that is: the bit rate of the target audio data is within the preset bit range, its sampling rate is within the preset sampling range, and its format is within the preset format range.
Specifically, after receiving the processing instruction indicating format conversion of the audio data, the wearable device can convert the audio data into target audio data whose format is a target format that the wearable device can parse. That is, after the audio data is converted into the target audio data, the wearable device can parse the target audio data, so the wearable device will not keep failing to obtain a parsing result and repeatedly parsing the audio data, which ensures that the wearable device does not crash or freeze when playing audio data.
In one embodiment of the present application, the wearable device front-end program can display prompt information indicating that the audio data is abnormal in the wearable device UI. The user can learn of the abnormality through this prompt information and input a processing instruction indicating format conversion of the audio data, and after receiving this processing instruction, the wearable device converts the audio data into the target audio data.
In addition, for the different formats that a wearable device can parse, the preset sampling ranges and preset bit ranges that the wearable device can parse also differ. For example: the preset format range includes the MP3 format, the AAC format and the WAV format; when the format of the audio data is MP3, the preset sampling range that the wearable device can parse is 44.1 kHz and 48 kHz, and the preset bit range is a bit rate greater than 288 KB/s. If the format of the audio data is converted into MP3, its bit rate is converted into 400 KB/s and its sampling rate is converted into 48 kHz, then the bit rate belongs to the preset bit range, the sampling rate belongs to the preset sampling range, and the format belongs to the preset format range, so the audio data has been successfully converted into target audio data.
In another embodiment of the present application, the audio data may be converted into the target audio data by a conversion chip configured in the wearable device, by conversion code written by developers, or by using an encoding library such as FFmpeg or pydub.
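As one way to realize such a conversion, the sketch below only assembles an FFmpeg command line that sets the target sampling rate and bit rate (the output suffix selects the target format). The file names and parameter values are illustrative assumptions, not values from the patent:

```python
def ffmpeg_convert_cmd(src, dst, sample_rate, bit_rate_kbps):
    """Build (but do not run) an FFmpeg command for the conversion."""
    return ["ffmpeg", "-y",
            "-i", src,                    # input audio data
            "-ar", str(sample_rate),      # target sampling rate
            "-b:a", f"{bit_rate_kbps}k",  # target audio bit rate
            dst]                          # e.g. "out.mp3" selects the MP3 format

# Usage (requires FFmpeg to be installed):
#   subprocess.run(ffmpeg_convert_cmd("in.flac", "out.mp3", 48000, 320))
```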
Further, the process of converting the audio data into target audio data that the wearable device can parse also includes the following steps D and E.
Step D: judge whether the audio data can be successfully converted into the target audio data.
Specifically, because the target audio data can be parsed by the wearable device, if the audio data can be successfully converted into the target audio data, the target audio data can be parsed and played after a playing instruction for playing it is received; if the audio data cannot be successfully converted into the target audio data, step E may be executed.
Step E: delete the audio data.
Specifically, if the audio data cannot be successfully converted into the target audio data, it is indicated that the audio data cannot be converted into the audio data which can be analyzed by the wearable device, so that the audio data can be deleted.
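Steps S304A and S304B, with steps D and E folded in, can be sketched as a single handler. The function and parameter names are assumptions; `convert_fn` stands in for whatever conversion mechanism (chip, custom code, or encoding library) the device uses and returns None on failure:

```python
def handle_processing_instruction(instruction, audio, convert_fn):
    """Sketch of steps S304A/S304B with the step D/E success check.

    Returns the audio data to keep, or None when the data is deleted.
    """
    if instruction == "delete":              # step S304A: delete the audio data
        return None
    if instruction == "convert":             # step S304B: convert to target format
        target = convert_fn(audio)           # step D: attempt the conversion
        if target is None:
            return None                      # step E: conversion failed, delete
        return target                        # target audio data is parseable
    raise ValueError("unknown processing instruction")
```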
If the data header of the audio data cannot be normally pre-parsed, the pre-parsing information cannot be obtained, so whether the wearable device can parse the audio data cannot be determined by the embodiment of the present application, and the audio data may be deleted. In addition, deleting the audio data prevents the wearable device from repeatedly parsing audio data for which a parsing result can never be obtained, and thus avoids the wearable device crashing or freezing when playing audio data.
As can be seen from the above, according to the scheme provided by the embodiment of the present application, after receiving a processing instruction input by the user indicating that the audio data cannot be parsed, the audio data is deleted. This prevents the wearable device from repeatedly parsing audio data for which it can never obtain a parsing result, and thus avoids crashes and freezes when the wearable device plays audio data. Alternatively, after receiving a processing instruction input by the user indicating that the audio data should undergo format conversion, the audio data is converted into target audio data. Because the format of the target audio data is a target format that the wearable device can parse, the wearable device can parse and play the target audio data after the conversion, which likewise prevents the repeated-parsing behavior and avoids crashes and freezes during playback.
Referring to fig. 6, which is a schematic structural diagram of a first audio data processing method according to an embodiment of the present application, in fig. 6, a frame layer of an application service frame of a wearable device is shown to include a system basic capability, where the system basic capability includes: communication services and audio services. Wherein the communication service includes: message service, call service, contacts, interconnection, and audio service includes: music playing, audio management, voice broadcasting, voice service and music control, wherein the audio management is used for realizing audio data processing.
Referring to fig. 7, a schematic structural diagram of a second audio data processing method according to an embodiment of the present application is provided, where in fig. 7, the method includes an application layer, a framework layer, a bluetooth device, and data processing, where the framework layer includes audio management, and the audio management includes: the system comprises a third party analysis algorithm module and a preprocessing module, wherein the preprocessing module comprises: a pre-parsing sub-module and an audio data conversion sub-module.
Specifically, the Bluetooth device is configured to transmit audio data to the pre-parsing sub-module, which determines whether the wearable device can parse the audio data. If the audio data can be parsed, it is sent to the third-party parsing algorithm module, which parses it and sends the parsed data to the data processing unit so that the audio data can be played. If the audio data cannot be parsed, the pre-processing module sends a prompt message indicating that the audio data is abnormal to the application layer, and the application layer displays the prompt message in the UI interface of the wearable device. The user then inputs a processing instruction for the audio data according to the prompt message: if the user inputs a processing instruction representing format conversion, the audio data conversion sub-module converts the audio data into target audio data that can be parsed and played; if the user inputs a processing instruction representing deletion, the pre-processing module deletes the audio data.
The process of parsing the audio data by the pre-parsing sub-module may be implemented by steps S801 to S807 in fig. 8, and the process of converting the audio data by the audio data conversion sub-module may be implemented by steps S901 to S904 in fig. 9.
Referring to fig. 8, fig. 8 is a schematic flow chart of pre-parsing audio data according to an embodiment of the present application, which may include the following steps: step S801 to step S807.
Step S801: after receiving the audio data, pre-analyzing the data head of the audio data to obtain pre-analyzing information comprising the format, bit rate and sampling flow of the audio data.
Step S802: and judging whether the pre-analysis information obtained by pre-analysis is normal information or not.
Specifically, if the pre-analysis information is normal information, step S804 is performed, otherwise step S803 is performed.
Step S803: and deleting the audio data.
Step S804: judging whether the format in the pre-analysis information belongs to a preset format range which can be analyzed by the wearable equipment.
Specifically, if the format in the pre-analysis information belongs to the range of the preset format that can be analyzed by the wearable device, step S806 is performed, otherwise step S805 is performed.
Step S805: generating prompt information representing abnormality of the audio data, deleting the audio data after receiving a processing instruction which is input by a user for the prompt information and represents deleting the audio data, or converting the audio data into target audio data after receiving a processing instruction which is input by the user for the prompt information and represents performing format conversion on the audio data.
Step S806: judging whether the bit rate in the pre-analysis information belongs to a preset bit range which can be analyzed by the wearable device and whether the sampling rate belongs to a preset sampling range which can be analyzed by the wearable device.
Specifically, if the bit rate is within the preset bit range and the sampling rate is within the preset sampling range, step S807 is executed, otherwise step S805 is executed.
Step S807: and sending the audio data to a third party analysis algorithm module so that the third party analysis algorithm module analyzes the audio data.
Specifically, the specific implementation of the steps S801 to S807 has been described in detail above, and will not be described herein.
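As a concrete illustration of the header-only pre-parsing in step S801, the canonical 44-byte WAV (RIFF) header can be inspected without decoding any audio payload. The field offsets follow the RIFF/WAVE layout; the function name and return shape are assumptions, and the bit rate is computed as the product of sampling rate, channel count and bit depth, as described elsewhere in this application:

```python
import struct

def preparse_wav_header(data: bytes):
    """Pre-parse only the data header of a WAV byte stream (no full decode).
    Returns format, sampling rate and bit rate, or None when the header
    cannot be pre-parsed normally (in which case the audio may be deleted)."""
    if len(data) < 36 or data[0:4] != b"RIFF" or data[8:12] != b"WAVE":
        return None
    # In the canonical header: channel count at offset 22, sample rate at 24,
    # bits per sample at 34 (all little-endian).
    channels, sample_rate = struct.unpack_from("<HI", data, 22)
    bit_depth, = struct.unpack_from("<H", data, 34)
    return {"format": "wav",
            "sample_rate_hz": sample_rate,
            "bit_rate_bps": sample_rate * channels * bit_depth}
```

For stereo 16-bit audio at 48 kHz this yields a bit rate of 1,536,000 bit/s, which can then be checked against the preset ranges in steps S804 and S806.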
Referring to fig. 9, fig. 9 is a schematic flow chart of audio data conversion according to an embodiment of the present application, which may include the following steps: step S901 to step S904.
Step S901: after receiving a processing instruction for performing format conversion on the audio data according to the indication information input by a user, converting the audio data into target audio data.
Wherein the format of the target audio data is a target format that the wearable device can parse, that is: the bit rate of the audio data belongs to the preset bit range, the sampling rate of the audio data belongs to the preset sampling range, and the format of the audio data belongs to the preset format range.
Step S902: it is determined whether the above-described audio data can be successfully converted into target audio data.
Specifically, if the above audio data can be successfully converted into the target audio data, step S903 is performed, otherwise step S904 is performed.
Step S903: and sending the target audio data to a third-party analysis algorithm module so that the third-party analysis algorithm module analyzes the target audio data.
Step S904: and deleting the audio data.
Specifically, the specific implementation of the steps S901 to S904 has been described in detail above, and will not be described herein.
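The control flow of steps S901 to S904 amounts to a convert-then-verify dispatch. It is sketched below with the conversion, parsing and deletion modules injected as plain callables; all of these stand-ins are hypothetical, since the real modules live inside the wearable device firmware:

```python
def handle_format_conversion(audio, convert, parse, delete):
    """Steps S901-S904 in miniature: attempt conversion to target audio
    data; on success hand the result to the third-party parsing module,
    otherwise delete the original audio data."""
    target = convert(audio)        # S901: attempt conversion
    if target is not None:         # S902: did conversion succeed?
        return parse(target)       # S903: forward target audio to parser
    delete(audio)                  # S904: conversion failed -> delete
    return None
```

Keeping the flow free of module-specific calls makes the success path (S903) and the failure path (S904) easy to exercise independently.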
Referring to fig. 10, fig. 10 is a flowchart of a fourth audio data processing method according to an embodiment of the present application, which may include the following steps: step S1001-step S1019.
Step S1001: the software application transmits audio data to the wearable device secondary core.
Step S1002: the wearable device secondary core sends the received audio data to the wearable device primary core.
Step S1003: the wearable device main core sends the audio data to the preprocessing module.
Step S1004: the preprocessing module performs preresolved on the data head of the audio data to obtain preresolved information comprising format, sampling rate and bit rate.
Step S1005: the preprocessing module judges whether pre-analysis information obtained by pre-analysis is normal information or not.
Specifically, if the pre-analysis information obtained by the pre-analysis is normal information, step S1008 is executed, otherwise steps S1006-S1007 are executed.
Step S1006: the preprocessing module sends anomaly information representing the audio data corruption to the wearable device front-end program.
Step S1007: the wearable device front-end program displays anomaly information representing the corruption of the audio data in the wearable device UI interface to alert the user that the audio data has been corrupted.
Step S1008: the preprocessing module judges whether the pre-analysis information belongs to a preset range which can be analyzed by the wearable equipment.
Specifically, if the format in the pre-parsing information does not belong to the preset format range, step S1009-step S1010 are executed; if the bit rate in the pre-analysis information does not belong to the preset bit range and/or the sampling rate does not belong to the preset sampling range, executing step S1011-step S1012; if the pre-analysis information belongs to the preset range that the wearable device can analyze, step S1013 is executed.
Step S1009: the preprocessing module sends abnormality information representing an abnormality in the format of the audio data to the wearable device front-end program.
Step S1010: the wearable device front-end program displays abnormality information representing an abnormality in the format of the audio data in the wearable device UI interface to prompt the user for the abnormality in the format of the audio data.
Step S1011: the preprocessing module sends abnormality information representing an abnormality of the sampling rate and/or the bit rate of the audio data to the wearable device front-end program.
Step S1012: the wearable device front-end program displays abnormality information representing an abnormality in the audio sampling rate and/or bit rate in the wearable device UI interface to prompt the user for an abnormality in the sampling rate and/or bit rate of the audio data.
Step S1013: and the preprocessing module sends the audio data to the wearable equipment main core.
Step S1014: and the wearable equipment main core analyzes the audio data by using a third party analysis algorithm to obtain and store the audio data and pre-analysis information.
Step S1015: after receiving a playing instruction for playing the audio data, the main core of the wearable device reads the stored audio data and pre-analyzes the audio data to obtain pre-analysis information.
Step S1016: the wearable device primary core sends the pre-resolution information and the audio data to the wearable device secondary core.
Step S1017: the wearable equipment pair checks the pre-analysis information and the audio data to further analyze so as to judge whether the audio data can be normally played.
Specifically, if the audio data can be played normally, step S1019 is executed, otherwise step S1018 is executed.
Step S1018: the wearable device auxiliary core sends abnormal information indicating that the audio data cannot be played to a wearable device front-end program, so that the wearable device front-end program ejects a blast prompt for prompting a user that the audio data cannot be played normally in the wearable device UI based on the abnormal information.
Step S1019: the wearable device secondary core plays the audio data based on the smart pa.
Specifically, the specific implementation of the step S1001 to the step S1017 is described in detail above, and will not be described herein.
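The branching of steps S1005 to S1013 can be summarised as a single classification function that maps the pre-parsing outcome to the message shown by the front-end program. The message strings and parameter names below are illustrative, not the front-end program's real output:

```python
def classify_preparse(info, preset_formats, preset_bit_range, preset_sample_rates):
    """Map pre-parsed header info to the outcome of steps S1005-S1013:
    either an abnormality message for the front-end program, or "ok",
    meaning the audio is forwarded to the wearable device main core."""
    if info is None:                                   # S1006-S1007: corrupted
        return "audio data corrupted"
    if info["format"] not in preset_formats:           # S1009-S1010
        return "format abnormal"
    if (info["bit_rate"] not in preset_bit_range
            or info["sample_rate"] not in preset_sample_rates):  # S1011-S1012
        return "sampling rate and/or bit rate abnormal"
    return "ok"                                        # S1013: send to main core
```

Ordering the checks from "corrupted" to "format" to "rate" mirrors the flow in fig. 10, where each failure short-circuits the remaining tests.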
In a specific implementation, the present application further provides a computer storage medium that may store a program; when the program runs, it controls the device in which the computer-readable storage medium is located to execute some or all of the steps in the foregoing embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
In a specific implementation, an embodiment of the present application further provides a computer program product, where the computer program product contains executable instructions, where the executable instructions when executed on a computer cause the computer to perform some or all of the steps in the above method embodiments.
Embodiments of the disclosed mechanisms may be implemented in hardware, software, firmware, or a combination of these implementations. Embodiments of the application may be implemented as a computer program or program code that is executed on a programmable system comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For the purposes of this application, a processing system includes any system having a processor such as, for example, a digital signal processor (Digital Signal Processor, DSP), microcontroller, application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. Program code may also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described in the present application are not limited in scope by any particular programming language. In either case, the language may be a compiled or interpreted language.
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed over a network or through other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or a tangible machine-readable memory used to transmit information over the Internet in an electrical, optical, acoustical or other form of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
In the drawings, some structural or methodological features may be shown in a particular arrangement and/or order. However, it should be understood that such a particular arrangement and/or ordering may not be required. Rather, in some embodiments, these features may be arranged in a different manner and/or order than shown in the drawings of the specification. Additionally, the inclusion of structural or methodological features in a particular figure is not meant to imply that such features are required in all embodiments, and in some embodiments, may not be included or may be combined with other features.
It should be noted that, in the embodiments of the present application, each unit/module mentioned in each device is a logic unit/module, and in physical terms, one logic unit/module may be one physical unit/module, or may be a part of one physical unit/module, or may be implemented by a combination of multiple physical units/modules, where the physical implementation manner of the logic unit/module itself is not the most important, and the combination of functions implemented by the logic unit/module is only a key for solving the technical problem posed by the present application. Furthermore, in order to highlight the innovative part of the present application, the above-described device embodiments of the present application do not introduce units/modules that are less closely related to solving the technical problems posed by the present application, which does not indicate that the above-described device embodiments do not have other units/modules.
It should be noted that in the examples and descriptions of this patent, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
While the application has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the application.
Claims (8)
1. An audio data processing method, applied to a wearable device, the method comprising:
the method comprises the steps that a wearable device auxiliary core of the wearable device receives audio data, a preprocessing module of the wearable device pre-analyzes a data head of the audio data to obtain pre-analysis information, and the pre-analysis information comprises: bit rate of audio data, sampling rate of audio data, format of audio data;
the preprocessing module judges whether the pre-analysis information belongs to a preset range which can be analyzed by the wearable equipment;
if yes, after a front-end program of the wearable device receives a playing instruction for playing the audio data, a wearable device main core of the wearable device analyzes the audio data to obtain analysis information, and a wearable device auxiliary core of the wearable device further analyzes the analysis information and plays the audio data;
otherwise, the preprocessing module generates prompt information representing abnormality of the audio data, and after receiving a processing instruction input by a user aiming at the prompt information, the preprocessing module performs the processing indicated by the processing instruction on the audio data, wherein the processing instruction is: a processing instruction representing deletion of the audio data or a processing instruction representing format conversion of the audio data.
2. The method according to claim 1, wherein after receiving a processing instruction input by a user for the prompt information, performing processing indicated by the processing instruction on the audio data, includes:
deleting the audio data after receiving a processing instruction which is input by a user aiming at the prompt information and used for indicating to delete the audio data;
after receiving a processing instruction, input by a user aiming at the prompt information, for performing format conversion on the audio data, converting the audio data into target audio data, wherein the format of the target audio data is a target format that can be parsed by the wearable device, that is: the bit rate of the audio data belongs to a preset bit range, the sampling rate of the audio data belongs to a preset sampling range, and the format of the audio data belongs to a preset format range.
3. The method according to claim 2, wherein the method further comprises:
in the process of converting the audio data into target audio data which can be analyzed by the wearable equipment, judging whether the audio data can be successfully converted into target audio data or not;
And deleting the audio data if the audio data cannot be successfully converted into the target audio data.
4. The method according to claim 1, wherein the method further comprises:
and if the data head of the audio data cannot be normally pre-analyzed in the process of pre-analyzing the audio data, deleting the audio data.
5. The method of claim 1, wherein obtaining the bit rate in the pre-parsed information comprises:
after receiving audio data, pre-analyzing a data head of the audio data to obtain the number of channels and the bit depth of the audio data;
and calculating the product of the sampling rate, the number of channels and the bit depth to obtain the bit rate of the audio data.
6. The method according to claim 1, wherein the determining whether the pre-resolution information is within a preset range that the wearable device can resolve includes:
judging whether the format in the pre-analysis information belongs to a preset format range which can be analyzed by the wearable equipment;
executing the step of generating prompt information representing abnormality of the audio data if the format does not belong to the preset format range, and after receiving a processing instruction input by a user aiming at the prompt information, carrying out processing indicated by the processing instruction on the audio data;
If the format belongs to the preset format range, judging whether the bit rate in the pre-analysis information belongs to the preset bit range which can be analyzed by the wearable equipment and whether the sampling rate belongs to the preset sampling range which can be analyzed by the wearable equipment;
if the bit rate is within the preset bit range and the sampling rate is within the preset sampling range, executing the step of analyzing and playing the audio data after receiving a playing instruction for playing the audio data;
otherwise, executing the step of generating prompt information representing the abnormality of the audio data, and after receiving a processing instruction input by a user aiming at the prompt information, carrying out processing indicated by the processing instruction on the audio data.
7. An electronic device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform the method of any one of claims 1-6.
8. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored program, wherein the program when run controls a device in which the computer readable storage medium is located to perform the method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310042274.6A CN116013334B (en) | 2023-01-28 | 2023-01-28 | Audio data processing method, electronic device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310042274.6A CN116013334B (en) | 2023-01-28 | 2023-01-28 | Audio data processing method, electronic device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116013334A CN116013334A (en) | 2023-04-25 |
CN116013334B true CN116013334B (en) | 2023-08-18 |
Family
ID=86024725
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310042274.6A Active CN116013334B (en) | 2023-01-28 | 2023-01-28 | Audio data processing method, electronic device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116013334B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102376333A (en) * | 2010-08-18 | 2012-03-14 | Tcl集团股份有限公司 | Multimedia playing terminal and method and device for playing multimedia files |
CN106384596A (en) * | 2016-09-22 | 2017-02-08 | 努比亚技术有限公司 | Audio data processing method and terminal |
CN111078930A (en) * | 2019-12-13 | 2020-04-28 | 集奥聚合(北京)人工智能科技有限公司 | Audio file data processing method and device |
CN111199743A (en) * | 2020-02-28 | 2020-05-26 | Oppo广东移动通信有限公司 | Audio coding format determining method and device, storage medium and electronic equipment |
CN113689864A (en) * | 2021-10-27 | 2021-11-23 | 北京百瑞互联技术有限公司 | Audio data processing method and device and storage medium |
CN114639392A (en) * | 2022-03-11 | 2022-06-17 | 深圳追一科技有限公司 | Audio processing method and device, electronic equipment and storage medium |
WO2022186470A1 (en) * | 2021-03-04 | 2022-09-09 | 삼성전자 주식회사 | Audio processing method and electronic device including same |
CN115394316A (en) * | 2022-08-23 | 2022-11-25 | 汉桑(南京)科技股份有限公司 | Audio processing method, system, device and storage medium |
2023-01-28: Application CN202310042274.6A filed; patent CN116013334B active.
Also Published As
Publication number | Publication date |
---|---|
CN116013334A (en) | 2023-04-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11893359B2 (en) | Speech translation method and terminal when translated speech of two users are obtained at the same time | |
CN111556479B (en) | Information sharing method and related device | |
CN113448482B (en) | Sliding response control method and device of touch screen and electronic equipment | |
CN111222836B (en) | Arrival reminding method and related device | |
CN113126948B (en) | Audio playing method and related equipment | |
CN113993226B (en) | Service processing method and device in terminal equipment supporting double cards | |
CN111274043B (en) | Near field communication method, near field communication device, near field communication system, storage medium and electronic equipment | |
CN116795753A (en) | Audio data transmission processing method and electronic equipment | |
CN116013334B (en) | Audio data processing method, electronic device and storage medium | |
CN113225361B (en) | Data transmission method and electronic equipment | |
CN116033057B (en) | Method for synchronizing sound recordings based on distributed conversation, electronic equipment and readable storage medium | |
CN116662130A (en) | Method for counting application use time length, electronic equipment and readable storage medium | |
CN116261124A (en) | Data transmission method and device, electronic equipment and intelligent terminal | |
CN114664306A (en) | Method, electronic equipment and system for editing text | |
CN113467821A (en) | Application program repairing method, device, equipment and readable storage medium | |
CN115485685A (en) | Application program safety detection method and device, storage medium and electronic equipment | |
CN115002543B (en) | Video sharing method, electronic device and storage medium | |
CN117133311B (en) | Audio scene recognition method and electronic equipment | |
CN116684036B (en) | Data processing method and related device | |
CN116662990B (en) | Malicious application identification method, electronic device, storage medium and program product | |
CN116668967B (en) | Push message processing method, system, terminal equipment, push server and medium | |
CN116321265B (en) | Network quality evaluation method, electronic device and storage medium | |
CN116700852B (en) | Card display method, terminal, storage medium and program product | |
CN116665643B (en) | Rhythm marking method and device and terminal equipment | |
WO2023078221A1 (en) | Language translation method and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||