CN111506291B - Audio data acquisition method, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN111506291B
Authority
CN
China
Prior art keywords
audio data
linux
management system
audio
multipath
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010148208.3A
Other languages
Chinese (zh)
Other versions
CN111506291A (en)
Inventor
王杰
李智勇
常乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing SoundAI Technology Co Ltd
Original Assignee
Beijing SoundAI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing SoundAI Technology Co Ltd
Priority to CN202010148208.3A
Publication of CN111506291A
Application granted
Publication of CN111506291B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/162 Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/448 Execution paradigms, e.g. implementations of programming paradigms
    • G06F9/4488 Object-oriented
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/545 Interprogram communication where tasks reside in different layers, e.g. user- and kernel-space

Abstract

The disclosure provides an audio data acquisition method, an audio data acquisition device, computer equipment and a storage medium, and relates to the technical field of voice recognition. The method comprises the following steps: acquiring multi-channel audio data; delivering the multi-channel audio data to the Linux audio management system; reading, through an application layer, the multi-channel audio data delivered to the Linux audio management system; and calling JNI to input the read multi-channel audio data to the Java layer of the application layer. The method reduces the complexity and operational difficulty of collecting multi-channel audio data, and optimizes the audio data collection process while meeting the requirements of far-field voice interaction scenarios.

Description

Audio data acquisition method, device, computer equipment and storage medium
Technical Field
The disclosure relates to the technical field of voice recognition, and in particular relates to an audio data acquisition method, an audio data acquisition device, computer equipment and a storage medium.
Background
With the continuous development of artificial intelligence, voice interaction has become an important branch of the field. In the process of realizing voice interaction, the accuracy of audio data acquisition affects subsequent steps such as voice recognition.
Currently, multiple microphones are used cooperatively to acquire multi-channel audio data, which is then fed into an intelligent voice processing algorithm for analysis. In the related art, to meet the acquisition requirements of multi-channel audio data in far-field interaction scenarios, the multi-channel audio data is first collected by a plurality of microphones and compressed in the HAL (Hardware Abstraction Layer) of the Android system, then transmitted to the Android application layer through the original Audio channel of the Android framework layer; the application layer decompresses the compressed multi-channel audio data and passes it to the intelligent voice processing algorithm for analysis.
In the related art, the multi-channel audio data must be compressed in the HAL layer of the Android system. Achieving this requires a certain technical threshold to change the system source code of the HAL layer at the lower layer of the Android system, requires separate adaptation for different Android systems, and requires changing the original Android system, so the complexity and operational difficulty of collecting multi-channel audio data are high.
Disclosure of Invention
The disclosure provides an audio data acquisition method, an audio data acquisition device, computer equipment and a storage medium, which can reduce the complexity and operational difficulty of collecting multi-channel audio data. The technical solution is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided an audio data acquisition method, the method comprising:
acquiring multiple paths of audio data, wherein the multiple paths of audio data are audio data acquired through multiple microphones;
transmitting the multipath audio data to an audio management system of Linux;
reading the multipath audio data transmitted to the Linux audio management system through an application layer;
and calling the JNI to input the read multipath audio data to a Java layer of the application layer.
In one possible implementation manner, the acquiring multiple paths of audio data includes:
and acquiring the multipath audio data by calling microphone driving to drive a plurality of microphones.
In one possible implementation, the audio management system for delivering the multiple paths of audio data to Linux includes:
and transmitting the multipath audio data to an audio management system of Linux through the microphone driver.
In one possible implementation, the reading, by an application layer, the multiple paths of audio data that are delivered to the Linux audio management system includes:
acquiring a node handle from a virtual device node of the Linux audio management system through the application layer, wherein the virtual device node is a mapping of the microphone driver;
and reading the multipath audio data transmitted to the audio management system of the Linux in a file read-write mode according to the node handle.
In one possible implementation, the Linux audio management system is Tinyalsa.
In one possible implementation, the method further includes:
adjusting parameters for acquiring the multipath audio data by setting parameters of the Linux audio management system;
wherein, the parameters for collecting the multipath audio data comprise: the number of microphone channels, the sampling rate, the sampling bit depth, the sampling node, etc.
According to a second aspect of embodiments of the present disclosure, there is provided an audio data acquisition device, the device comprising:
the acquisition module is used for acquiring multiple paths of audio data, wherein the multiple paths of audio data are audio data acquired through a plurality of microphones;
The transmission module is used for transmitting the multipath audio data to an audio management system of Linux;
the reading module is used for reading the multipath audio data transmitted to the Linux audio management system through an application layer;
and the input module is used for calling the JNI to input the read multipath audio data to the Java layer of the application layer.
In one possible implementation manner, the acquiring module is configured to acquire the multiple paths of audio data by invoking a microphone driver to drive a plurality of microphones.
In one possible implementation, the delivering module is configured to deliver the multiple paths of audio data to an audio management system of Linux through the microphone driver.
In one possible implementation, the reading module includes:
an obtaining sub-module, configured to obtain, by using the application layer, a node handle from a virtual device node of the Linux audio management system, where the virtual device node is a mapping of the microphone driver;
and the reading sub-module is used for reading the multipath audio data transmitted to the Linux audio management system in a file read-write mode according to the node handle.
In one possible implementation, the Linux audio management system is Tinyalsa.
In one possible implementation, the apparatus further includes:
the adjusting module is used for adjusting the parameters for acquiring the multipath audio data by setting the parameters of the Linux audio management system;
wherein, the parameters for collecting the multipath audio data comprise: the number of microphone channels, the sampling rate, the sampling bit depth, the sampling node, etc.
According to a third aspect of embodiments of the present disclosure, there is provided an audio data acquisition device, the device comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to:
acquiring multiple paths of audio data, wherein the multiple paths of audio data are audio data acquired through multiple microphones;
transmitting the multipath audio data to an audio management system of Linux;
reading the multipath audio data transmitted to the Linux audio management system through an application layer;
and calling the JNI to input the read multipath audio data to a Java layer of the application layer.
According to a fourth aspect of embodiments of the present disclosure, there is provided an apparatus comprising a processor and a memory having stored therein at least one instruction, at least one program, code set or instruction set, which is loaded and executed by the processor to implement the audio data acquisition method according to any of the alternatives of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes or a set of instructions, the at least one instruction, the at least one program, the set of codes or the set of instructions being loaded and executed by a processor to implement the audio data acquisition method according to any one of the alternatives of the first aspect.
The technical scheme provided by the disclosure can comprise the following beneficial effects:
acquiring multiple paths of audio data; transmitting the multipath audio data to an audio management system of Linux; the method comprises the steps of reading multipath audio data transmitted to an audio management system of Linux through an application layer; and calling the JNI to input the read multipath audio data to a Java layer of an application layer. In the process of multi-channel audio data acquisition, the complexity and the operation difficulty of acquiring multi-channel audio data are reduced, and the process of audio data acquisition is optimized while the requirement of far-field voice interaction scenes is met.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 illustrates a schematic structure of a terminal provided by an exemplary embodiment of the present disclosure;
FIG. 2 illustrates a system architecture diagram of Android, as shown in an exemplary embodiment of the present disclosure;
FIG. 3 illustrates a flow chart of an audio data collection method shown in an exemplary embodiment of the present disclosure;
FIG. 4 illustrates a schematic diagram of an audio data acquisition flow architecture shown in an exemplary embodiment of the present disclosure;
FIG. 5 illustrates a flow chart of an audio data collection method shown in an exemplary embodiment of the present disclosure;
FIG. 6 illustrates a block diagram of an audio data acquisition device shown in an exemplary embodiment of the present disclosure;
FIG. 7 is a block diagram of a computer device shown in accordance with an exemplary embodiment;
fig. 8 is a block diagram of a computer device, according to an example embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
It should be understood that references herein to "a number" mean one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate that A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
With the research and advancement of artificial intelligence technology, it is being studied and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, drones, robots, smart medical care, smart customer service, and so on. It is believed that with the development of technology, artificial intelligence will be applied in more fields and play an increasingly important role.
The disclosure relates to the technical field of smart home, in particular to an audio signal processing method.
First, some nouns involved in the present disclosure are explained.
1) Artificial Intelligence (Artificial Intelligence, AI)
Artificial intelligence is the theory, method, technology and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that machines have the functions of perception, reasoning and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
2) Voice technology (Speech Technology)
The key technologies of speech technology are automatic speech recognition (Automatic Speech Recognition, ASR) and speech synthesis (Text To Speech, TTS). Enabling computers to listen, see, speak and feel is the development direction of future human-computer interaction, and voice is expected to become one of the best human-computer interaction modes in the future.
3) Android (Android system)
Android is a free and open-source operating system based on Linux, led and developed by Google and the Open Handset Alliance. It is mainly used in computer equipment, where the computer equipment can be a terminal, such as a smart phone, a tablet computer or a smart watch, or a smart device with a voice interaction function, such as a smart speaker, a smart TV set-top box or a smart robot.
Referring to fig. 1, a schematic diagram of a terminal according to an exemplary embodiment of the present disclosure is shown.
As shown in fig. 1, the terminal includes a motherboard 110, an external input/output device 120, a memory 130, an external interface 140, a capacitive touch system 150, and a power supply 160.
Wherein, the motherboard 110 has integrated therein processing elements such as a processor and a controller.
The external input/output device 120 may include a display component (such as a display screen), a sound playing component (such as a speaker), a sound collecting component (such as a microphone), various types of keys, and the like.
The memory 130 has stored therein program codes and data.
The external interface 140 may include a headset interface, a charging interface, a data interface, and the like.
The capacitive touch system 150 may be integrated into a display component or key of the external input/output device 120, and the capacitive touch system 150 is used to detect a touch operation performed by a user on the display component or key.
The power supply 160 is used to power the other various components in the terminal.
The Android system adopts a layered architecture that can be divided into four layers. Referring to fig. 2, which shows a schematic diagram of the Android system architecture according to an exemplary embodiment of the present disclosure, the Android system architecture comprises an application layer (Application) 210, an application framework layer (Application Framework) 220, a system runtime layer (Libraries and Android Runtime) 230 and a Linux kernel layer (Linux Kernel) 240. Wherein:
The application layer 210 mainly consists of the application programs in the system, including programs written in the Java language that run on a virtual machine. Android is released together with a series of core application packages, which may include clients, SMS programs, calendars, maps, browsers, contact managers, and the like.
The application framework layer 220 mainly provides the various APIs (Application Programming Interface) that may be used when building an application. The application framework layer can be regarded as the core of an application program: it is a set of common conventions on which programs are jointly extended, while the main structure of each program stays consistent, so that programs remain clear, can meet different requirements, and do not interfere with each other.
The framework layer may be regarded as the layer implemented in the Java language, and the APIs defined in this layer are written in Java. It also contains JNI methods, whose interfaces are written in C/C++; these methods look up and call the underlying methods in the core library layer through a function table and finally access the Linux kernel. The roles of the framework layer are therefore:
1. Writing standardized modules in the Java language and packaging them into a framework that APP-layer developers can call when developing applications with specific services.
2. Calling the native methods of the core library layer through the Java Native Interface: the JNI library is loaded when the Dalvik virtual machine starts, and Dalvik addresses the JNI method directly and then calls it.
The system runtime layer 230: Android contains some C/C++ libraries that can be used by different components of the Android system. They provide services to developers through the Android application framework. For example, the system runtime layer may include the system C library, the media libraries, the Surface Manager, LibWebCore, and the like.
The Linux kernel layer 240 provides the underlying drivers for the various hardware of the Android device (e.g., display, audio, camera, Bluetooth, WiFi, power management), such as the display driver, keyboard driver, flash memory driver, camera driver, audio driver, Bluetooth driver, WiFi (Wireless Fidelity) driver and Binder IPC driver, as well as power management, including hardware clocks, memory allocation and sharing, low-memory management, kernel debugging, the log device, the Android IPC mechanism, and so on.
4) HAL (Hardware Abstraction Layer)
The HAL is mainly used to deal with portability and compatibility across different platforms. It presents the rest of the system with an abstract view of the hardware, in particular hiding the flaws and quirks of real hardware. These abstract devices take the form of machine-independent services (function calls and macros) that the other parts of the operating system and drivers can use. By using HAL services and indirect hardware addressing, the drivers and the kernel need only minimal changes when ported to new hardware. Porting the HAL itself is straightforward, since all machine-dependent code is concentrated in one place and the goal of the port is well defined: implementing all HAL services.
5) ALSA (Advanced Linux Sound Architecture, advanced Linux sound system)
ALSA is a Linux kernel component that provides device drivers for sound cards, replacing the original OSS (Open Sound System); it provides audio and MIDI (Musical Instrument Digital Interface) support on the Linux operating system.
6) Tinyalsa (Tiny Advanced Linux Sound Architecture, a simplified version of the advanced Linux sound system)
Tinyalsa is a lightweight library that encapsulates the ALSA interface of the kernel for simplifying ALSA programming in user space.
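For illustration, a minimal capture sketch using the Tinyalsa C API is given below. The sound card number, PCM device number and stream parameters (8 channels, 16 kHz, 16-bit samples) are assumptions chosen for an example microphone array, not values specified by this disclosure.

    #include <stdio.h>
    #include <stdlib.h>
    #include <tinyalsa/asoundlib.h>

    int main(void)
    {
        /* Assumed parameters: card 0, device 0, 8 microphone channels, 16 kHz, 16-bit. */
        struct pcm_config config = {
            .channels = 8,
            .rate = 16000,
            .period_size = 1024,
            .period_count = 4,
            .format = PCM_FORMAT_S16_LE,
        };

        struct pcm *pcm = pcm_open(0, 0, PCM_IN, &config);
        if (pcm == NULL || !pcm_is_ready(pcm)) {
            fprintf(stderr, "pcm_open failed: %s\n", pcm ? pcm_get_error(pcm) : "out of memory");
            return 1;
        }

        /* One buffer holds pcm_get_buffer_size() frames of interleaved 8-channel samples. */
        unsigned int bytes = pcm_frames_to_bytes(pcm, pcm_get_buffer_size(pcm));
        char *buffer = malloc(bytes);

        /* pcm_read() blocks until the requested amount of multi-channel data has been captured. */
        if (buffer != NULL && pcm_read(pcm, buffer, bytes) == 0) {
            printf("captured %u bytes of 8-channel audio\n", bytes);
        }

        free(buffer);
        pcm_close(pcm);
        return 0;
    }

Whether an 8-channel stream can actually be opened depends on the sound card and its driver; Tinyalsa itself performs no resampling or channel remixing.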
7) JNI (Java Native Interface)
JNI provides a set of APIs that enable Java to communicate with other languages (mainly C and C++ on Android).
Java's cross-platform nature constrains its ability to interact with the local machine, because it has almost no direct link to it. JNI is one way to give Java access to native operations. Java invokes native methods through JNI, and the native methods are stored in library files (DLL files on the Windows platform and SO files on UNIX-like systems). By calling the internal methods of these local library files, Java can interact closely with the local machine and call system-level interface methods.
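As a sketch of the JNI calls involved, the native function below copies a buffer of captured samples into a Java byte array that the Java layer can consume. The package, class and method names and the global buffer are hypothetical examples introduced here for illustration, not interfaces defined by this disclosure.

    #include <jni.h>
    #include <stddef.h>

    /* Assumed to be filled elsewhere by the native capture code. */
    extern unsigned char g_capture_buffer[];
    extern size_t g_capture_len;

    /* Matches a hypothetical Java declaration in com.example.audio.MultiMicReader:
     *     static native byte[] readFrames();
     */
    JNIEXPORT jbyteArray JNICALL
    Java_com_example_audio_MultiMicReader_readFrames(JNIEnv *env, jclass clazz)
    {
        jbyteArray out = (*env)->NewByteArray(env, (jsize)g_capture_len);
        if (out == NULL) {
            return NULL;  /* an OutOfMemoryError is already pending in the virtual machine */
        }
        /* Copy the interleaved multi-channel samples into the Java-owned array. */
        (*env)->SetByteArrayRegion(env, out, 0, (jsize)g_capture_len,
                                   (const jbyte *)g_capture_buffer);
        return out;
    }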
In a voice interaction scenario, interfering noise often exists in far-field and complex sound environments and affects the accuracy of voice recognition. Multiple microphones therefore need to cooperate to provide multi-position voice data so that the effective sound source can be recognized more accurately. Correspondingly, a terminal equipped with multiple microphones acquires multi-channel audio data and processes and analyzes it with an intelligent voice algorithm to obtain the information it carries, so as to realize voice interaction in far-field or complex scenarios. The traditional single-channel and dual-channel data acquisition in the terminal cannot meet the current far-field interaction requirements for multi-channel audio data.
In the related art, the terminal compresses the multi-channel audio data in the Android HAL layer, the Android application layer decompresses it, and the standard Android Audio channel forcibly converts the data stream, through encoding and decoding, into a stream that the standard channel can carry, so as to meet far-field interaction requirements. Specifically, the terminal uses the audio driver to collect the original multi-channel audio data through the microphones; the HAL layer at the lower layer of Android is modified; the original multi-channel audio data collected by the audio driver is input into the HAL layer; the HAL layer uses a compression algorithm to compress the original multi-channel audio data into two channels; the two-channel compressed audio data is uploaded to the Android framework layer and then to the Android application layer through the original Audio channel of the Android framework layer; after obtaining the two-channel compressed audio data, the Android application layer uses the corresponding decompression algorithm to restore the original multi-channel audio data for the far-field audio algorithm to use as raw data.
To implement the method in the related art, developers need to modify the Android system source code of the terminal, which involves a large workload, high difficulty and a high technical threshold. Because the Android systems in different terminals are not completely consistent, separate code adaptation is required for each Android system, so the method cannot be used universally. During the development of the Android system, multiple teams need to cooperate, which requires a long development cycle and high integration cost. The application layer in the above method needs additional functionality to implement decoding, which increases the complexity of the application. In addition, the original Android system needs to be modified, which affects the stability of the system to a certain extent and reduces it.
In order to solve the problems in the related art, the present disclosure provides an audio data acquisition method, which can reduce the complexity and the operation difficulty of acquiring multi-channel audio data, and optimize the process of audio data acquisition while meeting the requirements of far-field voice interaction scenes. Referring to fig. 3, a flowchart of an audio data collection method according to an exemplary embodiment of the present disclosure is shown, where the method may be performed by a computer device, which may be a terminal or an intelligent device having a voice interaction function, and the terminal may be implemented as the terminal shown in fig. 1, and the method includes:
In step 310, multiple paths of audio data are acquired, the multiple paths of audio data being audio data acquired by a computer device via multiple microphones.
The plurality of microphones may be implemented as a microphone assembly, and the number of channels of the multiple channels of audio data may correspond to the number of microphones included in the microphone assembly, for example, if the number of microphones included in the microphone assembly is 4, the sound input into the computer device by the microphone assembly is a sound source signal of 4 channels, where the sound source signal may be a sound signal sent by an object issuing a voice command.
When the microphone array in the voice interaction device collects the sound in the environment, the sound is sent to the microphone driver, which converts the sound waves into digital sound signals. Since the sound input into the voice interaction device is multi-channel, the sound signal output by the microphone driver after processing is also multi-channel, i.e., a multi-channel audio signal.
In the Android system, underlying drivers are provided for the various hardware of the Android device, including the microphone driver, also called the sound card driver. The microphone driver is the dedicated driver for the microphone, a common audio input device; it enables the microphone to work in the computer device, output smooth audio, and ensure the sound quality.
Step 320, the multi-channel audio data is transmitted to the audio management system of Linux.
Because the Android system is built on Linux, the Linux audio management system is integrated into the Android system; it is implemented in the system runtime library layer of the Android system.
In step 330, the multi-channel audio data transmitted to the Linux audio management system is read by the application layer.
In the embodiments of the present disclosure, the application layer includes an application program layer and an application framework layer.
In the Android system, an application layer can call a data file in the Linux audio management system in a non-standard interface mode, so that multi-channel audio data transmitted to the Linux audio management system are read.
And step 340, calling the JNI to input the read multipath audio data to a Java layer of the application layer.
In the Android system, the applications in the application layer are generally written in the Java language, while the libraries contained in the system runtime layer are built in C or C++; that is to say, the Linux audio management system is built in C or C++. To bridge calls between the application layer and the Linux audio management system, JNI can be used: JNI provides a set of APIs that allow Java and C/C++ to interoperate on Android.
Therefore, the computer device inputs the multi-channel audio data read by the application layer into the Java layer (the Android application program) of the application layer by calling JNI, for Android application developers to use.
Please refer to fig. 4, which is a schematic diagram of an audio data collection flow architecture illustrated in an exemplary embodiment of the present disclosure. As shown in fig. 4, the JNI and the Java layer are located in the application layer of the Android system of the computer device, where the application layer includes an application program layer and an application framework layer. The application layer can input the read multi-channel audio data into Java, for use by Android application developers, by calling the JNI; for the other parts of the figure, refer to the description above, which is not repeated here.
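To tie steps 310 to 340 together, the sketch below shows one possible shape of the native side: a capture thread reads interleaved multi-channel frames from a Tinyalsa handle and pushes each buffer up to the Java layer through a JNI callback. The cached JavaVM pointer, the listener object and its onFrames(byte[]) method are assumptions made for illustration; they are not interfaces defined by this disclosure.

    #include <jni.h>
    #include <stdlib.h>
    #include <tinyalsa/asoundlib.h>

    static JavaVM *g_vm;        /* assumed to be cached in JNI_OnLoad (not shown) */
    static jobject g_listener;  /* assumed global reference to a Java object exposing onFrames(byte[]) */

    static void *capture_loop(void *arg)
    {
        struct pcm *pcm = (struct pcm *)arg;   /* opened with PCM_IN by the caller */
        JNIEnv *env = NULL;

        /* A native thread must attach to the virtual machine before calling back into Java. */
        if ((*g_vm)->AttachCurrentThread(g_vm, &env, NULL) != JNI_OK)
            return NULL;

        jclass cls = (*env)->GetObjectClass(env, g_listener);
        jmethodID on_frames = (*env)->GetMethodID(env, cls, "onFrames", "([B)V");

        unsigned int bytes = pcm_frames_to_bytes(pcm, pcm_get_buffer_size(pcm));
        void *buf = malloc(bytes);

        while (buf != NULL && on_frames != NULL && pcm_read(pcm, buf, bytes) == 0) {
            jbyteArray frames = (*env)->NewByteArray(env, (jsize)bytes);
            if (frames == NULL)
                break;                                    /* out of memory in the virtual machine */
            (*env)->SetByteArrayRegion(env, frames, 0, (jsize)bytes, (const jbyte *)buf);
            (*env)->CallVoidMethod(env, g_listener, on_frames, frames);  /* hand the data to the Java layer */
            (*env)->DeleteLocalRef(env, frames);
        }

        free(buf);
        (*g_vm)->DetachCurrentThread(g_vm);
        return NULL;
    }

Pushing data through a callback is only one option; the Java layer could equally poll a native method that returns the next buffer, as in the earlier JNI sketch.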
In one possible implementation, the acquiring multiple paths of audio data includes:
multiple microphones are driven by invoking microphone driving to acquire multiple paths of audio data.
In one possible implementation, the audio management system for delivering multiple paths of audio data to Linux includes:
the multi-channel audio data is transmitted to the audio management system of Linux through microphone driving.
In one possible implementation, reading, by an application layer, multiple paths of audio data delivered to an audio management system of Linux, includes:
Acquiring a node handle from a virtual device node of an audio management system of Linux through an application layer, wherein the virtual device node is a mapping of microphone driving;
and reading the multipath audio data from the node handle in a file read-write mode.
In one possible implementation, the audio management system of Linux is tinyalsa.
In one possible implementation, the method further includes:
adjusting the parameters for acquiring the multi-channel audio data by setting the parameters of the Linux audio management system;
wherein, the parameters of collecting the multichannel audio data include: the number of microphone channels, the sampling rate, the sampling bit depth, the sampling node, etc.
In summary, according to the audio data collection method provided by the present disclosure, multiple paths of audio data are obtained; transmitting the multipath audio data to an audio management system of Linux; the method comprises the steps of reading multipath audio data transmitted to an audio management system of Linux through an application layer; and calling the JNI to input the read multipath audio data to a Java layer of the application layer. In the process of multi-channel audio data acquisition, the complexity and the operation difficulty of acquiring multi-channel audio data are reduced, and the process of audio data acquisition is optimized while the requirement of far-field voice interaction scenes is met.
Referring to fig. 5, a flowchart of an audio data collection method according to an exemplary embodiment of the present disclosure is shown, where the method may be performed by a computer device, which may be a terminal or an intelligent device having a voice interaction function, and the terminal may be implemented as the terminal shown in fig. 1, and the method includes:
step 510, multiple microphones are driven to acquire multiple paths of audio data by invoking microphone driving.
A driver, i.e., a device driver, is a configuration program written by a hardware manufacturer for an operating system; it contains information about the hardware device and enables the computer to communicate with the corresponding device. Drivers are special programs added to the operating system; without a driver, the hardware in the computer cannot work.
The microphone driver is the configuration program through which the computer communicates with the microphone; by calling the microphone driver, the computer can control the microphone assembly to acquire multi-channel audio data.
At step 520, the multiplexed audio data is delivered to the Linux audio management system via microphone drivers.
In one possible implementation, the Linux audio management system is Tinyalsa.
Tinyalsa is a user layer audio interface based on an ALSA kernel, is integrated in an Android system, and can be called by an application layer in a non-standard interface mode.
In one possible implementation manner, parameters for acquiring multiple paths of audio data can be adjusted by setting parameters of an audio management system of Linux;
wherein, the parameters of collecting the multichannel audio data include: the number of microphone channels, the sampling rate, the sampling bit depth, the sampling node, etc.
That is, the way the multi-channel audio data is collected can be changed by setting the parameters of the Linux audio management system. For example, by setting these parameters, audio data with different numbers of channels can be recorded, the sampling rate and the sampling bit depth can be adjusted, and the sampling node can be changed, so as to collect different audio data.
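A sketch of what such parameter adjustment could look like with Tinyalsa is given below: the fields of pcm_config select the channel count, sampling rate and sample format, while the card and device numbers select which capture node is sampled. All concrete values are illustrative assumptions.

    #include <tinyalsa/asoundlib.h>

    /* Open a capture stream with the requested acquisition parameters. */
    struct pcm *open_capture(unsigned int card, unsigned int device,
                             unsigned int channels, unsigned int rate)
    {
        struct pcm_config cfg = {
            .channels = channels,          /* number of microphone channels, e.g. 2, 4, 6 or 8 */
            .rate = rate,                  /* sampling rate in Hz, e.g. 16000 or 48000 */
            .format = PCM_FORMAT_S32_LE,   /* sampling bit depth: 32-bit little-endian samples */
            .period_size = 1024,
            .period_count = 4,
        };
        /* card and device select the sampling node (the capture device) that is read. */
        return pcm_open(card, device, PCM_IN, &cfg);
    }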
Step 530, obtain, by the application layer, a node handle from a virtual device node of the Linux audio management system, the virtual device node being a microphone driven mapping.
And step 540, reading the multipath audio data transmitted to the audio management system of the Linux in a file read-write mode according to the node handle.
A device node is created under /dev and is the hub connecting the kernel and the user layer; it is analogous to an inode on a hard disk and records the location and information of the hardware device. In Linux, all devices are stored as files under the /dev directory and accessed in a file manner; a device node is the Linux kernel's abstraction of a device, and one device node corresponds to one device. The application program accesses the device through a set of standardized calls that are independent of any particular driver, and the driver is responsible for mapping these standardized calls to the specific operations of the actual hardware.
The virtual device node corresponds to a mapping; in the embodiments of the present disclosure, it corresponds to the mapping of the microphone driver, through which the microphone driver itself can be operated, and the virtual device node may be located in Tinyalsa.
The node handle may be regarded as the device number or the name of the virtual device node, and the object to be operated can be obtained from it. For example, the application program may obtain the node handle from the virtual device node of Tinyalsa and associate it with the microphone driver, so that the microphone driver can be operated by operating the virtual device node.
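The following sketch illustrates the node-handle idea. On a Linux or Android device the capture hardware appears under /dev/snd as a virtual device node (the conventional ALSA naming is pcmC<card>D<device>c for capture); Tinyalsa opens that node internally and returns a handle (struct pcm *) through which the microphone driver is operated. The path format follows standard ALSA conventions, and the card and device numbers are assumptions.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    /* Return 1 if the ALSA capture node for the given card/device exists under /dev/snd,
     * i.e. the microphone driver has been mapped to a virtual device node. */
    int capture_node_present(unsigned int card, unsigned int device)
    {
        char path[64];
        struct stat st;

        snprintf(path, sizeof(path), "/dev/snd/pcmC%uD%uc", card, device);
        return stat(path, &st) == 0;
    }

Reading raw PCM from such a node still requires the ALSA ioctl handshake, which is exactly what Tinyalsa wraps; application code therefore reads through the pcm handle rather than calling read() on the node directly.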
And step 550, calling the JNI to input the read multipath audio data to a Java layer of the application layer.
The description of step 550 may refer to the related description in the embodiment shown in fig. 3, and will not be repeated here.
In summary, according to the audio data collection method provided by the present disclosure, multiple paths of audio data are obtained; transmitting the multipath audio data to an audio management system of Linux; the method comprises the steps of reading multipath audio data transmitted to an audio management system of Linux through an application layer; and calling the JNI to input the read multipath audio data to a Java layer of the application layer. In the process of multi-channel audio data acquisition, the complexity and the operation difficulty of acquiring multi-channel audio data are reduced, and the process of audio data acquisition is optimized while the requirement of far-field voice interaction scenes is met.
Referring to fig. 6, a block diagram of an audio data acquisition device according to an exemplary embodiment of the present disclosure is shown, where the device may be applied in a computer device, and the computer device may be a terminal or an intelligent device with a voice interaction function, where the terminal may be implemented as the terminal shown in fig. 1, to perform all or part of the steps of the method in any of the embodiments shown in fig. 3 and fig. 4, where the device includes:
an acquisition module 610 for acquiring multiple paths of audio data, the multiple paths of audio data being audio data acquired by the computer device through multiple microphones;
a delivering module 620, configured to deliver the multiple paths of audio data to an audio management system of Linux;
a reading module 630, configured to read, by the application layer, multiple paths of audio data that are delivered to the Linux audio management system;
and the input module 640 is used for calling the JNI to input the read multi-channel audio data to the Java layer of the application layer.
In one possible implementation, the acquiring module 610 is configured to acquire multiple paths of audio data by invoking a microphone driver to drive multiple microphones.
In one possible implementation, the delivery module 620 is configured to deliver multiple paths of audio data to the Linux audio management system via microphone drivers.
In one possible implementation, the reading module 630 includes:
an acquisition sub-module, configured to acquire, through an application layer, a node handle from a virtual device node of an audio management system of Linux, where the virtual device node is a mapping of microphone drivers;
and the reading sub-module is used for reading the multipath audio data transmitted to the Linux audio management system in a file read-write mode according to the node handle.
In one possible implementation, the Linux audio management system is Tinyalsa.
In one possible implementation, the apparatus further includes:
the adjusting module is used for adjusting the parameters of the acquired multi-channel audio data by setting the parameters of the audio management system of Linux;
wherein, the parameters of collecting the multichannel audio data include: the number of microphone channels, the sampling rate, the sampling bit depth, the sampling node, etc.
In summary, the audio data acquisition device provided by the present disclosure is applied in a computer device, and multiple paths of audio data are acquired; transmitting the multipath audio data to an audio management system of Linux; the method comprises the steps of reading multipath audio data transmitted to an audio management system of Linux through an application layer; and calling the JNI to input the read multipath audio data to a Java layer of the application layer. In the process of multi-channel audio data acquisition, the complexity and the operation difficulty of acquiring multi-channel audio data are reduced, and the process of audio data acquisition is optimized while the requirement of far-field voice interaction scenes is met.
An exemplary embodiment of the present disclosure provides an audio data acquisition apparatus capable of implementing all or part of the steps of the method of any one of the embodiments shown in fig. 3 and fig. 5. The apparatus is used in a computer device, and the computer device may be a terminal or an intelligent device with a voice interaction function, where the terminal may be implemented as the terminal shown in fig. 1. The apparatus may include:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to:
acquiring multiple paths of audio data, wherein the multiple paths of audio data are audio data acquired by a computer device through a plurality of microphones;
transmitting the multipath audio data to an audio management system of Linux;
the method comprises the steps of reading multipath audio data transmitted to an audio management system of Linux through an application layer;
and calling the JNI to input the read multipath audio data to a Java layer of the application layer.
In one possible implementation, the acquiring multiple paths of audio data includes:
multiple microphones are driven by invoking microphone driving to acquire multiple paths of audio data.
In one possible implementation, the audio management system for delivering multiple paths of audio data to Linux includes:
The multi-channel audio data is transmitted to the audio management system of Linux through microphone driving.
In one possible implementation, reading, by an application layer, multiple paths of audio data delivered to an audio management system of Linux, includes:
acquiring a node handle from a virtual device node of an audio management system of Linux through an application layer, wherein the virtual device node is a mapping of microphone driving;
and reading the multipath audio data from the node handle in a file read-write mode.
In one possible implementation, the audio management system of Linux is tinyalsa.
In one possible implementation, the method further includes:
adjusting the parameters for acquiring the multi-channel audio data by setting the parameters of the Linux audio management system;
wherein, the parameters of collecting the multichannel audio data include: the number of microphone channels, the sampling rate, the sampling bit depth, the sampling node, etc.
In summary, the audio data acquisition device provided by the present disclosure is applied in a computer device, and multiple paths of audio data are acquired; transmitting the multipath audio data to an audio management system of Linux; the method comprises the steps of reading multipath audio data transmitted to an audio management system of Linux through an application layer; and calling the JNI to input the read multipath audio data to a Java layer of the application layer. In the process of multi-channel audio data acquisition, the complexity and the operation difficulty of acquiring multi-channel audio data are reduced, and the process of audio data acquisition is optimized while the requirement of far-field voice interaction scenes is met.
Fig. 7 is a block diagram illustrating a computer device 700, according to an example embodiment. The computer device can be implemented as an intelligent device with voice interaction function in the above scheme of the disclosure. The computer apparatus 700 includes a central processing unit (Central Processing Unit, CPU) 701, a system Memory 704 including a random access Memory (Random Access Memory, RAM) 702 and a Read-Only Memory (ROM) 703, and a system bus 705 connecting the system Memory 704 and the central processing unit 701. The computer device 700 also includes a basic Input/Output system (I/O) 706, which helps to transfer information between various devices within the computer, and a mass storage device 707 for storing an operating system 713, application programs 714, and other program modules 715.
The basic input/output system 706 includes a display 708 for displaying information and an input device 709, such as a mouse, keyboard, or the like, for a user to input information. Wherein the display 708 and the input device 709 are coupled to the central processing unit 701 through an input output controller 710 coupled to a system bus 705. The basic input/output system 706 may also include an input/output controller 710 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input output controller 710 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 707 is connected to the central processing unit 701 through a mass storage controller (not shown) connected to the system bus 705. The mass storage device 707 and its associated computer-readable media provide non-volatile storage for the computer device 700. That is, the mass storage device 707 may include a computer readable medium (not shown) such as a hard disk or a Compact Disc Read-Only Memory (CD-ROM) drive.
The computer readable medium may include computer storage media and communication media without loss of generality. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other solid state memory technology, CD-ROM, Digital Versatile Disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will recognize that the computer storage medium is not limited to the above. The system memory 704 and the mass storage device 707 described above may be collectively referred to as memory.
According to various embodiments of the present disclosure, the computer device 700 may also be connected to a remote computer through a network, such as the Internet. That is, the computer device 700 may be connected to the network 712 through a network interface unit 711 coupled to the system bus 705, or the network interface unit 711 may be used to connect to other types of networks or remote computer systems (not shown).
The memory further includes one or more programs stored in the memory, and the central processor 701 implements all or part of the steps performed by the voice interaction device in the method shown in fig. 3 or 5 by executing the one or more programs.
The disclosed embodiments also provide a computer-readable storage medium storing computer software instructions for use with the above-described computer device, which contains a program designed to execute the above-described audio data acquisition method. For example, the computer readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc.
Fig. 8 is a block diagram illustrating a computer device 800, according to an example embodiment. The computer device may be implemented as a terminal in the above-described scheme of the present disclosure. For example, the terminal may be a terminal as shown in fig. 1.
In general, the computer device 800 includes: a processor 801 and a memory 802.
Processor 801 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 801 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array) or a PLA (Programmable Logic Array).
Memory 802 may include one or more computer-readable storage media, which may be non-transitory. Memory 802 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 802 is used to store at least one instruction for execution by processor 801 to implement the audio data acquisition method provided by the method embodiments in the present disclosure.
In some embodiments, the computer device 800 may further optionally include: a peripheral interface 803 and at least one peripheral device. The processor 801, the memory 802, and the peripheral interface 803 may be connected by a bus or signal line. Each peripheral device may be connected to the peripheral interface 803 by a bus, a signal line, or a circuit board. Specifically, the peripheral devices include: at least one of a radio frequency circuit 804, a touch display 805, a camera 806, an audio circuit 807, a positioning component 808, and a power supply 809.
Peripheral interface 803 may be used to connect at least one Input/Output (I/O) related peripheral to processor 801 and memory 802. In some embodiments, processor 801, memory 802, and peripheral interface 803 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 801, the memory 802, and the peripheral interface 803 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 804 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals.
In some embodiments, the computer device 800 also includes one or more sensors 810. The one or more sensors 810 include, but are not limited to: an acceleration sensor 811, a gyroscope sensor 812, a pressure sensor 813, a fingerprint sensor 814, an optical sensor 815, and a proximity sensor 816.
Those skilled in the art will appreciate that the structure shown in fig. 8 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
The memory also includes one or more programs stored in the memory, by which the central processor 801 implements all or part of the steps of the methods shown in fig. 3 or 5.
Embodiments of the present disclosure also provide a computer readable storage medium storing computer software instructions for use by the above-described computer device, where at least one instruction, at least one program, code set, or instruction set is stored, where the at least one instruction, at least one program, code set, or instruction set is loaded and executed by a processor to implement the above-described audio data collection method performed by the computer device. For example, the computer readable storage medium may be ROM, random access Memory (Random Access Memory, RAM), compact disk read-Only (CD-ROM), magnetic tape, floppy disk, optical data storage device, etc.
The disclosed embodiments also provide a computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which are loaded and executed by the processor to implement all or part of the steps in an audio data acquisition method as described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

1. A method of audio data acquisition, the method comprising:
acquiring multiple paths of audio data, wherein the multiple paths of audio data are audio data acquired through multiple microphones;
transmitting the multipath audio data to an audio management system of Linux;
reading the multipath audio data transmitted to the Linux audio management system through an application layer;
calling JNI to input the read multipath audio data to a Java layer of the application layer;
the reading, by an application layer, the multipath audio data delivered to the Linux audio management system, including:
acquiring a node handle from a virtual device node of the Linux audio management system through the application layer, wherein the virtual device node is a mapping of the microphone driver, and the application layer calls a data file in the Linux audio management system in a non-standard interface mode;
and reading the multipath audio data transmitted to the audio management system of the Linux in a file read-write mode according to the node handle.
2. The method of claim 1, wherein the acquiring multiple paths of audio data comprises:
and acquiring the multipath audio data by calling microphone driving to drive a plurality of microphones.
3. The method of claim 2, wherein said delivering the multipath audio data to an audio management system of Linux comprises:
and transmitting the multipath audio data to an audio management system of Linux through the microphone driver.
4. The method of claim 1, wherein the Linux audio management system is tinyalsa.
5. The method according to claim 4, wherein the method further comprises:
adjusting parameters for acquiring the multipath audio data by setting parameters of the Linux audio management system;
wherein the parameters for acquiring the multipath audio data comprise the number of microphone channels, the sampling rate, the sampling bit depth, the sampling node, and the like.
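For orientation only: when the Linux audio management system is tinyalsa, the acquisition parameters listed in claim 5 correspond roughly to the fields of tinyalsa's struct pcm_config. The sketch below is a minimal, non-limiting C example; the card/device numbers and parameter values are assumptions, and the legacy pcm_read call is used (newer tinyalsa releases also provide pcm_readi):

    #include <stdio.h>
    #include <stdlib.h>
    #include <tinyalsa/asoundlib.h>

    int main(void)
    {
        /* Acquisition parameters of claim 5: channel count, sampling rate,
           sampling bit depth; the concrete values are assumptions. */
        struct pcm_config config = {
            .channels     = 8,                  /* number of microphone channels */
            .rate         = 16000,              /* sampling rate in Hz */
            .format       = PCM_FORMAT_S16_LE,  /* 16-bit sampling depth */
            .period_size  = 1024,
            .period_count = 4,
        };

        /* Card 0, device 0 are placeholders; PCM_IN opens a capture stream. */
        struct pcm *pcm = pcm_open(0, 0, PCM_IN, &config);
        if (pcm == NULL || !pcm_is_ready(pcm)) {
            fprintf(stderr, "failed to open capture PCM: %s\n",
                    pcm ? pcm_get_error(pcm) : "allocation failed");
            return 1;
        }

        unsigned int bytes = pcm_frames_to_bytes(pcm, pcm_get_buffer_size(pcm));
        char *buffer = malloc(bytes);

        /* One blocking capture of interleaved multi-channel frames. */
        if (buffer != NULL && pcm_read(pcm, buffer, bytes) == 0) {
            /* buffer now holds multipath audio data ready to be passed on */
        }

        free(buffer);
        pcm_close(pcm);
        return 0;
    }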
6. An audio data acquisition device, the device comprising:
the acquisition module is used for acquiring multiple paths of audio data, wherein the multiple paths of audio data are audio data acquired through a plurality of microphones;
the transmission module is used for transmitting the multipath audio data to an audio management system of Linux;
the reading module is used for reading the multipath audio data transmitted to the Linux audio management system through an application layer;
the input module is used for calling JNI to input the read multipath audio data to the Java layer of the application layer;
the reading module comprises:
the acquisition sub-module is used for acquiring a node handle from a virtual device node of the Linux audio management system through the application layer, wherein the virtual device node is a mapping of the microphone driver, and the application layer calls a data file in the Linux audio management system in a non-standard interface mode;
and the reading sub-module is used for reading, according to the node handle, the multipath audio data transmitted to the Linux audio management system in a file read-write mode.
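As a minimal, non-limiting sketch of the input module's JNI step, the C function below reads one frame through the node handle and hands it to the Java layer as a byte array. The Java class com.example.audio.AudioBridge, its native method readFrame, the node path, and the frame size are all hypothetical names introduced for this illustration:

    #include <fcntl.h>
    #include <jni.h>
    #include <unistd.h>

    #define FRAME_BYTES 4096                 /* hypothetical frame size */

    /* Node handle obtained from the virtual device node; kept as a plain
       file descriptor so the data file is accessed in read-write mode. */
    static int g_node_handle = -1;

    /* Hypothetical native method bound to
       com.example.audio.AudioBridge.readFrame(); it reads one frame of
       multipath audio data and inputs it to the Java layer as a byte[]. */
    JNIEXPORT jbyteArray JNICALL
    Java_com_example_audio_AudioBridge_readFrame(JNIEnv *env, jobject thiz)
    {
        (void)thiz;
        unsigned char buf[FRAME_BYTES];

        if (g_node_handle < 0)
            g_node_handle = open("/dev/snd/multi_mic", O_RDONLY);  /* assumed path */
        if (g_node_handle < 0)
            return NULL;

        ssize_t n = read(g_node_handle, buf, sizeof(buf));
        if (n <= 0)
            return NULL;

        /* Copy the bytes that were read into a Java byte[] so the Java
           layer of the application can consume the multipath audio data. */
        jbyteArray out = (*env)->NewByteArray(env, (jsize)n);
        if (out != NULL)
            (*env)->SetByteArrayRegion(env, out, 0, (jsize)n, (const jbyte *)buf);
        return out;
    }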
7. The apparatus of claim 6, wherein the acquisition module is configured to acquire the multipath audio data by invoking a microphone driver to drive the plurality of microphones.
8. The apparatus of claim 7, wherein the transmission module is configured to transmit the multipath audio data to an audio management system of Linux via the microphone driver.
9. The apparatus of claim 6, wherein the Linux audio management system is tinyalsa.
10. The apparatus of claim 9, wherein the apparatus further comprises:
the adjusting module is used for adjusting the parameters for acquiring the multipath audio data by setting the parameters of the Linux audio management system;
wherein the parameters for acquiring the multipath audio data comprise the number of microphone channels, the sampling rate, the sampling bit depth, the sampling node, and the like.
11. An audio data acquisition device for use in a computer apparatus, the device comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to:
acquiring multiple paths of audio data, wherein the multiple paths of audio data are audio data acquired through multiple microphones;
transmitting the multipath audio data to an audio management system of Linux;
reading the multipath audio data transmitted to the Linux audio management system through an application layer;
calling JNI to input the read multipath audio data to a Java layer of the application layer;
wherein the reading, through the application layer, of the multipath audio data transmitted to the Linux audio management system comprises:
acquiring a node handle from a virtual device node of the Linux audio management system through the application layer, wherein the virtual device node is a mapping of the microphone driver, and the application layer calls a data file in the Linux audio management system in a non-standard interface mode;
and reading, according to the node handle, the multipath audio data transmitted to the Linux audio management system in a file read-write mode.
12. An apparatus comprising a processor and a memory having stored therein at least one instruction, at least one program, code set, or instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by the processor to implement the audio data acquisition method of any one of claims 1 to 5.
13. A computer readable storage medium having stored therein at least one instruction, at least one program, code set, or instruction set, the at least one instruction, the at least one program, the code set, or instruction set being loaded and executed by a processor to implement the audio data acquisition method of any one of claims 1 to 5.
CN202010148208.3A 2020-03-05 2020-03-05 Audio data acquisition method, device, computer equipment and storage medium Active CN111506291B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010148208.3A CN111506291B (en) 2020-03-05 2020-03-05 Audio data acquisition method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010148208.3A CN111506291B (en) 2020-03-05 2020-03-05 Audio data acquisition method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111506291A CN111506291A (en) 2020-08-07
CN111506291B true CN111506291B (en) 2024-01-09

Family

ID=71863955

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010148208.3A Active CN111506291B (en) 2020-03-05 2020-03-05 Audio data acquisition method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111506291B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112214213B (en) * 2020-10-27 2023-10-20 南方电网数字电网科技(广东)有限公司 Linux kernel development and management method and device, computer equipment and storage medium
CN112615853B (en) * 2020-12-16 2023-01-10 瑞芯微电子股份有限公司 Android device audio data access method
CN112883220B (en) * 2021-01-22 2023-05-26 北京雷石天地电子技术有限公司 Audio processing method, audio processing device and readable storage medium
CN113220262A (en) * 2021-03-26 2021-08-06 西安神鸟软件科技有限公司 Multi-application audio data distribution method and terminal equipment
CN113220261A (en) * 2021-03-26 2021-08-06 西安神鸟软件科技有限公司 Audio data acquisition method based on virtual microphone and terminal equipment
CN113220260A (en) * 2021-03-26 2021-08-06 西安神鸟软件科技有限公司 Multi-application audio data processing method and terminal equipment
CN113286182B (en) * 2021-04-02 2022-06-14 北京智象信息技术有限公司 Method and system for eliminating echo between TV and sound pickup peripheral
CN114879931B (en) * 2022-07-11 2022-11-22 南京芯驰半导体科技有限公司 Onboard audio path management method and system supporting multiple operating systems

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106454181A (en) * 2016-10-14 2017-02-22 青岛海信移动通信技术股份有限公司 Local video recording and synchronously pushing method based on Android platform, and law enforcement recorder
CN107301035A (en) * 2016-04-15 2017-10-27 中兴通讯股份有限公司 A kind of audio sync recording-reproducing system and method based on android system
CN109378017A (en) * 2018-09-26 2019-02-22 科大讯飞股份有限公司 A kind of way of recording, device, audio system, sound pick-up outfit and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106911831B (en) * 2017-02-09 2019-09-20 青岛海信移动通信技术股份有限公司 A kind of data processing method of the microphone of terminal and terminal with microphone

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107301035A (en) * 2016-04-15 2017-10-27 中兴通讯股份有限公司 A kind of audio sync recording-reproducing system and method based on android system
CN106454181A (en) * 2016-10-14 2017-02-22 青岛海信移动通信技术股份有限公司 Local video recording and synchronously pushing method based on Android platform, and law enforcement recorder
CN109378017A (en) * 2018-09-26 2019-02-22 科大讯飞股份有限公司 A kind of way of recording, device, audio system, sound pick-up outfit and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Guo Chaoyuan. Design and Implementation of an Audio Data Acquisition System. CNKI Excellent Master's Theses Full-text Database, 2018, Issue 10, pp. 29-31. *

Also Published As

Publication number Publication date
CN111506291A (en) 2020-08-07

Similar Documents

Publication Publication Date Title
CN111506291B (en) Audio data acquisition method, device, computer equipment and storage medium
JP6862632B2 (en) Voice interaction methods, devices, equipment, computer storage media and computer programs
CN108133707B (en) Content sharing method and system
KR20190024762A (en) Music Recommendation Method, Apparatus, Device and Storage Media
JP2020529032A (en) Speech recognition translation method and translation device
JP2019185062A (en) Voice interaction method, terminal apparatus, and computer readable recording medium
JP2020184298A (en) Speech skill creating method and system
JP2019015951A (en) Wake up method for electronic device, apparatus, device and computer readable storage medium
KR20210001082A (en) Electornic device for processing user utterance and method for operating thereof
US10976997B2 (en) Electronic device outputting hints in an offline state for providing service according to user context
JP6985113B2 (en) How to provide an interpreter function for electronic devices
CN112102836A (en) Voice control screen display method and device, electronic equipment and medium
WO2023103918A1 (en) Speech control method and apparatus, and electronic device and storage medium
KR20200107058A (en) Method for processing plans having multiple end points and electronic device applying the same method
CN111147530A (en) System architecture, multi-voice platform switching method, intelligent terminal and storage medium
CN111290746A (en) Object access method, device, equipment and storage medium
CN111770236B (en) Conversation processing method, device, system, server and storage medium
CN113535279A (en) Method and device for sharing audio equipment by Linux platform and android application
JP6944920B2 (en) Smart interactive processing methods, equipment, equipment and computer storage media
CN113157240A (en) Voice processing method, device, equipment, storage medium and computer program product
KR20220099322A (en) Electronic device and method for managing memory using the same
KR20200112791A (en) Method and apparatus for function of translation using earset
US10909049B1 (en) Converting a pin into a loopback pin
US20230393820A1 (en) MVVM Architecture-Based Application Development Method and Terminal
KR20230072356A (en) Method of reorganizing quick command based on utterance and electronic device therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant