CN117998006A - Audio and vibration cooperative control method and device - Google Patents

Audio and vibration cooperative control method and device

Info

Publication number
CN117998006A
CN117998006A
Authority
CN
China
Prior art keywords
vibration
audio
data
amplitude
volume adjustment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311598971.6A
Other languages
Chinese (zh)
Inventor
张琳
胡鹏龙
叶学兵
李涛
赵宪浩
赵子康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202311598971.6A
Publication of CN117998006A
Pending legal status

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Circuit For Audible Band Transducer (AREA)

Abstract

The present disclosure provides an audio and vibration cooperative control method and apparatus. The method includes: in the process of processing audio data and vibration data, in response to detecting a volume adjustment event, synchronously adjusting the amplitudes of the audio data and the vibration data based on the volume adjustment event; driving a speaker to operate based on the amplitude-adjusted audio data; and driving a vibration motor to operate based on the amplitude-adjusted vibration data. In the embodiments of the present disclosure, the output intensities of audio playback and the vibration effect can be adjusted synchronously based on a single volume adjustment event, without entering a settings page to adjust the motor vibration intensity separately, which greatly simplifies the user's adjustment operation and improves the user's operating experience.

Description

Audio and vibration cooperative control method and device
Technical Field
The present disclosure relates to the technical field of electronic devices, and in particular to an audio and vibration cooperative control method and apparatus.
Background
Electronic devices now feature more and more scenarios in which audio and haptics cooperate: for example, when music is played, the device vibrates along with the rhythm of the music; in a shooting game, the device vibrates with the sound of gunfire. Accordingly, more and more manufacturers are researching cooperative control of device audio and haptics to improve the user experience.
However, in the related art, the audio signal path and the vibration signal path are independent of each other, and their output intensities cannot be adjusted synchronously, resulting in a poor user experience.
Disclosure of Invention
In order to realize synchronous control of the audio and vibration effect output intensities, embodiments of the present disclosure provide an audio and vibration cooperative control method, an audio and vibration cooperative control apparatus, an electronic device, and a storage medium.
In a first aspect, embodiments of the present disclosure provide an audio and vibration cooperative control method, including:
in the process of processing audio data and vibration data, in response to detecting a volume adjustment event, synchronously adjusting the amplitudes of the audio data and the vibration data based on the volume adjustment event; and
driving a speaker to operate based on the amplitude-adjusted audio data, and driving a vibration motor to operate based on the amplitude-adjusted vibration data.
In some embodiments, the step of synchronously adjusting the magnitudes of the audio data and the vibration data based on the volume adjustment event in response to detecting the volume adjustment event during the processing of the audio data and the vibration data comprises:
the audio processing module receives sound vibration effect data sent by an audio hardware abstraction layer, wherein the sound vibration effect data comprises the audio data and the vibration data;
In response to detecting a volume adjustment event, the audio hardware abstraction layer sending a first adjustment parameter to the audio processing module based on the volume adjustment event;
The audio processing module adjusts the amplitude of the sound vibration effect data based on the first adjustment parameter.
In some embodiments, the driving the speaker to operate based on the amplitude-adjusted audio data and driving the vibration motor to operate based on the amplitude-adjusted vibration data includes:
the audio processing module performs channel separation on the sound vibration effect data with the amplitude adjusted to obtain target audio data and target vibration data;
the speaker is driven to operate based on the target audio data, and the vibration motor is driven to operate based on the target vibration data.
In some embodiments, the step of synchronously adjusting the magnitudes of the audio data and the vibration data based on the volume adjustment event in response to detecting the volume adjustment event during the processing of the audio data and the vibration data comprises:
in the process of processing the audio data and the vibration data, in response to detecting a volume adjustment event, the application framework layer adjusts the amplitude of the vibration data based on the volume adjustment event;
the audio hardware abstraction layer sends audio adjustment parameters to the audio processing module based on the volume adjustment event;
the audio processing module adjusts the amplitude of the audio data based on the audio adjustment parameter.
In some embodiments, the driving the speaker to operate based on the amplitude-adjusted audio data and driving the vibration motor to operate based on the amplitude-adjusted vibration data includes:
the vibration hardware abstraction layer drives the vibration motor to operate based on the amplitude-adjusted vibration data, and the audio processing module drives the speaker to operate based on the amplitude-adjusted audio data.
In some embodiments, the application framework layer adjusts the amplitude of the vibration data based on the volume adjustment event, comprising:
The application framework layer determines a vibration adjustment parameter based on the volume adjustment event and adjusts the amplitude of the vibration data based on the vibration adjustment parameter.
In some embodiments, the sound vibration effect data includes haptic OGG data.
In a second aspect, embodiments of the present disclosure provide an audio and vibration cooperative control apparatus, including:
An amplitude adjustment module configured to synchronously adjust magnitudes of the audio data and the vibration data based on a volume adjustment event in response to detecting the volume adjustment event during processing of the audio data and the vibration data;
and a drive control module configured to drive the speaker to operate based on the amplitude-adjusted audio data, and to drive the vibration motor to operate based on the amplitude-adjusted vibration data.
In some embodiments, the amplitude adjustment module is configured to:
the audio processing module receives sound vibration effect data sent by an audio hardware abstraction layer, wherein the sound vibration effect data comprises the audio data and the vibration data;
In response to detecting a volume adjustment event, the audio hardware abstraction layer sending a first adjustment parameter to the audio processing module based on the volume adjustment event;
The audio processing module adjusts the amplitude of the sound vibration effect data based on the first adjustment parameter.
In some embodiments, the drive control module is configured to:
the audio processing module performs channel separation on the sound vibration effect data with the amplitude adjusted to obtain target audio data and target vibration data;
the speaker is driven to operate based on the target audio data, and the vibration motor is driven to operate based on the target vibration data.
In some embodiments, the amplitude adjustment module is configured to:
in the process of processing the audio data and the vibration data, in response to detecting a volume adjustment event, the application framework layer adjusts the amplitude of the vibration data based on the volume adjustment event;
the audio hardware abstraction layer sends audio adjustment parameters to the audio processing module based on the volume adjustment event;
the audio processing module adjusts the amplitude of the audio data based on the audio adjustment parameter.
In some embodiments, the drive control module is configured to:
the vibration hardware abstraction layer drives the vibration motor to operate based on the amplitude-adjusted vibration data, and the audio processing module drives the speaker to operate based on the amplitude-adjusted audio data.
In some embodiments, the amplitude adjustment module is configured to:
The application framework layer determines a vibration adjustment parameter based on the volume adjustment event and adjusts the amplitude of the vibration data based on the vibration adjustment parameter.
In some embodiments, the sound vibration effect data includes haptic OGG data.
In a third aspect, embodiments of the present disclosure provide an electronic device, including:
a speaker;
A vibration motor; and
A controller comprising a processor and a memory storing computer instructions for causing the processor to perform the method according to any embodiment of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a storage medium storing computer instructions for causing a computer to perform the method according to any embodiment of the first aspect.
The audio and vibration cooperative control method of the embodiments of the present disclosure includes: in the process of processing audio data and vibration data, in response to detecting a volume adjustment event, synchronously adjusting the amplitudes of the audio data and the vibration data based on the volume adjustment event; driving a speaker to operate based on the amplitude-adjusted audio data; and driving a vibration motor to operate based on the amplitude-adjusted vibration data. In the embodiments of the present disclosure, the output intensities of audio playback and the vibration effect can be adjusted synchronously based on a single volume adjustment event, without entering a settings page to adjust the motor vibration intensity separately, which greatly simplifies the user's adjustment operation and improves the user's operating experience.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the prior art, the drawings required in the detailed description are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present disclosure, and that a person of ordinary skill in the art may derive other drawings from them without inventive effort.
Fig. 1 is a block diagram of an electronic device in accordance with some embodiments of the present disclosure.
Fig. 2 is a system architecture diagram of an electronic device in accordance with some embodiments of the present disclosure.
Fig. 3 is a flow chart of a method of audio and vibration cooperative control in accordance with some embodiments of the present disclosure.
Fig. 4 is a flow chart of a method of audio and vibration cooperative control in accordance with some embodiments of the present disclosure.
Fig. 5 is a schematic diagram of an audio and vibration cooperative control method according to some embodiments of the present disclosure.
Fig. 6 is a flow chart of a method of audio and vibration cooperative control in accordance with some embodiments of the present disclosure.
Fig. 7 is a schematic diagram of a method of audio and vibration cooperative control in accordance with some embodiments of the present disclosure.
Fig. 8 is a block diagram of a configuration of an audio and vibration cooperative control apparatus according to some embodiments of the present disclosure.
Fig. 9 is a block diagram of an electronic device in accordance with some embodiments of the present disclosure.
Detailed Description
The following description of the embodiments of the present disclosure is made clearly and fully with reference to the accompanying drawings. It is evident that the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure, are within the scope of this disclosure. In addition, the technical features in the different embodiments of the present disclosure described below may be combined with each other as long as they do not conflict.
In today's electronic devices, there are more and more scenarios in which audio and haptics cooperate, and users can obtain a better sensory experience in such scenarios.
For example, in a shooting game scenario, audio-haptic cooperation allows the device body to vibrate in sync with the gunshot when the user fires, simulating a more realistic shooting effect. As another example, for music with a strong sense of rhythm, audio-haptic cooperation allows the device to vibrate along with the beats of the music, such as the drum hits, so that the user obtains an immersive music experience.
At present, more and more manufacturers are designing schemes for the synchronized output of audio and vibration effects. However, in the related technical schemes, the audio signal path and the vibration effect signal path are independent of each other, so a user cannot synchronously adjust the vibration effect with a single volume adjustment.
For example, in one exemplary scenario, a user plays music with the audio and vibration effects working together. When the user wants to increase both the volume and the vibration effect to obtain a stronger audio-visual experience, the volume and the vibration intensity can only be adjusted independently of each other: the user may control the volume with the volume keys and then enter a settings page to adjust the vibration intensity.
For another example, in a scenario where the user has set a ringtone vibration effect, when a phone call comes in, the user can use the volume keys to turn down the ringtone loudness, but the vibration intensity cannot be reduced synchronously.
It can be seen that, in the related art, the output intensities of audio and vibration cannot be adjusted synchronously in audio and vibration effect cooperation scenarios, so the audio and the vibration effect must be adjusted separately; the operation is cumbersome and the user experience suffers.
Based on this, embodiments of the present disclosure provide an audio and vibration cooperative control method and apparatus, an electronic device, and a storage medium, which aim to realize synchronous adjustment of the audio and vibration output intensities, so that a user can synchronously adjust the vibration intensity with a single volume adjustment operation.
The audio and vibration cooperative control method of the embodiments of the present disclosure can be applied to any electronic device having a speaker and a vibration motor; for example, the electronic device may be a smartphone, a tablet computer, a smart watch, or the like, which is not limited by the present disclosure.
For example, fig. 1 shows an example block diagram of an electronic device 100 in some embodiments of the present disclosure, described below in conjunction with fig. 1.
As shown in fig. 1, in some embodiments, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), among others. The different processing units may be separate devices or may be integrated in one or more processors.
The controller may be the neural hub and command center of the electronic device 100. The controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some implementations, the memory in the processor 110 is a cache. This memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory, which avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving system efficiency.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. In some implementations, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
In an embodiment of the present disclosure, the processor 110 causes the electronic device to perform the audio and vibration cooperative control method of the present disclosure by executing instructions stored in the internal memory 121.
The electronic device 100 may implement audio functions, such as music playing and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, the application processor, and the like.
The audio module 170 is used to convert digital audio data into analog audio electrical signal output and also to convert analog audio electrical signal input into digital audio data. For example, the audio module 170 is configured to convert an analog audio electrical signal output by the microphone 170C into digital audio data.
The audio module 170 may further include an audio processing module, among others. The audio processing module is used for performing audio processing on the digital audio data in the video recording mode so as to generate audio. The audio module 170 may also be used to encode and decode audio data.
In some implementations, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert analog audio electrical signals into sound signals. The electronic device 100 can play music or conduct hands-free calls through the speaker 170A. The receiver 170B, also referred to as an "earpiece," is used to convert analog audio electrical signals into sound signals. When the electronic device 100 answers a telephone call or a voice message, the voice can be heard by placing the receiver 170B close to the ear. The microphone 170C, also referred to as a "mic," is used to convert sound signals into analog audio electrical signals. When making a call or sending voice information, the user can speak close to the microphone 170C, inputting a sound signal into the microphone 170C.
It is to be understood that the structure illustrated in the embodiments of the present disclosure does not constitute a specific limitation on the electronic device 100. In other embodiments of the present disclosure, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiment of the disclosure exemplifies an Android system with a hierarchical architecture, and illustrates a software structure of the electronic device 100.
Fig. 2 shows a block diagram of a software system of the electronic device 100 in some embodiments of the present disclosure, where the android architecture divides the system into several layers, each layer having a distinct role and division of work, and the layers communicate with each other through software interfaces.
In some embodiments, the android system is divided into four layers, from top to bottom, an application layer (Applications), an application framework layer (Application Framework), a hardware abstraction layer (HAL, hardware Abstraction Layer), and a Kernel layer (Linux Kernel), respectively.
The application layer is the layer of the Android system that interacts with the user. It comprises the various applications of the mobile phone, such as system applications including Settings, Contacts, Messages, Phone, Gallery, Calendar, and Browser, as well as third-party applications such as WeChat. The application layer may access the services provided by the application framework layer according to the needs of different applications. Generally, applications are developed in the Java language by calling the application programming interface (API) provided by the application framework layer.
The application framework layer (Framework) provides the API and programming framework for the applications of the application layer and includes a number of predefined functions. For example, as shown in fig. 2, the application framework layer may include a media service (Media Server), an audio service (Audio Server), a camera service (Camera Server), a system service (System Server), and the like.
The HAL is an interface layer between the operating system kernel and the hardware circuitry. It includes, but is not limited to, an audio hardware abstraction layer (Audio HAL) and a vibration hardware abstraction layer (Vibrator HAL). The Audio HAL is used to process the audio stream, for example performing volume adjustment on the audio stream, and the Vibrator HAL is used to process the vibration effect stream.
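For orientation only, the sketch below models the two HAL roles just described as minimal C++ interfaces. This is a hedged illustration: real Android HALs are defined through AIDL/HIDL with richer signatures, and the type and method names here are assumptions for this document's discussion, not the actual API.

```cpp
#include <vector>

// Minimal stand-in for the Audio HAL role described above.
struct IAudioHal {
    virtual ~IAudioHal() = default;
    // Processes the audio stream, e.g. applying a volume adjustment.
    virtual void processAudioStream(std::vector<float>& audioStream) = 0;
};

// Minimal stand-in for the Vibrator HAL role described above.
struct IVibratorHal {
    virtual ~IVibratorHal() = default;
    // Processes the vibration effect stream.
    virtual void processVibrationStream(std::vector<float>& effectStream) = 0;
};
```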
The kernel layer provides the core system services of the operating system, such as security, memory management, process management, the network protocol stack, and the driver model, all based on the Linux kernel. The Linux kernel also acts as an abstraction layer between the hardware and the software stack. This layer contains many drivers related to the electronic device, mainly: the display driver; the Linux-based frame buffer driver; the keyboard driver as an input device; the flash driver based on memory technology devices; the camera driver; the audio driver; the motor driver; the Bluetooth driver; the Wi-Fi driver; and so on.
With the above system architecture in mind, the implementation of the audio and vibration cooperative control method of the present disclosure is described below with reference to fig. 3.
As shown in fig. 3, in some embodiments, the audio and vibration cooperative control method of the present disclosure includes:
S310, in the process of processing the audio data and the vibration data, in response to detecting a volume adjustment event, synchronously adjusting the amplitudes of the audio data and the vibration data based on the volume adjustment event.
S320, driving the speaker to operate based on the amplitude-adjusted audio data, and driving the vibration motor to operate based on the amplitude-adjusted vibration data.
In the embodiments of the present disclosure, the electronic device can process the audio data and the vibration data based on the system architecture described above: the audio data can drive the speaker of the electronic device to produce sound, and the vibration data can drive the vibration motor of the electronic device to vibrate.
In the embodiments of the present disclosure, detecting a volume adjustment event while the speaker and the vibration motor are operating indicates that the user wants to adjust the audio loudness (i.e., the volume), for example to turn the playback volume up or down.
In this case, the embodiments of the present disclosure do not adjust only the audio data based on this event; the vibration data is adjusted synchronously based on the same event, so that a single volume adjustment event synchronously adjusts both the audio playback loudness and the vibration intensity.
It can be understood that the loudness of audio playback, i.e., the volume, is essentially determined by the vibration amplitude of the speaker diaphragm, so the amplitude of the audio data represents the volume, and adjusting the audio playback volume means adjusting the amplitude of the audio data. Similarly, the intensity of motor vibration is essentially the magnitude of the motor's vibration amplitude, so the amplitude of the vibration data represents the vibration intensity, and adjusting the motor's vibration intensity means adjusting the amplitude of the vibration data.
The volume adjustment event refers to an event triggered by the user to adjust the audio playback volume. Taking a smartphone as an example, the user may trigger a volume adjustment event by pressing a volume key on the side of the smartphone, or by touching the screen of the smartphone.
Volume adjustment events generally include volume up and volume down, and in the embodiments of the present disclosure the user can synchronously adjust the audio playback volume and the vibration intensity through these different volume adjustment events.
In the embodiments of the present disclosure, when a volume adjustment event is detected, an adjustment parameter may first be determined from the volume adjustment event, where the adjustment parameter represents the coefficient by which the amplitudes of the audio data and the vibration data are to be adjusted. The amplitudes of the audio data and the vibration data can then be adjusted synchronously according to this adjustment parameter, after which the speaker and the vibration motor are driven based on the adjusted audio data and vibration data, completing the adjustment of the output intensities of audio playback and the vibration effect.
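To make this flow concrete, here is a minimal C++ sketch of how a single volume adjustment event could scale both streams with one coefficient. It assumes floating-point sample buffers and a linear volume-to-gain mapping; all function and parameter names are hypothetical, not the patent's actual implementation.

```cpp
#include <vector>

// Hypothetical helper: map a volume level (0..maxLevel) to a linear
// gain coefficient, positively correlated with the volume value.
float gainFromVolume(int volumeLevel, int maxLevel) {
    if (maxLevel <= 0) return 0.0f;
    return static_cast<float>(volumeLevel) / static_cast<float>(maxLevel);
}

// Scale every sample of a buffer in place by the same coefficient.
void scaleAmplitude(std::vector<float>& samples, float k) {
    for (float& s : samples) s *= k;
}

// On a single volume adjustment event, adjust BOTH streams with one
// coefficient before they are handed to the output drivers.
void onVolumeAdjustEvent(int volumeLevel, int maxLevel,
                         std::vector<float>& audioData,
                         std::vector<float>& vibrationData) {
    const float k = gainFromVolume(volumeLevel, maxLevel);
    scaleAmplitude(audioData, k);      // audio playback loudness
    scaleAmplitude(vibrationData, k);  // motor vibration intensity
}
```

Scaling both buffers with the same coefficient is what the disclosure calls synchronous adjustment; the two embodiments described below differ mainly in where this multiplication takes place.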
It can be understood that, in the related-art audio and vibration effect cooperative control schemes, the audio playback volume and the vibration effect intensity are controlled independently of each other. When a user wants to adjust the volume and the vibration effect at the same time, the user must first adjust the volume to a suitable level with the volume keys and then enter a settings page to further adjust the motor vibration intensity; the operation is cumbersome and the adjustment efficiency is low.
In the embodiments of the present disclosure, the output intensities of audio playback and the vibration effect can be adjusted synchronously based on a single volume adjustment event. For example, the motor vibration intensity is reduced synchronously when the user turns the volume down and increased synchronously when the user turns the volume up, so the user does not need to enter a settings page to adjust the motor vibration intensity separately, which greatly simplifies the user's adjustment operation and improves the user's operating experience.
It should be noted that, in the related art, there are mainly two ways of combining audio and vibration effects:
1) The audio data and the vibration effect data are packaged in the same sound vibration effect file.
For example, an OGG file can contain both audio data and vibration effect data. The OGG file may include multiple channels; for example, such an OGG music file may include 3 channels, where two channels carry audio data and the third channel carries vibration data.
2) The audio data and the vibration effect data are two independent files.
For example, the audio data may be packaged as a file in an audio format, such as a file with the suffix .mp3, and the vibration effect data may be packaged as a file in a vibration effect format, such as a file with the suffix .bin.
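For concreteness, the two packagings could be represented in memory roughly as follows. The 3-channel interleaved layout and the struct names are illustrative assumptions for this sketch, not specifications of the actual file formats.

```cpp
#include <vector>

// Mode 1: one sound vibration effect stream, e.g. decoded from a
// 3-channel haptic OGG file. Samples are interleaved per frame:
// [audioL, audioR, vibration, audioL, audioR, vibration, ...]
struct SoundVibrationStream {
    static constexpr int kChannels = 3;  // 2 audio + 1 vibration
    std::vector<float> interleaved;
};

// Mode 2: two independent streams, e.g. decoded from an .mp3 audio
// file and a .bin vibration effect file.
struct SeparateStreams {
    std::vector<float> audio;      // audio samples
    std::vector<float> vibration;  // motor drive waveform samples
};
```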
In the embodiments of the present disclosure, different system control schemes are provided for these two cases to realize synchronous adjustment of the audio data and the vibration effect data; they are described below in turn.
As shown in fig. 4 and 5, in some embodiments, the audio and vibration cooperative control method of the present disclosure includes:
S410, the audio processing module receives the sound vibration effect data sent by the audio hardware abstraction layer.
In the embodiments of the present disclosure, the sound vibration effect data refers to a data file containing both audio data and vibration data. For example, in one example, the sound vibration effect data may be haptic OGG data, that is, the aforementioned OGG data file, which includes 3 channels in total: 2 channels of audio data and 1 channel of vibration data. This is well understood by those skilled in the art and is not described repeatedly in this disclosure.
Referring to fig. 5, the sound vibration effect data generated by the application layer (APP) is sent to the application framework layer (Framework), and the application framework layer may call the API interfaces of the media service (Media Server) and the audio service (Audio Server) to perform the corresponding data processing on the sound vibration effect data.
The audio hardware abstraction layer (Audio HAL) included in the hardware abstraction layer (HAL) serves as an interface layer between the system and the hardware circuitry, and can transmit the received sound vibration effect data to the audio processing module ADSP.
The ADSP, i.e., the audio DSP (Digital Signal Processor), is a piece of hardware for audio data processing and is the audio processing module described in this disclosure. In the embodiments of the present disclosure, the amplitude adjustment of the audio data and the vibration data can be realized using the audio processing module ADSP.
When the ADSP processes the sound vibration effect data, it can perform channel separation on the data to separate out the audio data and the vibration data it contains. The ADSP may then send the audio data and the vibration data to the audio driver and the motor driver of the kernel layer (Kernel), respectively, and these drivers drive the speaker and the vibration motor of the hardware layer (Hardware), realizing synchronized output of audio playback and the vibration effect.
S420, in response to detecting the volume adjustment event, the audio hardware abstraction layer sends a first adjustment parameter to the audio processing module based on the volume adjustment event.
With reference to fig. 5, in the embodiments of the present disclosure, when a user triggers a volume adjustment event, for example by pressing a volume key or through a touch operation, the application framework layer (Framework) detects the volume adjustment event and then issues it to the audio hardware abstraction layer (Audio HAL).
The audio hardware abstraction layer (Audio HAL) may determine a first adjustment parameter k from the volume adjustment event, where the first adjustment parameter k represents the coefficient by which the amplitude of the sound vibration effect data is to be adjusted. The audio hardware abstraction layer (Audio HAL) may then send the first adjustment parameter k to the audio processing module ADSP by means of a parameter-setting call.
In some embodiments, the first adjustment parameter k may be positively correlated with the volume value; that is, the higher the volume value corresponding to the volume adjustment event, the larger the value of the first adjustment parameter k, and conversely, the lower the volume value corresponding to the volume adjustment event, the smaller the value of the first adjustment parameter k.
S430, the audio processing module adjusts the amplitude of the sound vibration effect data based on the first adjustment parameter.
Referring to fig. 5, after receiving the first adjustment parameter k, the audio processing module ADSP may adjust the amplitude of the current sound vibration effect data according to the first adjustment parameter k. For example, in one example, the amplitude adjustment of the sound vibration effect data may be achieved by multiplying the amplitude values of the current sound vibration effect data by the first adjustment parameter k.
It can be understood that, because the sound vibration effect data contains both the audio data and the vibration effect data, adjusting the amplitude based on the first adjustment parameter k adjusts the amplitude of the audio data and the amplitude of the vibration effect data at the same time, so the audio playback volume and the vibration intensity can be adjusted synchronously by a single volume adjustment operation.
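A minimal sketch of this adjustment step, reusing the interleaved-buffer assumption from the earlier sketch (the actual ADSP firmware interface is not specified in this disclosure):

```cpp
#include <vector>

// Multiply every sample of the interleaved sound vibration effect
// buffer by the first adjustment parameter k. Because the audio and
// vibration channels share the buffer, one pass scales both, which
// is what lets a single volume event adjust them synchronously.
void adjustSoundVibrationAmplitude(std::vector<float>& interleaved, float k) {
    for (float& sample : interleaved) {
        sample *= k;
    }
}
```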
S440, the audio processing module performs channel separation on the amplitude-adjusted sound vibration effect data to obtain target audio data and target vibration data.
As described above, after the audio processing module ADSP performs the amplitude adjustment on the sound vibration effect data using the first adjustment parameter k, the data is still a packaged stream containing both the audio data and the vibration data. The audio processing module ADSP therefore needs to perform channel separation on the sound vibration effect data to obtain the amplitude-adjusted audio data and vibration data, that is, the target audio data and the target vibration data of the present disclosure.
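Under the same assumed frame layout [audio left, audio right, vibration], the channel separation step might look like the following sketch; the layout is illustrative, not mandated by the disclosure.

```cpp
#include <cstddef>
#include <vector>

// De-interleave a 3-channel buffer into stereo target audio data
// (interleaved L/R) and mono target vibration data.
void separateChannels(const std::vector<float>& interleaved,
                      std::vector<float>& targetAudio,
                      std::vector<float>& targetVibration) {
    const std::size_t frames = interleaved.size() / 3;
    targetAudio.clear();
    targetVibration.clear();
    targetAudio.reserve(frames * 2);
    targetVibration.reserve(frames);
    for (std::size_t i = 0; i < frames; ++i) {
        targetAudio.push_back(interleaved[3 * i]);      // left channel
        targetAudio.push_back(interleaved[3 * i + 1]);  // right channel
        targetVibration.push_back(interleaved[3 * i + 2]);
    }
}
```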
S450, driving the speaker to operate based on the target audio data, and driving the vibration motor to operate based on the target vibration data.
As shown in connection with fig. 5, the audio processing module ADSP issues the target audio data and the target vibration data to the corresponding drivers of the Kernel layer (Kernel), respectively.
For example, in one example, the target audio data is sent through the TDM Port interface to the audio driver (PA Driver) of the kernel layer, which drives the speaker of the hardware layer (Hardware) to produce sound, realizing the volume adjustment of audio playback. Meanwhile, the target vibration data is issued through the SWR Port interface to the motor driver (Haptic Driver) of the kernel layer, which drives the vibration motor of the hardware layer (Hardware) to vibrate, realizing the adjustment of the vibration intensity.
According to the embodiments of the present disclosure, the output intensities of audio playback and vibration can be adjusted synchronously based on a single volume adjustment event. For example, when a user plays a shooting game, turning the volume up also makes the vibration stronger, and the user does not need to enter a settings page to adjust the motor vibration intensity separately, which greatly simplifies the user's adjustment operation and improves the user's operating experience.
As shown in fig. 6 and 7, in some embodiments, the audio and vibration cooperative control method of the present disclosure includes:
S610, in the process of processing the audio data and the vibration data, in response to detecting a volume adjustment event, the application framework layer adjusts the amplitude of the vibration data based on the volume adjustment event.
As shown in fig. 7, in the embodiments of fig. 6 and 7, unlike the embodiments of fig. 4 and 5, the audio data and the vibration data are two separate data files. In one example, the audio data may be an audio file with the suffix .mp3, and the vibration data may be a vibration file with the suffix .bin.
For the vibration data, since its data stream does not pass through the audio processing module ADSP, the amplitude adjustment of the vibration data cannot be realized in the ADSP. Thus, in the embodiments of the present disclosure, the amplitude of the vibration data may be adjusted at the application framework layer (Framework).
For example, in the embodiment of fig. 7, when the application framework layer (Framework) detects a volume adjustment event, the amplitude of the vibration data may be adjusted by an adjustment module located at the application framework layer according to the volume adjustment event.
In some embodiments, the adjustment module may determine a vibration adjustment parameter j based on the volume adjustment event, where the vibration adjustment parameter j represents the coefficient by which the amplitude of the vibration data is to be adjusted. The vibration adjustment parameter j may be positively correlated with the volume value; that is, the higher the volume value corresponding to the volume adjustment event, the larger the value of the vibration adjustment parameter j, and conversely, the lower the volume value, the smaller the value of the vibration adjustment parameter j.
After the vibration adjustment parameter j is determined, the amplitude of the vibration data can be adjusted according to the vibration adjustment parameter j. For example, in one example, the amplitude adjustment of the vibration data may be achieved by multiplying the amplitude values of the current vibration data by the vibration adjustment parameter j.
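A sketch of this framework-layer step is below. On Android the framework layer is typically implemented in Java or Kotlin; C++ is used here only for consistency with the other sketches, and the class and method names are hypothetical.

```cpp
#include <vector>

// Hypothetical framework-layer adjustment module: derive the vibration
// adjustment parameter j from the volume level (positively correlated),
// then scale the separate vibration waveform in place.
class VibrationAdjuster {
public:
    explicit VibrationAdjuster(int maxVolumeLevel)
        : maxVolumeLevel_(maxVolumeLevel) {}

    // j grows as the volume value rises and shrinks as it is lowered.
    float parameterForVolume(int volumeLevel) const {
        if (maxVolumeLevel_ <= 0) return 0.0f;
        return static_cast<float>(volumeLevel) /
               static_cast<float>(maxVolumeLevel_);
    }

    void adjust(std::vector<float>& vibrationData, int volumeLevel) const {
        const float j = parameterForVolume(volumeLevel);
        for (float& amp : vibrationData) amp *= j;
    }

private:
    int maxVolumeLevel_;
};
```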
S620, the audio hardware abstraction layer sends the audio adjustment parameters to the audio processing module based on the volume adjustment event.
S630, the audio processing module adjusts the amplitude of the audio data based on the audio adjustment parameters.
As shown in fig. 7, the general processing flow for the audio data is the same as in the foregoing embodiments of fig. 4 and 5. The difference is that, since the audio data no longer contains vibration data, after the audio hardware abstraction layer (Audio HAL) detects the volume adjustment event and sends the audio adjustment parameter to the audio processing module ADSP for amplitude adjustment, no channel separation is needed; the amplitude-adjusted audio data is sent directly to the kernel layer (Kernel). This will be understood by those skilled in the art and is not described repeatedly in this disclosure.
S640, the vibration hardware abstraction layer drives the vibration motor to operate based on the amplitude-adjusted vibration data, and the audio processing module drives the speaker to operate based on the amplitude-adjusted audio data.
Referring to fig. 7, for the vibration data, after the application framework layer (Framework) performs the amplitude adjustment on the vibration data, the adjusted vibration data may be issued to the vibration hardware abstraction layer (Vibrator HAL). Serving as an interface layer between the system and the hardware circuitry, the vibration hardware abstraction layer (Vibrator HAL) may send the received vibration data to the motor driver (Haptic Driver) of the kernel layer (Kernel), which then drives the vibration motor of the hardware layer (Hardware) to vibrate, realizing the adjustment of the vibration intensity.
For the audio data, after the audio processing module ADSP performs the amplitude adjustment on the audio data, the adjusted audio data can be issued to the audio hardware abstraction layer (Audio HAL). Serving as an interface layer between the system and the hardware circuitry, the audio hardware abstraction layer (Audio HAL) can send the received audio data to the audio driver (PA Driver) of the kernel layer (Kernel), which then drives the speaker of the hardware layer (Hardware) to produce sound, realizing the volume adjustment.
According to the embodiments of the present disclosure, the output intensities of audio playback and vibration can be adjusted synchronously based on a single volume adjustment event. For example, when a user plays a shooting game, turning the volume up also makes the vibration stronger, and the user does not need to enter a settings page to adjust the motor vibration intensity separately, which greatly simplifies the user's adjustment operation and improves the user's operating experience.
In some embodiments, the present disclosure provides an audio and vibration cooperative control apparatus that may be applied to any electronic device having a speaker and a vibration motor, e.g., a smart phone, a tablet computer, a smart watch, etc., without limitation.
As shown in fig. 8, in some embodiments, an audio and vibration cooperative control apparatus of an example of the present disclosure includes:
An amplitude adjustment module 10 configured to synchronously adjust the magnitudes of the audio data and the vibration data based on a volume adjustment event in response to detecting the volume adjustment event during the processing of the audio data and the vibration data;
A drive control module 20 configured to drive the speaker to operate based on the amplitude-adjusted audio data and to drive the vibration motor to operate based on the amplitude-adjusted vibration data.
In some embodiments, the amplitude adjustment module 10 is configured to:
the audio processing module receives sound vibration effect data sent by an audio hardware abstraction layer, wherein the sound vibration effect data comprises the audio data and the vibration data;
In response to detecting a volume adjustment event, the audio hardware abstraction layer sending a first adjustment parameter to the audio processing module based on the volume adjustment event;
The audio processing module adjusts the amplitude of the sound vibration effect data based on the first adjustment parameter.
In some embodiments, the drive control module 20 is configured to:
the audio processing module performs channel separation on the sound vibration effect data with the amplitude adjusted to obtain target audio data and target vibration data;
the speaker is driven to operate based on the target audio data, and the vibration motor is driven to operate based on the target vibration data.
In some embodiments, the amplitude adjustment module 10 is configured to:
in the process of processing the audio data and the vibration data, in response to detecting a volume adjustment event, the application framework layer adjusts the amplitude of the vibration data based on the volume adjustment event;
the audio hardware abstraction layer sends audio adjustment parameters to the audio processing module based on the volume adjustment event;
the audio processing module adjusts the amplitude of the audio data based on the audio adjustment parameter.
In some embodiments, the drive control module 20 is configured to:
the vibration hardware abstraction layer drives the vibration motor to operate based on the amplitude-adjusted vibration data, and the audio processing module drives the speaker to operate based on the amplitude-adjusted audio data.
In some embodiments, the amplitude adjustment module 10 is configured to:
The application framework layer determines a vibration adjustment parameter based on the volume adjustment event and adjusts the amplitude of the vibration data based on the vibration adjustment parameter.
In some embodiments, the sound vibration effect data includes haptic OGG data.
According to the embodiments of the present disclosure, the output intensities of audio playback and vibration can be adjusted synchronously based on a single volume adjustment event. For example, when a user plays a shooting game, turning the volume up also makes the vibration stronger, and the user does not need to enter a settings page to adjust the motor vibration intensity separately, which greatly simplifies the user's adjustment operation and improves the user's operating experience.
In some embodiments, the present disclosure provides an electronic device comprising:
a speaker;
A vibration motor; and
A controller comprising a processor and a memory storing computer instructions for causing the processor to perform the method of any of the embodiments described above.
In some embodiments, the present disclosure provides a storage medium storing computer instructions for causing a computer to perform the method of any of the above embodiments.
Specifically, fig. 9 shows a schematic structural diagram of an apparatus 600 suitable for implementing the methods of the present disclosure, by which the corresponding functions of the processor and the storage medium described above may be implemented.
As shown in fig. 9, the apparatus 600 includes a processor 601, which can perform various appropriate actions and processes according to a program stored in a memory 602 or loaded into the memory 602 from a storage section 608. The memory 602 also stores the various programs and data required for the operation of the apparatus 600. The processor 601 and the memory 602 are connected to each other via a bus 604, and an input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read from it can be installed into the storage section 608 as needed.
In particular, according to the embodiments of the present disclosure, the method processes described above may be implemented as a computer software program. For example, the embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program containing program code for performing the methods described above. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be apparent that the above embodiments are merely examples given for clarity of illustration and are not limiting. Other variations or modifications will be apparent to a person of ordinary skill in the art from the above description, and it is neither necessary nor possible to exhaustively list all embodiments here. Obvious variations or modifications derived from the above remain within the scope of the present disclosure.

Claims (10)

1. An audio and vibration cooperative control method is characterized by comprising the following steps:
in the process of processing audio data and vibration data, in response to detecting a volume adjustment event, synchronously adjusting the amplitudes of the audio data and the vibration data based on the volume adjustment event; and
driving a speaker to operate based on the amplitude-adjusted audio data, and driving a vibration motor to operate based on the amplitude-adjusted vibration data.
2. The method according to claim 1, wherein the synchronously adjusting the amplitudes of the audio data and the vibration data based on the volume adjustment event in response to detecting the volume adjustment event during the processing of the audio data and the vibration data comprises:
the audio processing module receives sound vibration effect data sent by an audio hardware abstraction layer, wherein the sound vibration effect data comprises the audio data and the vibration data;
In response to detecting a volume adjustment event, the audio hardware abstraction layer sending a first adjustment parameter to the audio processing module based on the volume adjustment event;
The audio processing module adjusts the amplitude of the sound vibration effect data based on the first adjustment parameter.
3. The method according to claim 2, wherein the driving a speaker to operate based on the amplitude-adjusted audio data and driving a vibration motor to operate based on the amplitude-adjusted vibration data comprises:
the audio processing module performs channel separation on the sound vibration effect data with the amplitude adjusted to obtain target audio data and target vibration data;
the speaker is driven to operate based on the target audio data, and the vibration motor is driven to operate based on the target vibration data.
4. The method according to claim 1, wherein the synchronously adjusting the amplitudes of the audio data and the vibration data based on the volume adjustment event in response to detecting the volume adjustment event during the processing of the audio data and the vibration data comprises:
in the process of processing the audio data and the vibration data, in response to detecting a volume adjustment event, the application framework layer adjusts the amplitude of the vibration data based on the volume adjustment event;
the audio hardware abstraction layer sends audio adjustment parameters to the audio processing module based on the volume adjustment event;
the audio processing module adjusts the amplitude of the audio data based on the audio adjustment parameter.
5. The method according to claim 4, wherein the driving a speaker to operate based on the amplitude-adjusted audio data and driving a vibration motor to operate based on the amplitude-adjusted vibration data comprises:
the vibration hardware abstraction layer drives the vibration motor to operate based on the amplitude-adjusted vibration data, and the audio processing module drives the speaker to operate based on the amplitude-adjusted audio data.
6. The method of claim 4, wherein the application framework layer adjusting the amplitude of the vibration data based on the volume adjustment event comprises:
The application framework layer determines a vibration adjustment parameter based on the volume adjustment event and adjusts the amplitude of the vibration data based on the vibration adjustment parameter.
7. The method according to claim 2, wherein the sound vibration effect data includes haptic OGG data.
8. An audio and vibration cooperative control device, characterized by comprising:
An amplitude adjustment module configured to synchronously adjust magnitudes of the audio data and the vibration data based on a volume adjustment event in response to detecting the volume adjustment event during processing of the audio data and the vibration data;
and a drive control module configured to drive the speaker to operate based on the amplitude-adjusted audio data, and to drive the vibration motor to operate based on the amplitude-adjusted vibration data.
9. An electronic device, comprising:
a speaker;
A vibration motor; and
A controller comprising a processor and a memory, the memory storing computer instructions for causing the processor to perform the method of any one of claims 1 to 7.
10. A storage medium having stored thereon computer instructions for causing a computer to perform the method according to any one of claims 1 to 7.
CN202311598971.6A 2023-11-27 2023-11-27 Audio and vibration cooperative control method and device Pending CN117998006A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311598971.6A CN117998006A (en) 2023-11-27 2023-11-27 Audio and vibration cooperative control method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311598971.6A CN117998006A (en) 2023-11-27 2023-11-27 Audio and vibration cooperative control method and device

Publications (1)

Publication Number Publication Date
CN117998006A 2024-05-07

Family

ID=90886106

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311598971.6A Pending CN117998006A (en) 2023-11-27 2023-11-27 Audio and vibration cooperative control method and device

Country Status (1)

Country Link
CN (1) CN117998006A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination