CN117750271A - Intelligent cabin system and method for playing sound by using vehicle-mounted sound equipment - Google Patents

Intelligent cabin system and method for playing sound by using vehicle-mounted sound equipment

Info

Publication number
CN117750271A
Authority
CN
China
Prior art keywords
channel
sound
vehicle
playing
speakers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311669808.4A
Other languages
Chinese (zh)
Inventor
雷金亮
吴成贵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weilai Automobile Technology Anhui Co Ltd
Original Assignee
Weilai Automobile Technology Anhui Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weilai Automobile Technology Anhui Co Ltd filed Critical Weilai Automobile Technology Anhui Co Ltd
Priority to CN202311669808.4A priority Critical patent/CN117750271A/en
Publication of CN117750271A publication Critical patent/CN117750271A/en
Pending legal-status Critical Current


Abstract

The present application relates to automotive electronics, and more particularly to an intelligent cabin system, a method of playing sound using a car audio device, and a computer readable storage medium for implementing the method. According to one aspect of the present application, there is provided a method of playing sound using a car audio apparatus, including: A. acquiring an audio signal comprising channel signals associated with one or more channels; B. extracting sound characteristic information from the audio signal, wherein the sound characteristic information comprises at least the direction corresponding to each channel and the intensity of each channel signal; C. determining a playing strategy based on the sound characteristic information and configuration information of the vehicle-mounted sound equipment, wherein the configuration information comprises the positions of the speakers of the vehicle-mounted sound equipment in the vehicle, and the playing strategy indicates the corresponding speakers for playing the channel signals associated with the respective channels; and D. outputting, based on the playing strategy, the channel signals associated with the respective channels to the corresponding speakers.

Description

Intelligent cabin system and method for playing sound by using vehicle-mounted sound equipment
Technical Field
The present application relates to automotive electronics, and more particularly to an intelligent cabin system, a method of playing sound using a car audio device, and a computer readable storage medium for implementing the method.
Background
Vehicle-mounted audio equipment is designed specifically for the vehicle interior, where it must cope with a complex acoustic environment and tight space constraints; it is typically installed in the vehicle and plays sound through devices such as an audio head unit and speakers. Many current vehicle applications are not specifically optimized for vehicle audio devices, so the resulting playback quality is mediocre. In addition, many mobile applications are used in driving scenarios; if the vehicle-mounted audio equipment can be used as the playback device, the playback quality can be improved and the power consumption of the mobile terminal can be reduced.
Disclosure of Invention
An object of the present application is to provide a method of playing sound using a car audio apparatus, which enables audio signals from various sound sources to be played through the car audio apparatus.
According to one aspect of the present application, there is provided a method of playing sound using a car audio apparatus, including:
A. acquiring an audio signal comprising channel signals associated with one or more channels;
B. extracting sound characteristic information from the audio signal, wherein the sound characteristic information at least comprises the direction corresponding to the sound channel and the intensity of the channel signal;
C. determining a playing strategy based on the sound characteristic information and configuration information of the vehicle-mounted sound equipment, wherein the configuration information comprises positions of speakers in the vehicle-mounted sound equipment in a vehicle, and the playing strategy indicates corresponding speakers for playing channel signals associated with various channels; and
D. based on the play strategy, channel signals associated with the respective channels are output to the respective speakers.
Optionally, in the above method, the audio signal source is an audio processor within the intelligent cockpit system or an audio signal generating device external to the intelligent cockpit system.
Optionally, in the above method, the sound characteristic information further includes frequency characteristics of channel signals associated with respective channels.
Optionally, in the above method, the configuration information further includes a usable state of the speaker.
Optionally, in the above method, step C includes determining a matched speaker for each channel based on the configuration information.
Further optionally, in the above method, the matching includes a direction corresponding to the channel being coincident with or close to a direction of the speaker relative to the sound receiver.
Still further optionally, in the above method, the playing policy further indicates a processing mode of channel signals associated with respective channels, and step C further includes:
for the case where the direction corresponding to the channel is close to the direction of the speaker with respect to the sound receiver, the channel signal associated with the channel is processed based on the processing mode.
It is yet another object of the present application to provide an intelligent cockpit system that utilizes vehicle audio equipment to effect playback of audio signals from various sound sources.
According to another aspect of the present application, there is provided an intelligent cockpit system comprising:
an audio device including one or more speakers disposed inside the vehicle;
a control unit configured to perform the following operations:
A. acquiring an audio signal comprising channel signals associated with one or more sound sources;
B. extracting sound characteristic information from the audio signal, wherein the sound characteristic information at least comprises the direction represented by the sound channel and the intensity of the channel signal;
C. determining a playing strategy based on the sound characteristic information and configuration information of the vehicle-mounted sound equipment, wherein the configuration information comprises positions of speakers in the vehicle-mounted sound equipment in a vehicle, and the playing strategy indicates corresponding speakers for playing channel signals associated with various channels; and
D. based on the play strategy, channel signals associated with the respective channels are output to the respective speakers.
Optionally, in the above intelligent cockpit system, the audio signal source is an audio processor within the intelligent cockpit system or an audio signal generating device external to the intelligent cockpit system.
Optionally, in the above intelligent cabin system, the sound characteristic information further includes frequency characteristics of channel signals associated with respective channels.
Optionally, in the above intelligent cabin system, the configuration information further includes a usable state of the speaker.
Optionally, in the above intelligent cockpit system, the control unit is configured to perform operation C in the following manner: based on the configuration information, a matched speaker is determined for each channel.
Further optionally, in the above intelligent cabin system, the matching includes that a direction corresponding to the sound channel is identical or close to a direction of the speaker relative to the sound receiver.
Still further optionally, in the above intelligent cockpit system, the playing policy further indicates a processing mode of channel signals associated with each channel, and operation C further includes: for the case where the direction corresponding to the channel is close to the direction of the speaker with respect to the sound receiver, the channel signal associated with the channel is processed based on the processing mode.
According to a further aspect of the present application, there is provided a computer readable storage medium having instructions stored therein, characterized in that the method as described above is implemented by execution of the instructions by a processor.
In some embodiments of the present application, audio signals from various audio signal sources are played with the car audio device, which, by virtue of the powerful computing resources of the intelligent cabin domain, can provide the user with a good listening experience. Furthermore, in some embodiments, the sound characteristic information includes the direction represented by each channel as well as the intensity and frequency characteristics of each channel signal, and the configuration information includes not only the positions of the speakers inside the vehicle but also their usable states; by means of this information, the play strategy of the speakers can be flexibly controlled.
Drawings
The foregoing and/or other aspects and advantages of the present application will become more apparent and more readily appreciated from the following description of the various aspects taken in conjunction with the accompanying drawings in which like or similar elements are designated with the same reference numerals. The drawings include:
fig. 1 is a schematic diagram of an automotive electronics system architecture.
Fig. 2 is a schematic diagram of a smart cockpit system.
Fig. 3 illustrates a block diagram of a computing device that may implement the functionality of the control unit of fig. 2.
Fig. 4 illustrates a block diagram of another computing device that may implement the functionality of the control unit of fig. 2.
Fig. 5 is a flow chart of a method of playing sound using a car audio device according to some embodiments of the present application.
Fig. 6 is a flowchart of a play policy generation method according to further embodiments of the present application.
Detailed Description
The present application is described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments of the application are shown. This application may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. These embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the application to those skilled in the art.
In this specification, terms such as "comprising" and "including" mean that, in addition to the elements and steps directly and explicitly recited in the description and claims, the subject matter of the present application does not exclude the presence of other elements and steps that are not directly or explicitly recited.
Unless specifically stated otherwise, terms such as "first" and "second" do not denote a sequential order of elements in terms of time, space, size, etc., but rather are merely used to distinguish one element from another.
The functional domains of an automotive electronics system can generally be divided in various ways. FIG. 1 is a schematic diagram of an automotive electronics system architecture illustrating an exemplary manner of functional domain partitioning. As shown in fig. 1, automotive electronics system 10 includes an autopilot domain 110, a smart cockpit domain 120, a body domain 130, a powertrain domain 140, and a chassis domain 150, which are illustratively in bus communication (e.g., Ethernet) with one another. It should be noted that the above-described division of the functional domains is only exemplary, and other divisions are possible, such as integrating the body domain into the intelligent cabin domain.
In the architecture of the automotive electronics system shown in fig. 1, autopilot domain 110 provides the data processing and decision-making capabilities required for autonomous driving, including processing of data from millimeter-wave radar, cameras, lidar, GPS, inertial navigation, and the like. The autopilot domain also safeguards the vehicle's underlying core data and networking data while the vehicle is in the autonomous driving state.
The intelligent cabin domain 120 is used to perform the functions of an electronic system of the car cabin, which may be, for example, an integrated system integrating instrument information and multimedia entertainment information display, or an on-board central control screen.
The body domain 130 performs overall control of the body functions; it may be, for example, a conventional body control module (BCM), or it may further integrate, on that basis, functions such as a passive entry passive start (PEPS) system, window anti-pinch, and air-conditioning control.
The power domain 140 is used to optimize and control the vehicle powertrain. The chassis domain 150 performs vehicle running control and includes, for example, an electric power steering system (EPS), a body stability control system (ESC), an electric brake booster, an airbag control system, an air suspension, a vehicle speed sensor, and the like.
Fig. 2 is a schematic diagram of a smart cockpit system that may be used, for example, to implement the functions of the smart cockpit domain of fig. 1.
Referring to fig. 2, the intelligent cabin system 21 is shown to include an on-board communication unit 211, an audio device 212, and a control unit 213. It is noted that for the sake of descriptive simplicity, fig. 2 does not show other units of the intelligent cabin system (such as navigation devices, man-machine interaction interfaces and actuators such as air-conditioning controllers, seat controllers, etc.), but the omitted description of these units does not constitute an obstacle for a person skilled in the art to understand and implement the technical solutions of the present application.
In the smart cockpit system 21 shown in fig. 2, the control unit 213 may receive various signals (e.g., audio digital signals) from the external device 22 (e.g., a game machine, a cellular phone, a tablet computer, and a wearable device) via the in-vehicle communication unit 211, or generate audio digital signals using an internal audio signal processing chip.
Referring to FIG. 2, audio device 212 includes power amplifiers 2121-1 through 2121-n and associated speakers 2122-1 through 2122-n. The audio signals (e.g., audio analog signals) from the control unit 213 are amplified by the power amplifiers 2121-1 to 2121-n and then output to the respective speakers 2122-1 to 2122-n. It should be noted that speakers 2122-1 through 2122-n may be mounted in various locations including, but not limited to, the instrument desk, under the seats, near the A- and C-pillars, the door jambs, the trunk, and the like.
The control unit 213 may be implemented using the computing device shown in fig. 3. In particular, the computing device 30 shown in FIG. 3 includes at least one memory 310, at least one processor 320, a computer program 330 stored on the memory 310 and executable on the processor 320, and a communication interface 340.
Memory 310 includes non-volatile memory such as flash memory, ROM, hard drives, magnetic disks, optical disks, and various types of dynamic random access memory. Processor 320 may be various types of processors including, for example, but not limited to, a Central Processing Unit (CPU), a Digital Signal Processor (DSP), a neural Network Processor (NPU), a Graphics Processor (GPU), a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), and the like. The computer program 330, when executed on the processor 320, may implement or cause various operations or steps including one or more steps of a method of playing sound using an in-vehicle audio device as will be described below.
The communication interface 340 may be coupled with the in-vehicle communication unit 211 so as to enable communication between the control unit 213 and the external device 22.
It should be noted that the configuration of the computing device shown in fig. 3 is merely exemplary. In other embodiments, memory 310 may be omitted when the memory provided by the processor itself has sufficient capacity to store programs and data.
Fig. 4 shows a block diagram of another computing device that can implement the functions of the control unit in fig. 2, in which the transmission of audio signals is indicated by a thick solid line and the transmission of control signals is indicated by a thin solid line. The computing device 41 shown includes an audio signal generation unit 411 and an audio signal processing unit 412.
Referring to fig. 4, the audio signal generating unit 411 includes an audio signal generating chip 4111, a memory 4112, a DMA controller 4113 for performing access control on a DMA buffer area in the memory 4112, and an audio data transmission interface 4114 (e.g., an audio data interface I2S Tx based on the I2S (Inter-IC Sound) protocol). The audio data generated by the audio signal generating chip 4111 are output to an external storage device 42 (e.g., a hard disk); the audio data are then synthesized by the AudioFlinger component of the Android platform, and the synthesized audio signal is output to the DMA buffer area in the memory 4112. The audio signal in the DMA buffer area is sent to the audio signal processing unit 412 via the audio data transmission interface 4114 under the control of the DMA controller 4113.
The audio signal processing unit 412 includes an audio data receiving interface 4121 (e.g., an I2S (Inter-IC Sound) protocol-based audio data interface I2S Rx) and a digital signal processor 4122. The digital signal processor 4122 performs various processes on the audio signal received by the audio data receiving interface 4121, and outputs the processed audio signal to the power amplifiers (e.g., the power amplifiers 2121-1 to 2121-n in fig. 2) of the acoustic device.
Fig. 5 is a flow chart of a method of playing sound using a car audio device according to some embodiments of the present application. The embodiment shown in fig. 5 may be implemented by means of software and necessary general-purpose hardware (e.g. a combination of a general-purpose computer system and a computer program), or may be implemented by means of dedicated hardware. In the following description, the control unit shown in fig. 3 and 4 will be taken as an example of an apparatus for implementing the method shown in fig. 5.
The method shown in fig. 5 begins at step 510. In this step, the control unit receives an audio signal from an audio signal source external to the intelligent cockpit system (e.g., an audio digital signal received from an audio signal generating device external to the intelligent cockpit system), or generates an audio signal inside the control unit (e.g., an audio digital signal generated by the audio signal generating unit 411).
The audio signal acquired by the control unit (hereinafter referred to as AS) may comprise audio signals corresponding to one or more channels, which may be referred to as channel signals (hereinafter referred to as AS1, ..., ASn). Corresponding identifiers may be appended to the individual channel signals to distinguish them.
Each channel generally corresponds to a particular sound source. For example, in a stereo signal, left and right channels record sound sources from the left and right sides, respectively. It should be noted that a one-to-one correspondence between the number of channels and the number of sound sources is not necessary. For example, in a multi-channel surround sound system, there may be multiple channels corresponding to the same sound source to achieve a richer sound field effect. Furthermore, for stereo systems, the left and right channels generally correspond to the main left and right sound sources, but may also contain information of other sound sources.
Step 520 is then entered to extract sound characteristic information from the audio signal AS by running a computer program on a processor of the control unit, such as the processor 320 in fig. 3 or the digital signal processor 4122 in fig. 4. In some embodiments, the sound characteristic information comprises, for example, the type of each channel (which may be, for example, a front left channel, a front right channel, a center channel, a rear left channel, a rear right channel, a low frequency enhancement channel, etc.) and the intensity of the channel signals AS1, ..., ASn associated with the respective channels. The type of a channel essentially describes or defines the direction of the sound source. In some embodiments, the intensity of the ith channel signal ASi may be measured by means of the total power of the individual frequency components of the signal or by the amplitude of the signal; a common measurement method is to calculate the power of the signal as the mean of the squares of the signal amplitudes.
In some other embodiments, the sound characteristic information may also include frequency characteristics of the individual channel signals AS1, ..., ASn; these characteristics may be extracted using various signal processing methods (e.g., Fourier transforms). The frequency characteristics included in the sound characteristic information of each channel signal may be one or more of the following: fundamental frequency, harmonics, frequency distribution, peak frequency, bandwidth, and the like.
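As a minimal sketch of how such features could be computed (in Python with NumPy; the function name extract_features and its parameters are illustrative assumptions and do not come from the patent), the intensity can be taken as the mean of the squared sample amplitudes and a simple frequency characteristic, the peak frequency, can be read off a Fourier transform:

```python
import numpy as np

def extract_features(channel_signal: np.ndarray, sample_rate: int) -> dict:
    """Return the intensity and peak frequency of one channel signal."""
    # Intensity measured as the mean of the squared amplitudes (average power),
    # as described in the text above.
    intensity = float(np.mean(channel_signal.astype(float) ** 2))

    # Frequency characteristics obtained with a Fourier transform; only the
    # peak frequency is kept here as an example.
    spectrum = np.abs(np.fft.rfft(channel_signal))
    freqs = np.fft.rfftfreq(len(channel_signal), d=1.0 / sample_rate)
    peak_frequency = float(freqs[np.argmax(spectrum)])

    return {"intensity": intensity, "peak_frequency": peak_frequency}
```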
After completion of step 520, the flow shown in fig. 5 proceeds to step 530. In this step, by running a computer program on the processor of the control unit, a corresponding play policy is determined or generated based on the sound characteristic information and the configuration information of the in-vehicle acoustic apparatus. The play policy described here may indicate the speakers (such as speakers 2122-1 through 2122-n in fig. 2) used to play the individual channel signals AS1, ..., ASn.
Further, the play policy may also be used to indicate the processing mode of the respective channel signal. Specifically, in one processing mode, a corresponding channel signal may be directly output to the audio device; in another processing mode, the channel signal may be preprocessed and then the preprocessed channel signal may be output to the audio device. The manner of preprocessing includes, for example, adjusting the amplitude and phase of the different frequency components of the channel signal, etc.
In some embodiments, the configuration information includes the positions of the speakers of the vehicle audio device (e.g., audio device 212 in fig. 2) in the vehicle interior (including, but not limited to, the instrument desk, under the seats, near the A- and C-pillars, the door jambs, the trunk, and the like). Optionally, the configuration information further includes the usable state, dynamic range, etc. of each speaker. Illustratively, when the flag value of the usable state is '1', the corresponding speaker is permitted to play channel signals, and when the flag value is '0', the corresponding speaker is prohibited from playing channel signals. The usable state may be used for specific applications; for example, in order to avoid distracting the driver while driving, the speakers mounted on the instrument desk may be prevented from playing audio signals generated by game software, so the flag value of their usable state may be set to '0'.
In other embodiments, the configuration information may be stored in the form of a two-dimensional table, for example as shown in table 1.
TABLE 1
Speaker numbering Speaker mounting position Usable state
#1 Left part of instrument desk 0
#2 Right part of instrument desk 1
#3 Left C column 1
#4 Right C column 1
#5 Trunk box 1
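As an illustrative sketch (in Python; the field names speaker_id, position and usable are assumptions introduced here, not terms from the patent), the configuration information of table 1 could be held as a simple in-memory structure and filtered by the usable-state flag:

```python
SPEAKER_CONFIG = [
    {"speaker_id": "#1", "position": "instrument desk, left",  "usable": 0},
    {"speaker_id": "#2", "position": "instrument desk, right", "usable": 1},
    {"speaker_id": "#3", "position": "left C-pillar",          "usable": 1},
    {"speaker_id": "#4", "position": "right C-pillar",         "usable": 1},
    {"speaker_id": "#5", "position": "trunk",                  "usable": 1},
]

def usable_speakers(config=SPEAKER_CONFIG):
    """Speakers whose usable-state flag is '1' and are therefore permitted to play channel signals."""
    return [entry for entry in config if entry["usable"] == 1]
```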
In other embodiments, the play strategy may also be represented in the form of a two-dimensional table, for example as shown in table 2.
TABLE 2
Channel numbering Speaker numbering Processing mode
*1 #2 b
*2 #2 a
*3 #4 a
*4 #5 c
In table 2, the signal of the channel numbered *1 is played with speaker #2 using processing mode 'b'; the signals of the channels with the other numbers are handled analogously. 'a', 'b' and 'c' denote types of processing modes for the channel signals. For example, type 'a' indicates that the channel signal is not processed, i.e., it is output directly to the power amplifier, while types 'b' and 'c' indicate different ways of preprocessing the channel signal.
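A corresponding sketch of table 2 (again in Python; the names PLAY_STRATEGY and lookup_route are illustrative assumptions) maps each channel number to the selected speaker and processing mode, so that step 540 only needs a lookup:

```python
PLAY_STRATEGY = {
    "*1": {"speaker": "#2", "mode": "b"},
    "*2": {"speaker": "#2", "mode": "a"},
    "*3": {"speaker": "#4", "mode": "a"},
    "*4": {"speaker": "#5", "mode": "c"},
}

def lookup_route(channel_id: str, strategy=PLAY_STRATEGY):
    """Return (speaker number, processing mode) for one channel signal."""
    entry = strategy[channel_id]
    return entry["speaker"], entry["mode"]
```

For example, lookup_route("*1") returns ("#2", "b"), matching the first row of table 2.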
After completion of step 530, the flow shown in fig. 5 proceeds to step 540. In this step, by running a computer program on the processor of the control unit, the channel signals are output in accordance with the play policy determined in step 530. Illustratively, the individual channel signals AS1, ..., ASn are amplified by power amplifiers (e.g., power amplifiers 2121-1 to 2121-n in fig. 2) and output to the corresponding speakers (e.g., speakers 2122-1 to 2122-n in fig. 2).
Fig. 6 is a flowchart of a play policy generation method according to further embodiments of the present application. The various steps of the illustrated method may be implemented by running a computer program on a processor of a control unit, such as the processor 320 in fig. 3 and the digital signal processor 4122 in fig. 4.
As shown in fig. 6, step 610 is entered after step 520 of fig. 5 is completed. In this step, for the ith channel signal ASi (the 1st channel signal AS1 when step 610 is first performed), the configuration information (e.g., table 1) is searched for speakers that match the channel corresponding to that channel signal.
For example, assume the channel signal ASi is the audio signal of the left channel (i.e., the sound source of channel signal ASi is located on the left side of the sound receiver, e.g., a user or recording apparatus inside the car). The directions of the left part of the instrument desk and of the left C-pillar relative to the sound receiver can then be regarded as coinciding with the direction corresponding to the left channel, so speaker #1 and speaker #3 in table 1 are determined as matched speakers. Further, in this example, if the usable state of speaker #1 is 'prohibited' (flag value '0'), only speaker #3 may be determined as the speaker matched with channel signal ASi.
In another example, assume the channel signal ASi is an audio signal of an upper channel. As can be seen from table 1, no speaker is installed above the sound receiver; in other words, no speaker has a direction relative to the sound receiver that coincides with the direction corresponding to that channel. For this case, in some embodiments, matched speakers may be determined based on a "similarity" principle. Specifically, the directions of speakers #3 and #4 mounted on the left and right C-pillars relative to the sound receiver (roughly upper left and upper right) are relatively close to the roof-down direction, so speakers #3 and #4 can be determined as the speakers matched with channel signal ASi.
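One way such matching could be implemented in step 610 is sketched below (Python; the angular coordinates assigned to the speakers and the tolerance thresholds are assumptions introduced for illustration — the patent only requires that the directions coincide or be close):

```python
import math

# Assumed (azimuth, elevation) in degrees of each usable speaker relative to the
# sound receiver; these values are illustrative, not taken from the patent.
SPEAKER_DIRECTIONS = {
    "#2": (30.0, 0.0),     # instrument desk, right
    "#3": (-110.0, 40.0),  # left C-pillar
    "#4": (110.0, 40.0),   # right C-pillar
    "#5": (180.0, 0.0),    # trunk
}

def angular_distance(a, b):
    """Rough angular separation between two (azimuth, elevation) pairs in degrees."""
    d_az = min(abs(a[0] - b[0]), 360.0 - abs(a[0] - b[0]))  # wrap azimuth difference
    return math.hypot(d_az, abs(a[1] - b[1]))

def match_speakers(channel_direction, exact_tol=15.0, near_tol=90.0):
    """Return matched speakers: exact matches if any, otherwise 'close' matches."""
    exact = [s for s, d in SPEAKER_DIRECTIONS.items()
             if angular_distance(channel_direction, d) <= exact_tol]
    if exact:
        return exact, "coincident"
    near = [s for s, d in SPEAKER_DIRECTIONS.items()
            if angular_distance(channel_direction, d) <= near_tol]
    return near, "close"
```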
Step 620 is then entered to determine the processing mode of the channel signal ASi. In some embodiments, if the direction of the channel associated with channel signal ASi coincides with the direction of the speaker relative to the sound receiver (for example, the channel is the left channel), the processing mode may be set to a first mode, in which the channel signal is output directly to the audio device. If the direction of the channel associated with channel signal ASi is merely close to the direction of the speaker relative to the sound receiver (for example, the channel is the upper channel), the processing mode may be set to a second mode, in which the channel signal is preprocessed and the preprocessed channel signal is then output to the audio device. Optionally, the second processing mode may be further subdivided into a plurality of sub-modes, each corresponding to a preprocessing approach. Various algorithms that simulate sound source signals at different positions can be applied to the preprocessing of the channel signals. These algorithms include, but are not limited to, algorithms based on higher-order spectra, algorithms based on acoustic propagation models, artificial neural network algorithms, algorithms based on morphological filters, and the like.
In some embodiments of the present application, the following preprocessing approach may be employed: in the preprocessed channel signal, the fundamental frequency component remains unchanged, while the power of the harmonic components (e.g., the 2-fold, 4-fold, and 8-fold harmonic components) is reduced geometrically, harmonic by harmonic, relative to the fundamental component. Compared with algorithms based on higher-order spectra, acoustic propagation models, artificial neural networks, or morphological filters, the computational complexity of this preprocessing approach is greatly reduced. Further, studies conducted by the inventors show that the listening experience obtained with this preprocessing approach also reaches a satisfactory level in the in-vehicle application scenario (the average user can hardly notice the difference between a signal played in this simulated manner and a signal played by a speaker in the corresponding position).
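A sketch of this preprocessing under stated assumptions (Python with NumPy; the attenuation ratio, the width of the band kept around each harmonic, and the name preprocess_channel are all choices made here for illustration — the text above only states that the harmonic power falls off geometrically while the fundamental is left unchanged):

```python
import numpy as np

def preprocess_channel(signal, sample_rate, fundamental_hz,
                       ratio=0.5, harmonics=(2, 4, 8)):
    """Keep the fundamental unchanged; attenuate harmonic power geometrically."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    half_width = 0.1 * fundamental_hz  # assumed half-width of each harmonic band

    for k, order in enumerate(harmonics, start=1):
        band = np.abs(freqs - order * fundamental_hz) <= half_width
        # Power scales with the square of amplitude, so an amplitude factor of
        # sqrt(ratio**k) reduces the power of the k-th harmonic band by ratio**k.
        spectrum[band] *= np.sqrt(ratio ** k)

    return np.fft.irfft(spectrum, n=len(signal))
```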
After completion of step 620, the flow shown in fig. 6 proceeds to step 630. In step 630, it will be determined whether a play policy has been generated for all channel signals, if so, step 540 in fig. 5 is entered, otherwise, step 640 is entered.
In step 640, the sequence number of the channel signal is incremented (i = i + 1) so that the subsequent steps operate on the next channel signal. After completion of step 640, the flow shown in fig. 6 returns to step 610 to generate play strategies for the other channel signals in the audio signal AS.
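Putting the loop of fig. 6 together, a skeleton driver could iterate over the channel signals and assemble one play-policy entry per channel. The two helper functions below are trivial stand-ins (assumptions) for the matching of step 610 and the mode selection of step 620, included only so the loop runs on its own:

```python
def match_speaker_stub(channel_type: str) -> str:
    """Stand-in for step 610: pick a speaker for the channel's direction."""
    return {"left": "#3", "right": "#4"}.get(channel_type, "#5")

def choose_mode_stub(channel_type: str) -> str:
    """Stand-in for step 620: direct output for exact matches, preprocessing otherwise."""
    return "a" if channel_type in ("left", "right") else "b"

def build_play_policy(channel_types):
    """One pass of the fig. 6 loop over all channel signals (i = i + 1 per iteration)."""
    policy = {}
    for i, channel_type in enumerate(channel_types, start=1):
        policy[f"*{i}"] = {
            "speaker": match_speaker_stub(channel_type),
            "mode": choose_mode_stub(channel_type),
        }
    return policy

# Example: build_play_policy(["left", "right", "upper"]) yields entries *1..*3.
```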
According to another aspect of the present application, there is also provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs one or more of the steps comprised in the methods described above with reference to fig. 5 and 6.
Computer-readable storage media, as referred to in this application, include various types of computer storage media and can be any available media that can be accessed by a general-purpose or special-purpose computer. By way of example, a computer-readable storage medium may comprise RAM, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, a CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other transitory or non-transitory medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer or processor. Combinations of the above should also be included within the scope of computer-readable storage media. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
Those of skill would appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both.
To demonstrate interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Implementation of such functionality in hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Although only a few specific embodiments of this application have been described, those skilled in the art will appreciate that this application may be embodied in many other forms without departing from the spirit or scope thereof. Accordingly, the illustrated examples and embodiments are to be considered as illustrative and not restrictive, and the application is intended to cover various modifications and substitutions without departing from the spirit and scope of the application as defined by the appended claims.
The embodiments and examples set forth herein are presented to best explain embodiments of the present technology and its particular applications, and thereby to enable those skilled in the art to make and use the application. However, those skilled in the art will recognize that the foregoing description and examples have been presented for the purposes of illustration and example only. The description as set forth is not intended to be exhaustive or to limit the application to the precise form disclosed.

Claims (15)

1. A method for playing sound using a vehicle audio device, comprising:
A. acquiring an audio signal comprising channel signals associated with one or more channels;
B. extracting sound characteristic information from the audio signal, wherein the sound characteristic information at least comprises the direction corresponding to the sound channel and the intensity of the channel signal;
C. determining a playing strategy based on the sound characteristic information and configuration information of the vehicle-mounted sound equipment, wherein the configuration information comprises positions of speakers in the vehicle-mounted sound equipment in a vehicle, and the playing strategy indicates corresponding speakers for playing channel signals associated with various channels; and
D. based on the play strategy, channel signals associated with the respective channels are output to the respective speakers.
2. The method of claim 1, wherein the audio signal source is an audio processor within the intelligent cockpit system or an audio signal generating device external to the intelligent cockpit system.
3. The method of claim 1, wherein the sound characteristic information further includes frequency characteristics of channel signals associated with respective channels.
4. The method of claim 1, wherein the configuration information further comprises a usable status of the speaker.
5. A method according to claim 1 or 3, wherein step C comprises determining a matching speaker for each channel based on the configuration information.
6. The method of claim 5, wherein the matching includes a direction corresponding to the channel being coincident with or near a direction of a speaker relative to a sound receiver.
7. The method of claim 6, wherein the play policy further indicates a processing mode of channel signals associated with respective channels, and step C further comprises:
for the case where the direction corresponding to the channel is close to the direction of the speaker with respect to the sound receiver, the channel signal associated with the channel is processed based on the processing mode.
8. An intelligent cockpit system comprising:
an audio device including one or more speakers disposed inside the vehicle;
a control unit configured to perform the following operations:
A. acquiring an audio signal comprising channel signals associated with one or more sound sources;
B. extracting sound characteristic information from the audio signal, wherein the sound characteristic information at least comprises the direction represented by the sound channel and the intensity of the channel signal;
C. determining a playing strategy based on the sound characteristic information and configuration information of the vehicle-mounted sound equipment, wherein the configuration information comprises positions of speakers in the vehicle-mounted sound equipment in a vehicle, and the playing strategy indicates corresponding speakers for playing channel signals associated with various channels; and
D. based on the play strategy, channel signals associated with the respective channels are output to the respective speakers.
9. The intelligent cockpit system of claim 8 wherein the audio signal source is an audio processor within the intelligent cockpit system or an audio signal generating device external to the intelligent cockpit system.
10. The intelligent cabin system of claim 8, wherein the sound characteristic information further comprises frequency characteristics of channel signals associated with the respective channels.
11. The intelligent cockpit system of claim 8 wherein the configuration information further includes a usable status of the speakers.
12. The intelligent cabin system of claim 8 or 10, wherein the control unit is configured to perform operation C in the following manner: based on the configuration information, a matched speaker is determined for each channel.
13. The intelligent cabin system of claim 12, wherein the matching comprises the direction in which the channel corresponds being coincident with or near the direction of the speaker relative to the sound recipient.
14. The intelligent cabin system of claim 13, wherein the play policy further indicates a processing mode of channel signals associated with the respective channels, and operation C further comprises: for the case where the direction corresponding to the channel is close to the direction of the speaker with respect to the sound receiver, the channel signal associated with the channel is processed based on the processing mode.
15. A computer readable storage medium, in which a computer program adapted to be downloaded by a mobile terminal is stored, which computer program, when being executed by a processor, will carry out the method according to any one of claims 1-7.
CN202311669808.4A 2023-12-01 2023-12-01 Intelligent cabin system and method for playing sound by using vehicle-mounted sound equipment Pending CN117750271A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311669808.4A CN117750271A (en) 2023-12-01 2023-12-01 Intelligent cabin system and method for playing sound by using vehicle-mounted sound equipment


Publications (1)

Publication Number Publication Date
CN117750271A true CN117750271A (en) 2024-03-22

Family

ID=90249974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311669808.4A Pending CN117750271A (en) 2023-12-01 2023-12-01 Intelligent cabin system and method for playing sound by using vehicle-mounted sound equipment

Country Status (1)

Country Link
CN (1) CN117750271A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination