CN115767854A - Acousto-optic synchronous linkage method and acousto-optic synchronous linkage system - Google Patents
- Publication number
- CN115767854A CN115767854A CN202211432747.5A CN202211432747A CN115767854A CN 115767854 A CN115767854 A CN 115767854A CN 202211432747 A CN202211432747 A CN 202211432747A CN 115767854 A CN115767854 A CN 115767854A
- Authority
- CN
- China
- Prior art keywords
- light
- signal
- extraction period
- acousto
- frequency information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60Q—ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
- B60Q3/00—Arrangement of lighting devices for vehicle interiors; Lighting devices specially adapted for vehicle interiors
- B60Q3/70—Arrangement of lighting devices for vehicle interiors; Lighting devices specially adapted for vehicle interiors characterised by the purpose
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60Q—ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
- B60Q3/00—Arrangement of lighting devices for vehicle interiors; Lighting devices specially adapted for vehicle interiors
- B60Q3/80—Circuits; Control arrangements
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F21—LIGHTING
- F21V—FUNCTIONAL FEATURES OR DETAILS OF LIGHTING DEVICES OR SYSTEMS THEREOF; STRUCTURAL COMBINATIONS OF LIGHTING DEVICES WITH OTHER ARTICLES, NOT OTHERWISE PROVIDED FOR
- F21V23/00—Arrangement of electric circuit elements in or on lighting devices
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F21—LIGHTING
- F21V—FUNCTIONAL FEATURES OR DETAILS OF LIGHTING DEVICES OR SYSTEMS THEREOF; STRUCTURAL COMBINATIONS OF LIGHTING DEVICES WITH OTHER ARTICLES, NOT OTHERWISE PROVIDED FOR
- F21V33/00—Structural combinations of lighting devices with other articles, not otherwise provided for
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05B—ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
- H05B47/00—Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
- H05B47/10—Controlling the light source
- H05B47/155—Coordinated control of two or more light sources
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05B—ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
- H05B47/00—Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
- H05B47/10—Controlling the light source
- H05B47/165—Controlling the light source following a pre-assigned programmed sequence; Logic control [LC]
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05B—ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
- H05B47/00—Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
- H05B47/10—Controlling the light source
- H05B47/17—Operational modes, e.g. switching from manual to automatic mode or prohibiting specific operations
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F21—LIGHTING
- F21W—INDEXING SCHEME ASSOCIATED WITH SUBCLASSES F21K, F21L, F21S and F21V, RELATING TO USES OR APPLICATIONS OF LIGHTING DEVICES OR SYSTEMS
- F21W2106/00—Interior vehicle lighting devices
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02B—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
- Y02B20/00—Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
- Y02B20/40—Control techniques providing energy savings, e.g. smart controller or presence detection
Abstract
The present disclosure provides an acousto-optic synchronous linkage method, which includes: receiving a working mode selected by a user; receiving a sound signal from a media source; converting the sound signal into a light signal, wherein the light signal is related at least to frequency information in the sound signal; converting the sound signal into an audio signal; and executing the selected working mode using the light signal and the audio signal. The method couples the music playback state with the atmosphere lamps in the automobile, realizing real-time integrated acousto-optic output so that the atmosphere created by the atmosphere lamps and the sound effects reinforce each other, thereby greatly improving the in-vehicle atmosphere lamp experience.
Description
Technical Field
The present disclosure relates to automotive technologies, and more particularly, to an acousto-optic synchronous linkage method and an acousto-optic synchronous linkage system.
Background
With the continuous development of the automobile industry, consumers demand ever higher configurations and quality from their vehicles. To create an in-car atmosphere, atmosphere lamps have gradually appeared in vehicle configurations at every level; the atmosphere lamp is a vitally important part of the visual ambience inside the automobile.
In the prior art, most atmosphere lamps require a separately configured music rhythm controller: the multimedia processor sends audio data to the music rhythm controller, which analyzes the audio, derives the corresponding control instructions, and sends them to the atmosphere lamps so that the lamps follow the rhythm of the music. In practice, however, transmitting the audio data to the music rhythm controller for analysis easily introduces rhythm delay in the atmosphere lamps and can even cause signal loss, so that the lamps ultimately fail to follow the musical rhythm.
Disclosure of Invention
An object of the present invention is to solve at least one of the above problems and disadvantages in the prior art.
In view of the above problems, a first aspect of the present disclosure provides an acousto-optic synchronous linkage method, which includes:
receiving a working mode selected by a user;
receiving a sound signal from a media source;
converting the sound signal into a light signal, wherein the light signal is at least related to frequency information in the sound signal;
converting the sound signal into an audio signal; and
executing the selected working mode using the light signal and the audio signal.
According to an exemplary embodiment of the invention, converting the sound signal into the light signal further comprises:
extracting the frequency information in the sound signal;
converting the frequency information into proportional information of RGB three primary colors by using a color mixing algorithm;
generating the light signal using the proportion information of the three primary colors of RGB and the brightness function of the light signal.
According to an exemplary embodiment of the present invention, extracting the frequency information in the sound signal further comprises:
extracting the frequency information according to an extraction period, wherein the extraction period is an integral multiple of the sampling period of the sound signal.
According to an exemplary embodiment of the present invention, converting the frequency information into the proportional information of the three primary colors of RGB using the color mixing algorithm further comprises:
processing the frequency information in the extraction period by using the color mixing algorithm, and obtaining the proportion information of three primary colors of RGB in the extraction period according to the following formula:
g(R)=ΔF1/F*a1+F1/F*b1;
wherein g(R) represents the proportion of red light in an extraction period T, F represents the total energy density corresponding to the frequency information in the extraction period T, F1 is the energy density in a first frequency band of the frequency information, ΔF1 represents the change in the energy density in the first frequency band in the extraction period T, and a1 and b1 are weighting coefficients;
g(G)=ΔF2/F*a2+F2/F*b2;
wherein g(G) represents the proportion of green light in the extraction period T, F2 is the energy density in a second frequency band of the frequency information, ΔF2 represents the change in the energy density in the second frequency band in the extraction period T, and a2 and b2 are weighting coefficients;
g(B)=ΔF3/F*a3+F3/F*b3;
wherein g(B) represents the proportion of blue light in the extraction period T, F3 is the energy density in a third frequency band of the frequency information, ΔF3 represents the change in the energy density in the third frequency band in the extraction period T, and a3 and b3 are weighting coefficients;
and g(R) + g(G) + g(B) = 1.
According to an exemplary embodiment of the present invention, generating the light signal using the ratio information of the three primary colors of RGB and the luminance function of the light signal further comprises:
obtaining a brightness function L (T) of the light signal by using the energy density change of the frequency information in the extraction period T:
L(T)=k(ΔF1+ΔF2+ΔF3);
wherein L (T) represents the brightness of the lamp light in the extraction period T, and k is a constant coefficient representing the light intensity;
obtaining a color dynamic response algorithm model in the extraction period T by using the proportion information of the RGB three primary colors in the extraction period T and the brightness function of the light signal:
C(T)=g(R)L(T)+g(G)L(T)+g(B)L(T),
wherein C (T) represents a light signal including a light color and a light intensity during the extraction period T.
According to an exemplary embodiment of the present invention, the extraction period T is less than 500ms.
According to an exemplary embodiment of the present invention, receiving the sound signal from the media source further comprises:
receiving a sound signal decoded by the media source, wherein the decoded sound signal further comprises a tag representing a type of music,
wherein the ranges of the first frequency band, the second frequency band and the third frequency band are determined according to the music type.
According to an exemplary embodiment of the present invention, performing the selected operation mode using the light signal and the audio signal further comprises:
in case a first mode is selected, transmitting the light signal to all light channels and transmitting the audio signal to all sound channels;
in the event that a second mode is selected, sending the light signal to the selected one or more of the light channels and sending the audio signal to the selected one or more of the acoustic channels;
in case the third mode is selected, the light signal is sent to the selected one or more of the light channels and the audio signal is sent to all sound channels.
In view of the above problems, a second aspect of the present disclosure provides an acousto-optic synchronous linkage system, which includes:
a receiving unit for receiving an operation mode selected by a user and a sound signal from a media source;
a data processing unit for:
converting the sound signal into a light signal, wherein the light signal is at least related to frequency information in the sound signal;
converting the sound signal into an audio signal; and
executing the selected working mode using the light signal and the audio signal.
According to an exemplary embodiment of the present invention, the converting the sound signal into the light signal by the data processing unit further comprises:
extracting the frequency information in the sound signal;
converting the frequency information into proportional information of three primary colors of RGB by using a color mixing algorithm;
generating the light signal using the proportion information of the three primary colors of RGB and the brightness function of the light signal.
According to an exemplary embodiment of the present invention, the extracting of the frequency information in the sound signal by the data processing unit further comprises:
extracting the frequency information according to an extraction period, wherein the extraction period is an integral multiple of the sampling period of the sound signal.
According to an exemplary embodiment of the present invention, the data processing unit converting the frequency information into the scale information of three primary colors of RGB using a color mixing algorithm further comprises:
processing the frequency information in the extraction period by using the color mixing algorithm, and obtaining the proportion information of three primary colors of RGB in the extraction period according to the following formula:
g(R)=ΔF1/F*a1+F1/F*b1;
wherein g(R) represents the proportion of red light in an extraction period T, F represents the total energy density corresponding to the frequency information in the extraction period T, F1 is the energy density in a first frequency band of the frequency information, ΔF1 represents the change in the energy density in the first frequency band in the extraction period T, and a1 and b1 are weighting coefficients;
g(G)=ΔF2/F*a2+F2/F*b2;
wherein g(G) represents the proportion of green light in the extraction period T, F2 is the energy density in a second frequency band of the frequency information, ΔF2 represents the change in the energy density in the second frequency band in the extraction period T, and a2 and b2 are weighting coefficients;
g(B)=ΔF3/F*a3+F3/F*b3;
wherein g(B) represents the proportion of blue light in the extraction period T, F3 is the energy density in a third frequency band of the frequency information, ΔF3 represents the change in the energy density in the third frequency band in the extraction period T, and a3 and b3 are weighting coefficients;
and g(R) + g(G) + g(B) = 1.
According to an exemplary embodiment of the present invention, the data processing unit generating the light signal using the ratio information of the three primary colors of RGB and the luminance function of the light signal further comprises:
obtaining a brightness function L (T) of the light signal by using the energy density change of the frequency information in the extraction period T:
L(T)=k(ΔF1+ΔF2+ΔF3);
wherein L (T) represents the brightness of the lamp light in the extraction period T, and k is a constant coefficient representing the light intensity;
obtaining a color dynamic response algorithm model in the extraction period T by using the proportion information of the RGB three primary colors in the extraction period T and the brightness function of the light signal:
C(T)=g(R)L(T)+g(G)L(T)+g(B)L(T),
wherein C (T) represents a light signal including a light color and a light intensity during the extraction period T.
According to an exemplary embodiment of the invention, the system further comprises: a plurality of acoustic channels and a plurality of optical channels, wherein each acoustic channel is configured to receive the audio signal and convert it to sound, and each optical channel is configured to receive the light signal and convert it to light;
in the case that the first mode is selected, all optical channels receive the light signal and all acoustic channels receive the audio signal;
in the case that the second mode is selected, one or more of the light channels in all light channels receive the light signal and one or more of the sound channels in all sound channels receive the audio signal;
in case a third mode is selected, one or more of the light channels in all light channels receive the light signal and all sound channels receive the audio signal.
In the foregoing exemplary embodiments of the present invention, compared with existing acousto-optic linkage technology, the disclosed acousto-optic synchronous linkage method and system implement real-time integrated acousto-optic output by exploiting the correlation between the light signal and the frequency information of the sound signal, achieve acousto-optic synchronous linkage throughout the cabin space of the automobile, and improve the user experience.
Drawings
The features, advantages and other aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description in conjunction with the accompanying drawings, in which several embodiments of the present disclosure are shown by way of illustration and not limitation, wherein:
FIG. 1 is a schematic flow chart of the acousto-optic synchronous linkage method disclosed in the present invention;
FIG. 2 is a schematic flow chart illustrating specific steps of the acousto-optic synchronous linkage method disclosed in FIG. 1; and
FIG. 3 is a schematic diagram of an acousto-optic synchronous linkage system according to the present invention.
Detailed Description
The technical scheme of the invention is further specifically described by the following embodiments and the accompanying drawings. In the specification, the same or similar reference numerals denote the same or similar components. The following description of the embodiments of the present invention with reference to the accompanying drawings is intended to explain the general inventive concept of the present invention and should not be construed as limiting the invention.
As used herein, the terms "include," "comprise," and similar terms are to be construed as open-ended, i.e., "including but not limited to," meaning that additional content may also be included. The term "based on" means "based, at least in part, on." The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment," and so on.
The invention mainly focuses on the following technical problems: how to realize the acousto-optic integrated linkage function in the automobile.
In order to solve the problems, the invention discloses an acousto-optic synchronous linkage method, which comprises the following steps: receiving a working mode selected by a user; receiving a sound signal from a media source; converting the sound signal into a light signal, wherein the light signal is at least related to frequency information in the sound signal; converting the sound signal into an audio signal; and executing the selected working mode by using the light signal and the audio signal.
FIG. 1 shows an example of the acousto-optic synchronous linkage method disclosed by the invention. The acousto-optic synchronous linkage method specifically comprises the following steps:
s10: and receiving the working mode selected by the user.
In this example, based on long-term accumulated user experience data, at least three working modes may be provided. First working mode: all sound channels and all light channels in the automobile work simultaneously, realizing whole-vehicle acousto-optic linkage. Second working mode: the user selects some sound channels and some light channels to work simultaneously as needed, realizing zoned acousto-optic linkage inside the vehicle. Third working mode: in certain special cases (for example, an infant sleeping in the car, or a passenger who needs to rest in a dim environment), only some of the light channels but all of the sound channels work simultaneously, realizing zoned acousto-optic linkage inside the vehicle.
S20: a sound signal is received from a media source. In this example, the media source performs audio decoding on the sound signals it obtains.
Specifically, the media source outputs a decoded sound signal that also contains a tag indicating the type of music; different types of music correspond to different audio frequency ranges.
S30: converting the sound signal into a light signal, wherein the light signal is at least related to frequency information in the sound signal; in this example, this step comprises in particular the following sub-steps as in fig. 2:
s31: frequency information in the sound signal is extracted.
In this step, the frequency information in the sound signal is extracted at an extraction period, which is an integral multiple of the sampling period of the sound signal.
Because humans can hardly perceive an offset between sound and light within 500 ms, the extraction period of the light signal can be set within 500 ms to solve the problem of humanly perceptible acousto-optic desynchronization; a suitable time offset is calibrated and tuned to guarantee a consistent acousto-optic experience. That is, in this example, the extraction period T is less than 500 ms.
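As a hedged illustration (the sample rate and window size below are assumptions, not from the patent), the constraint above, an extraction period that is an integer multiple of the sampling period and stays under 500 ms, can be sketched as:

```python
# Hypothetical parameters: the patent fixes neither a sample rate nor a
# window size, only that T is an integer multiple of the sampling period
# and (in this example) under 500 ms.
SAMPLE_RATE_HZ = 48_000
SAMPLING_PERIOD_S = 1.0 / SAMPLE_RATE_HZ
MAX_EXTRACTION_PERIOD_S = 0.5  # 500 ms upper bound from the text

def extraction_period(samples_per_window: int) -> float:
    """Return the extraction period T for a window of N samples.

    T = N * sampling period, so it is an integer multiple of the
    sampling period by construction; it must also stay below 500 ms so
    acousto-optic changes feel synchronous to a human observer.
    """
    t = samples_per_window * SAMPLING_PERIOD_S
    if not (0.0 < t < MAX_EXTRACTION_PERIOD_S):
        raise ValueError(f"T = {t * 1000:.1f} ms is outside (0, 500) ms")
    return t

T = extraction_period(4096)  # a 4096-sample window at 48 kHz
```

A 4096-sample window at 48 kHz yields a T of roughly 85 ms, comfortably inside the 500 ms perception bound.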
S32: and converting the frequency information into proportional information of three primary colors of RGB by using a color mixing algorithm. Specifically, the color mixing algorithm can be used to convert the frequency information in the sound signal into the optical light color, and the sound variation per unit time (e.g., extraction period T) can have the corresponding optical color output. The method specifically comprises the following steps:
processing the frequency information in the extraction period T by using a color mixing algorithm, and obtaining the proportion information of RGB three primary colors in the extraction period according to the following formula:
g(R)=ΔF1/F*a1+F1/F*b1;
wherein g (R) represents the proportion of red light in the extraction period T, F represents the total energy density corresponding to the frequency information in the extraction period T, F1 is the energy density in the first frequency band in the frequency information, Δ F1 represents the change in the energy density in the first frequency band in the extraction period T, and a1 and b1 are weighting coefficients, respectively;
g(G)=ΔF2/F*a2+F2/F*b2;
wherein G (G) represents the proportion of green light in the extraction period T, F2 is the energy density in the second frequency band in the frequency information, Δ F2 represents the change in the energy density in the second frequency band in the extraction period T, and a2 and b2 are weighting coefficients;
g(B)=ΔF3/F*a3+F3/F*b3;
wherein g (B) represents the proportion of blue light in the extraction period T, F3 is the energy density in the third frequency band in the frequency information, Δ F3 represents the change in the energy density in the third frequency band in the extraction period T, and a3 and B3 are weighting coefficients;
and g(R) + g(G) + g(B) = 1.
In the present example, the ranges of the first frequency band, the second frequency band, and the third frequency band are determined according to the type of music. For example, when the frequency band range of the first sound signal determined according to the music genre is 20Hz-20KHz, the first frequency band may be set to 20-200Hz, the second frequency band may be set to 200Hz-3KHz, and the third frequency band may be set to 3KHz-20KHz. Or, when the frequency range of the second sound signal determined according to the music type is 200Hz-5KHz, the first frequency range may be set to 200-800Hz, the second frequency range may be set to 800Hz-2.5KHz, and the third frequency range may be set to 2.5KHz-5KHz.
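The formulas of step S32, together with the music-type band examples above, can be sketched as follows. This is a minimal interpretation: the weighting coefficients default to 1 and the result is explicitly renormalized, both of which are assumptions (the patent only requires g(R) + g(G) + g(B) = 1), and all function and dictionary names are hypothetical.

```python
# Band edges taken from the two worked examples in the text; everything
# else (names, default weights, renormalization) is an assumption.
MUSIC_TYPE_BANDS = {
    "full_range": ((20, 200), (200, 3_000), (3_000, 20_000)),   # 20 Hz-20 kHz
    "mid_range":  ((200, 800), (800, 2_500), (2_500, 5_000)),   # 200 Hz-5 kHz
}

def rgb_proportions(F, bands_F, bands_dF,
                    a=(1.0, 1.0, 1.0), b=(1.0, 1.0, 1.0)):
    """Compute (g(R), g(G), g(B)) for one extraction period T.

    F        -- total energy density over T
    bands_F  -- (F1, F2, F3): per-band energy densities
    bands_dF -- (dF1, dF2, dF3): per-band energy-density changes over T
    a, b     -- weighting coefficients (a1..a3, b1..b3)
    """
    raw = [dF / F * ai + Fi / F * bi
           for Fi, dF, ai, bi in zip(bands_F, bands_dF, a, b)]
    total = sum(raw)  # renormalize so the three proportions sum to 1
    return tuple(r / total for r in raw)

g = rgb_proportions(F=10.0, bands_F=(3.0, 4.0, 3.0), bands_dF=(0.5, 0.3, 0.2))
```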
S33: and generating the light signal by using the proportion information of the three primary colors of RGB and the brightness function of the light signal. In this example, the light intensity in the intensity function is related to the voltage value, and the volume level of the sound signal is also related to the voltage value, so that the volume change in the sound signal can be related to the light intensity. Specifically, the method comprises the following steps:
firstly, the energy density change of the frequency information in the extraction period T is utilized to obtain a brightness function L (T) of the lamp light signal:
L(T)=k(ΔF1+ΔF2+ΔF3);
where L (T) represents the luminance of the lamp light during the extraction period T, and k is a constant coefficient representing the light intensity.
Secondly, obtaining a color dynamic response algorithm model in the extraction period T by utilizing the proportion information of the RGB three primary colors in the extraction period T and the brightness function of the light signal:
C(T)=g(R)L(T)+g(G)L(T)+g(B)L(T),
where C (T) denotes a lamp light signal including a lamp light color and a lamp light intensity during the extraction period T.
The method of converting the sound signal into the light signal disclosed in step S30 determines the final light color from both the energy-density change and the energy proportion of each frequency band within the extraction period T, enabling a fast, dynamic response.
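Step S33 can be sketched minimally as below, under the assumption that C(T) is realized as a per-channel (R, G, B) intensity triple whose components sum to L(T); the value of k and all inputs are hypothetical.

```python
def brightness(dF1, dF2, dF3, k=0.8):
    """L(T) = k * (dF1 + dF2 + dF3): larger energy swings -> brighter light."""
    return k * (dF1 + dF2 + dF3)

def light_signal(g_rgb, L):
    """C(T) as an (R, G, B) triple: each color's share of the brightness L(T).

    Assumes g_rgb already sums to 1, as required by step S32.
    """
    g_r, g_g, g_b = g_rgb
    return (g_r * L, g_g * L, g_b * L)

L = brightness(0.5, 0.3, 0.2)             # L(T) = 0.8 * 1.0 = 0.8
C = light_signal((0.35, 0.43, 0.22), L)   # components sum back to L(T)
```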
S40: the sound signal is converted into an audio signal. This step may employ existing audio signal conversion techniques and will not be described in detail herein.
S50: the selected mode of operation is performed using the light signal and the audio signal. The method specifically comprises the following steps:
under the condition that the first mode is selected, the lamplight signals are sent to all the optical channels, and the audio signals are sent to all the acoustic channels, so that all the optical channels and all the acoustic channels work simultaneously, and the integral acousto-optic linkage mode in the automobile is realized.
And under the condition that the second mode is selected, the light signal is sent to the selected one or more optical channels, and the audio signal is sent to the selected one or more acoustic channels, so that the optical channel only receiving the light signal and the acoustic channel only receiving the audio signal work simultaneously, and the acousto-optic linkage mode of the part area in the automobile is realized.
In case the third mode is selected, the light signal is sent to the selected one or more light channels and the audio signal is sent to all sound channels. Similar to the second mode, the description is omitted.
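The three dispatch cases of step S50 can be sketched as a routing function. Channel identifiers and the selection interface are assumptions for illustration only; the patent only distinguishes "all channels" from "selected channels."

```python
# Invented channel identifiers for illustration.
ALL_LIGHT_CHANNELS = {"front", "rear_left", "rear_right"}
ALL_SOUND_CHANNELS = {"front_l", "front_r", "rear_l", "rear_r"}

def route(mode, selected_light=None, selected_sound=None):
    """Return (light_channels, sound_channels) that receive the signals."""
    if mode == 1:  # whole-vehicle linkage: every channel of both kinds
        return ALL_LIGHT_CHANNELS, ALL_SOUND_CHANNELS
    if mode == 2:  # zoned linkage: user-chosen subsets of both
        return set(selected_light), set(selected_sound)
    if mode == 3:  # dimmed zones with full audio (e.g. a resting passenger)
        return set(selected_light), ALL_SOUND_CHANNELS
    raise ValueError(f"unknown working mode: {mode}")
```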
In addition, as shown in fig. 3, the present disclosure also discloses an acousto-optic synchronous linkage system, which includes: a media source 310, a receiving unit 320, a data processing unit 330, a first actuator 340, and a second actuator 350.
Specifically, the media source 310 performs audio decoding processing on the sound signal, and transmits the processed sound signal to the receiving unit 320, and the receiving unit 320 is further configured to receive an operating mode (e.g., a first mode, a second mode, a third mode, etc.) selected by a user.
The data processing unit 330 is configured to implement steps S30-S50 shown in fig. 1 and fig. 2, and send the obtained sound signal to the first actuator 340, and send the obtained light signal to the second actuator 350, which are not described herein again.
Specifically, the data processing unit 330 includes at least a DSP (digital signal processing) module and an MCU (microcontroller unit) module. The DSP module executes steps S20-S40 shown in fig. 1 and then sends the processed light signal and audio signal to the MCU module; the MCU module selectively sends the audio signal to the first actuator 340 and the light signal to the second actuator 350 according to the working mode selected by the user. In particular, executing steps S20-S40 on the DSP module effectively ensures that the audio signal and the light signal are output in the required time sequence, thereby guaranteeing the consistency of the acousto-optic linkage.
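A hypothetical sketch of this DSP-to-MCU hand-off: the DSP stage produces the light and audio signals for one extraction period, and the MCU stage forwards both in the same tick so they stay time-aligned. All names, types, and placeholder values are illustrative assumptions, not the patent's implementation.

```python
def dsp_stage(sound_frame):
    """Stand-in for steps S20-S40: returns (light_signal, audio_signal)
    for one extraction period."""
    light_signal = (0.3, 0.5, 0.2)  # placeholder C(T) triple
    audio_signal = sound_frame      # pass-through placeholder
    return light_signal, audio_signal

def mcu_stage(light_signal, audio_signal, emit_light, emit_audio):
    """Forward both signals in the same tick, preserving the required
    time sequence between the light and sound actuators."""
    emit_light(light_signal)   # second actuator (light channels)
    emit_audio(audio_signal)   # first actuator (sound channels)

sent = []
light, audio = dsp_stage(b"pcm")
mcu_stage(light, audio,
          lambda l: sent.append(("light", l)),
          lambda a: sent.append(("audio", a)))
```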
Further, the first actuator 340 includes a plurality of acoustic channels and the second actuator 350 includes a plurality of optical channels. Each acoustic channel is for receiving an audio signal and converting it to sound, and each optical channel is for receiving a light signal and converting it to light.
In case the first mode is selected, all optical channels receive the light signal from the data processing unit 330 and all acoustic channels receive the audio signal from the data processing unit 330.
Specifically, the MCU processing module sends the audio signal to all the acoustic channels of the first actuator 340 and sends the light signal to all the optical channels of the second actuator 350 according to the first mode selected by the user, so as to achieve the full-domain acousto-optic linkage effect in the vehicle.
In case the second mode is selected, only one or more of all optical channels receive the light signal from the data processing unit 330 and only one or more of all acoustic channels receive the audio signal from the data processing unit 330.
Specifically, the MCU processing module selectively transmits the audio signal to one or more sound channels of the first actuator 340 and the light signal to one or more light channels of the second actuator 350 according to the second mode selected by the user, so as to realize the acousto-optic linkage effect of the inner region of the automobile.
In case the third mode is selected, only one or more of all optical channels receive the light signal from the data processing unit 330 and all acoustic channels receive the audio signal from the data processing unit 330.
Specifically, the MCU processing module selectively transmits audio signals to all of the acoustic channels in the first actuator 340 and selectively transmits light signals to one or more of the optical channels in the second actuator 350 according to a third mode selected by the user.
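The three routing modes above can be sketched as follows. This is a minimal Python sketch under stated assumptions: the `Mode` enum, the function name `route`, and the modeling of channels as plain lists are illustrative, not the patent's implementation.

```python
from enum import Enum, auto

class Mode(Enum):
    FIRST = auto()   # full cabin: all optical and all acoustic channels
    SECOND = auto()  # regional: selected optical and selected acoustic channels
    THIRD = auto()   # selected optical channels, all acoustic channels

def route(mode, light_signal, audio_signal,
          optical_channels, acoustic_channels,
          selected_optical=(), selected_acoustic=()):
    """Dispatch the signals to channels according to the selected mode.
    Channels are modeled as plain lists that collect received signals."""
    if mode is Mode.FIRST:
        lights, sounds = optical_channels, acoustic_channels
    elif mode is Mode.SECOND:
        lights, sounds = selected_optical, selected_acoustic
    else:  # Mode.THIRD
        lights, sounds = selected_optical, acoustic_channels
    for ch in lights:
        ch.append(light_signal)   # stand-in for sending to an optical channel
    for ch in sounds:
        ch.append(audio_signal)   # stand-in for sending to an acoustic channel
```

Note that only the selection of destination channels changes between modes; the signals themselves are produced identically upstream.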
The acousto-optic synchronous linkage method and system disclosed herein address the problem of effectively fusing the sound signal and the light signal: because the light signal is derived from the frequency information in the sound signal, the real-time, low-delay requirements of acousto-optic linkage are met, and the acousto-optic expectations of passengers for the riding experience can ultimately be satisfied.
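The frequency-to-color mapping underlying this fusion is spelled out in claims 4 and 5. A minimal Python sketch of those formulas follows; the explicit normalization step is an assumption on my part, since the claims require g(R)+g(G)+g(B)=1 but do not state how the weighting coefficients guarantee it.

```python
def rgb_proportions(dF, F_bands, F, a, b):
    """g_i = dF_i/F * a_i + F_i/F * b_i for the three bands (claim 4),
    then normalized so the proportions sum to 1 (an assumption)."""
    g = [dFi / F * ai + Fi / F * bi
         for dFi, Fi, ai, bi in zip(dF, F_bands, a, b)]
    total = sum(g)
    return [gi / total for gi in g]

def light_signal(dF, F_bands, F, a, b, k):
    """Brightness L(T) = k * (dF1 + dF2 + dF3) and per-channel
    intensity g_i * L(T), as in the claim-5 model C(T)."""
    g = rgb_proportions(dF, F_bands, F, a, b)
    L = k * sum(dF)
    return [gi * L for gi in g]  # (R, G, B) intensities for period T
```

Here `dF` holds the per-band energy-density changes over the extraction period, `F_bands` the per-band energy densities, `F` the total energy density, and `a`, `b`, `k` the coefficients from the claims.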
The above are merely alternative embodiments of the present disclosure and are not intended to limit the embodiments of the present disclosure, which may be modified and varied by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the embodiments of the present disclosure should be included in the scope of protection of the embodiments of the present disclosure.
Although embodiments of the present disclosure have been described with reference to several particular embodiments, it should be understood that embodiments of the present disclosure are not limited to the particular embodiments disclosed. The embodiments of the disclosure are intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
Claims (14)
1. An acousto-optic synchronous linkage method comprises the following steps:
receiving a working mode selected by a user;
receiving a sound signal from a media source;
converting the sound signal into a light signal, wherein the light signal is at least related to frequency information in the sound signal;
converting the sound signal into an audio signal; and
executing the selected working mode by using the light signal and the audio signal.
2. The acousto-optic synchronous linkage method according to claim 1, wherein converting the sound signal into a light signal further comprises:
extracting the frequency information in the sound signal;
converting the frequency information into proportional information of RGB three primary colors by using a color mixing algorithm;
and generating the light signal by using the proportion information of the three primary colors of RGB and the brightness function of the light signal.
3. The acousto-optic synchronous linkage method according to claim 2, wherein extracting the frequency information in the sound signal further comprises:
extracting the frequency information according to an extraction period, wherein the extraction period is an integer multiple of the sampling period of the sound signal.
4. The acousto-optic synchronous linkage method according to claim 3, wherein converting the frequency information into proportional information of three primary colors of RGB using a color mixing algorithm further comprises:
processing the frequency information in the extraction period by using the color mixing algorithm, and obtaining the proportion information of the three primary colors of RGB in the extraction period according to the following formula:
g(R)=ΔF1/F*a1+F1/F*b1;
wherein g (R) represents a proportion of red light in an extraction period T, F represents a total energy density corresponding to the frequency information in the extraction period T, F1 is an energy density in a first frequency band in the frequency information, Δ F1 represents a change in the energy density in the first frequency band in the extraction period T, and a1 and b1 are weighting coefficients, respectively;
g(G)=ΔF2/F*a2+F2/F*b2;
wherein G (G) represents the proportion of green light in the extraction period T, F2 is the energy density in the second frequency band in the frequency information, Δ F2 represents the change in the energy density in the second frequency band in the extraction period T, and a2 and b2 are weighting coefficients;
g(B)=ΔF3/F*a3+F3/F*b3;
wherein g (B) represents the proportion of blue light in the extraction period T, F3 is the energy density in a third frequency band in the frequency information, Δ F3 represents the change in the energy density in the third frequency band in the extraction period T, and a3 and B3 are weighting coefficients;
and, g(R)+g(G)+g(B)=1.
5. The acousto-optic synchronous linkage method according to claim 4, wherein generating the light signal using the ratio information of the three primary colors RGB and the brightness function of the light signal further comprises:
obtaining a brightness function L (T) of the light signal by using the energy density change of the frequency information in the extraction period T:
L(T)=k(ΔF1+ΔF2+ΔF3);
wherein L(T) represents the brightness of the light in the extraction period T, and k is a constant coefficient representing the light intensity;
obtaining a color dynamic response algorithm model in the extraction period T by using the proportion information of the three primary colors of RGB in the extraction period T and the brightness function of the light signal:
C(T)=g(R)L(T)+g(G)L(T)+g(B)L(T),
wherein C (T) represents a light signal including a light color and a light intensity during the extraction period T.
6. The acousto-optic synchronous linkage method according to claim 4, characterized in that the extraction period T is less than 500ms.
7. The acousto-optic synchronous linkage method according to claim 4, wherein receiving the sound signal from the media source further comprises:
receiving a sound signal decoded by the media source, wherein the decoded sound signal further comprises a tag representing a music type,
wherein the ranges of the first frequency band, the second frequency band, and the third frequency band are determined according to the music genre.
8. The acousto-optic synchronous linkage method according to claim 1, wherein performing the selected operation mode using the light signal and the audio signal further comprises:
in case a first mode is selected, transmitting the light signal to all light channels and transmitting the audio signal to all sound channels;
in the event that a second mode is selected, sending the light signal to the selected one or more of the light channels and sending the audio signal to the selected one or more of the acoustic channels;
in case the third mode is selected, the light signal is sent to the selected one or more of the light channels and the audio signal is sent to all sound channels.
9. An acousto-optic synchronous linkage system, comprising:
a receiving unit for receiving an operation mode selected by a user and a sound signal from a media source;
a data processing unit for:
converting the sound signal into a light signal, wherein the light signal is at least related to frequency information in the sound signal;
converting the sound signal into an audio signal; and
executing the selected working mode by using the light signal and the audio signal.
10. The acousto-optic synchronous linkage system according to claim 9, wherein the data processing unit converts the sound signal into a light signal further comprises:
extracting the frequency information in the sound signal;
converting the frequency information into proportional information of RGB three primary colors by using a color mixing algorithm;
and generating the light signal by using the proportion information of the three primary colors of RGB and the brightness function of the light signal.
11. The acousto-optic synchronous linkage system according to claim 10, wherein the data processing unit extracting the frequency information from the sound signal further comprises:
extracting the frequency information according to an extraction period, wherein the extraction period is an integer multiple of the sampling period of the sound signal.
12. The acousto-optic synchronous linkage system according to claim 11, wherein the data processing unit converts the frequency information into proportional information of three primary colors RGB using a color mixing algorithm further comprises:
processing the frequency information in the extraction period by using the color mixing algorithm, and obtaining the proportion information of three primary colors of RGB in the extraction period according to the following formula:
g(R)=ΔF1/F*a1+F1/F*b1;
wherein g (R) represents a proportion of red light in an extraction period T, F represents a total energy density corresponding to the frequency information in the extraction period T, F1 is an energy density in a first frequency band in the frequency information, Δ F1 represents a change in the energy density in the first frequency band in the extraction period T, and a1 and b1 are weighting coefficients, respectively;
g(G)=ΔF2/F*a2+F2/F*b2;
wherein G (G) represents the proportion of green light in the extraction period T, F2 is the energy density in the second frequency band in the frequency information, Δ F2 represents the change in the energy density in the second frequency band in the extraction period T, and a2 and b2 are weighting coefficients;
g(B)=ΔF3/F*a3+F3/F*b3;
wherein g (B) represents the proportion of blue light in the extraction period T, F3 is the energy density in a third frequency band in the frequency information, Δ F3 represents the change in the energy density in the third frequency band in the extraction period T, and a3 and B3 are weighting coefficients;
and, g(R)+g(G)+g(B)=1.
13. The acousto-optic synchronous linkage system according to claim 12, wherein the data processing unit generates the light signal using the ratio information of the three primary colors RGB and the luminance function of the light signal further comprises:
obtaining a brightness function L (T) of the light signal by using the energy density change of the frequency information in the extraction period T:
L(T)=k(ΔF1+ΔF2+ΔF3);
wherein L(T) represents the brightness of the light in the extraction period T, and k is a constant coefficient representing the light intensity;
obtaining a color dynamic response algorithm model in the extraction period T by using the proportion information of the three primary colors of RGB in the extraction period T and the brightness function of the light signal:
C(T)=g(R)L(T)+g(G)L(T)+g(B)L(T),
wherein C (T) represents a light signal including a light color and a light intensity during the extraction period T.
14. The acousto-optic synchronized linkage system according to claim 9, further comprising: a plurality of acoustic channels and a plurality of optical channels, wherein each acoustic channel is configured to receive and convert the audio signal into sound and each optical channel is configured to receive and convert the light signal into light;
in the case that the first mode is selected, all optical channels receive the light signal and all acoustic channels receive the audio signal;
in the case that the second mode is selected, one or more of the light channels in all light channels receive the light signal and one or more of the sound channels in all sound channels receive the audio signal;
in case a third mode is selected, one or more of the light channels in all light channels receive the light signal and all sound channels receive the audio signal.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211432747.5A CN115767854A (en) | 2022-11-16 | 2022-11-16 | Acousto-optic synchronous linkage method and acousto-optic synchronous linkage system |
PCT/CN2023/131956 WO2024104413A1 (en) | 2022-11-16 | 2023-11-16 | Acousto-optic synchronous linkage method and acousto-optic synchronous linkage system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115767854A true CN115767854A (en) | 2023-03-07 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024104413A1 (en) * | 2022-11-16 | 2024-05-23 | 延锋国际汽车技术有限公司 | Acousto-optic synchronous linkage method and acousto-optic synchronous linkage system |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150115841A1 (en) * | 2013-10-30 | 2015-04-30 | Wistron Corporation | Method and apparatus for producing situational acousto-optic effect |
CN206575642U (en) * | 2017-03-15 | 2017-10-20 | 新乡学院 | A kind of acoustooptic interaction device |
CN110418454A (en) * | 2019-06-10 | 2019-11-05 | 武汉格罗夫氢能汽车有限公司 | It is a kind of and interior in the associated automobile atmosphere lamp control method of broadcast frequency |
CN112422175A (en) * | 2020-10-27 | 2021-02-26 | 苏州浪潮智能科技有限公司 | Cascade device |
CN214335517U (en) * | 2021-03-15 | 2021-10-01 | 自贡和光同尘科技工作室 | Acousto-optic synchronous control equipment based on single chip microcomputer |
CN114340076A (en) * | 2021-07-15 | 2022-04-12 | 南京工业大学 | Control method and system of rhythm atmosphere lamp based on analog sound source sampling |
CN115315051A (en) * | 2022-07-11 | 2022-11-08 | 浙江意博高科技术有限公司 | Method and system for controlling light change through sound |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||