CN114974293A - Large-scale field intelligent visual effect display method and system based on acousto-optic linkage - Google Patents
Large-scale field intelligent visual effect display method and system based on acousto-optic linkage
- Publication number
- CN114974293A CN202210919091.3A CN202210919091A
- Authority
- CN
- China
- Prior art keywords
- sound source
- linkage
- light source
- frequency
- sound
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 46
- 230000000007 visual effect Effects 0.000 title claims abstract description 46
- 238000009877 rendering Methods 0.000 claims abstract description 63
- 238000004364 calculation method Methods 0.000 claims description 46
- 238000012545 processing Methods 0.000 claims description 17
- 230000006870 function Effects 0.000 claims description 16
- 238000004590 computer program Methods 0.000 claims description 15
- 238000010586 diagram Methods 0.000 claims description 14
- 238000004088 simulation Methods 0.000 claims description 7
- 238000001514 detection method Methods 0.000 claims description 6
- 238000000605 extraction Methods 0.000 claims description 6
- 230000005236 sound signal Effects 0.000 claims description 6
- 230000008569 process Effects 0.000 description 6
- 230000003044 adaptive effect Effects 0.000 description 5
- 238000004891 communication Methods 0.000 description 4
- 230000008859 change Effects 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 238000005516 engineering process Methods 0.000 description 3
- 238000012986 modification Methods 0.000 description 3
- 230000004048 modification Effects 0.000 description 3
- 230000009286 beneficial effect Effects 0.000 description 2
- 239000000284 extract Substances 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 230000001413 cellular effect Effects 0.000 description 1
- 238000007405 data analysis Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 238000010295 mobile communication Methods 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/06—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
- G10L21/10—Transforming into visible information
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09F—DISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
- G09F25/00—Audible advertising
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/06—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
- G10L21/10—Transforming into visible information
- G10L21/14—Transforming into visible information by displaying frequency domain information
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02B—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
- Y02B20/00—Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
- Y02B20/40—Control techniques providing energy savings, e.g. smart controller or presence detection
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Quality & Reliability (AREA)
- Signal Processing (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Stereophonic System (AREA)
Abstract
The invention provides an intelligent visual effect display method and system for a large-scale field based on acousto-optic linkage. The scheme collects the light intensity at the observation position of a key field at the current moment through a photoresistor, adjusts the volume of the N sound sources nearest to the key field according to that light intensity, collects the volume and frequency of each sound source through sensors, calculates real-time linkage light source signal nodes from the collected volumes and frequencies, adjusts the rendering power of each light source according to those nodes, and displays the acousto-optic linkage in a three-dimensional interface according to the rendering power of each light source and the volume and frequency of each sound source. In this way, the environment of a large field is intelligently rendered and its time-sharing visual effect is adjusted through acousto-optic adaptive linkage.
Description
Technical Field
The invention relates to the technical field of acousto-optic control, in particular to an intelligent visual effect display method and system for a large-scale field based on acousto-optic linkage.
Background
A large-scale field requires real-time online adjustment of sound, temperature and brightness, so that the field can be rendered, the environment can be regulated efficiently, and performances can be presented to the audience more conveniently.
Before the technology of the invention, the prior art mainly extracts the temperature in real time and adjusts the brightness and the sound manually online; however, when the sound and the brightness need to be adjusted at high frequency, the operators must intervene repeatedly, which makes quick and effective matching difficult to achieve.
Disclosure of Invention
In view of the above problems, the invention provides an intelligent visual effect display method and system for a large-scale field based on acousto-optic linkage, which intelligently render the environment of the large-scale field and adjust its time-sharing visual effect through acousto-optic adaptive linkage.
According to the first aspect of the embodiment of the invention, an intelligent visual effect display method for a large site based on acousto-optic linkage is provided.
In one or more embodiments, preferably, the method for displaying the intelligent visual effect of the large-scale field based on the acousto-optic linkage comprises the following steps:
acquiring light intensity of an observation position of a key field at the current moment through a photoresistor;
adjusting the volume of N sound sources nearest to the key field according to the light intensity of the observation position of the key field;
collecting the volume and frequency of each sound source through a sensor;
calculating real-time linkage light source signal nodes according to the volume and the frequency of each sound source;
adjusting the rendering power of each light source according to the real-time linkage light source signal node;
and displaying acousto-optic linkage in a three-dimensional interface according to the rendering power of each light source and the volume and frequency of each sound source.
In one or more embodiments, preferably, the acquiring, by a photoresistor, the light intensity of the observed position of the key field at the current time specifically includes:
setting an observation position of a current key field;
and acquiring the light intensity of the observation position of the key field through the photoresistor at intervals of 1 second according to the observation position of the key field.
In one or more embodiments, preferably, the adjusting, according to the light intensity of the observation position of the key site, the sound volumes of the N sound sources nearest to the key site specifically includes:
obtaining an input value of a current scene number;
extracting a preset adjustment percentage corresponding to the current scene number from a preset scene corresponding table according to the input value of the current scene number;
and judging whether the light intensity of the observation position of the key field is larger than a preset light intensity fixed value; if so, adjusting the volume of the N sound sources nearest to the observation position of the key field according to the preset adjustment percentage, and if not, performing no processing.
In one or more embodiments, preferably, the acquiring, by a sensor, a volume and a frequency of each sound source specifically includes:
configuring a recording device at each sound source position to obtain a real-time audio signal;
frequency extraction and volume extraction are performed for each audio signal, and the volume and frequency of each sound source are saved.
In one or more embodiments, preferably, the calculating a real-time linkage light source signal node according to the volume and the frequency of each sound source specifically includes:
obtaining the sound source volume of the t-th time node, and calculating the sound source superposition index of each node by using a first calculation formula, wherein the nodes specifically refer to the positions that require acousto-optic linkage control;
judging whether a second calculation formula is satisfied; if so, sending a linkage control command, and if not, performing no processing;
after receiving the linkage control command, obtaining the sound source frequency of the t-th time node, and calculating each audio frequency superposition index by using a third calculation formula;
after a new audio frequency superposition index is obtained, judging whether it satisfies a fourth calculation formula; if so, sending a linkage condition command, and if not, performing no processing;
after receiving the linkage condition command, calculating the real-time linkage light source signal nodes by using a fifth calculation formula;
the first calculation formula is:
wherein p_{i,t,j} is the sound source superposition index of the j-th node for the i-th sound source at the t-th time node, S_i(·) is the attenuation function of the i-th sound source, d_{j,i} is the distance between the j-th node and the i-th sound source, Y_{i,t} is the volume of the i-th sound source at the t-th time node, and S_i(·) is obtained by preliminary detection;
the second calculation formula is:
wherein P_j is the sound source margin of the j-th node, and z is the total number of sound sources;
the third calculation formula is:
wherein F_{i,t,j} is the acoustic frequency superposition index of the j-th node for the i-th sound source at the t-th time node, P_i(·) is the acoustic frequency fluctuation function of the i-th sound source, F_{i,t} is the frequency of the i-th sound source at the t-th time node, and P_i(·) is obtained by preliminary detection;
the fourth calculation formula is:
wherein F_min is the preset minimum sound source frequency, and F_max is the preset maximum sound source frequency;
the fifth calculation formula is:
wherein A is the sight-range set of the sound rendering node, B is the set of light source signal nodes controllable in real time, and G is the set of real-time linkage light source signal nodes.
In one or more embodiments, preferably, the adjusting the rendering power of each light source according to the real-time linkage light source signal node specifically includes:
when new data appear in the real-time linkage light source signal nodes, the rendering power of the real-time linkage light source signal nodes corresponding to each sound source is calculated by using a sixth calculation formula;
calculating a rendering power of each light source using a seventh calculation formula;
adjusting the output power of each light source in real time according to the rendering power of each light source to complete online light source control;
the sixth calculation formula is:
wherein g_{i,G} is the rendering power of the real-time linkage light source signal nodes corresponding to the i-th sound source, n_{i,G} is the total number of real-time linkage light source signal nodes corresponding to the i-th sound source, and K is a preset rendering power conversion coefficient;
the seventh calculation formula is:
wherein g is the rendering power of the light source, and ALL is the total number of sound source nodes.
In one or more embodiments, preferably, the displaying of acousto-optic linkage in a three-dimensional interface according to the rendering power of each light source and the volume and frequency of each sound source specifically includes:
performing acousto-optic linkage display in a three-dimensional interface according to the rendering power of each light source and the volume and frequency of each sound source;
Carrying out three-dimensional scanning on the displayed area to obtain a three-dimensional model diagram;
marking the position of a light source and the position of a sound source on the three-dimensional model map;
rendering by adjusting the brightness of each light source at its marked position in equal proportion to its rendering power;
adjusting the color brightness at each marked sound source position in equal proportion to the volume of that sound source;
and adjusting the color purity at each marked sound source position in equal proportion to the frequency of that sound source.
According to a second aspect of the embodiment of the invention, an intelligent visual effect display system based on acousto-optic linkage for a large-scale field is provided.
In one or more embodiments, preferably, the large-scale field intelligent visual effect display system based on acousto-optic linkage comprises:
the brightness acquisition module is used for acquiring the light intensity of the observation position of the key field at the current moment through the photoresistor;
the sound adjusting module is used for adjusting the sound volumes of N sound sources which are most adjacent to the key field according to the light intensity of the observation position of the key field;
the sound acquisition module is used for acquiring the volume and the frequency of each sound source through the sensor;
the linkage calculation module is used for calculating real-time linkage light source signal nodes according to the volume and the frequency of each sound source;
the brightness adjusting module is used for adjusting the rendering power of each light source according to the real-time linkage light source signal nodes;
and the scene simulation module is used for displaying acousto-optic linkage in a three-dimensional interface according to the rendering power of each light source and the volume and frequency of each sound source.
According to a third aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method according to any one of the first aspect of embodiments of the present invention.
According to a fourth aspect of embodiments of the present invention, there is provided an electronic device, comprising a memory and a processor, the memory being configured to store one or more computer program instructions, wherein the one or more computer program instructions are executed by the processor to implement the method of any one of the first aspect of embodiments of the present invention.
The technical scheme provided by the embodiment of the invention can have the following beneficial effects:
in the scheme, automatic matching and rendering of the field environment are realized through the adaptive online linkage of sound and brightness.
According to the scheme, the actual on-site acousto-optic adaptive linkage display is carried out according to a preset simulation scene, and the time-sharing visual effect is then adjusted.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of an intelligent visual effect displaying method for a large-scale field based on acousto-optic linkage according to an embodiment of the invention.
Fig. 2 is a flowchart of acquiring light intensity of an observation position of a key field at the current time through a photo resistor in the large-scale field intelligent visual effect display method based on acousto-optic linkage according to an embodiment of the present invention.
Fig. 3 is a flowchart of adjusting the volumes of N sound sources nearest to the key field according to the light intensity of the observed position of the key field in the method for displaying the intelligent visual effect of the large field based on the acousto-optic linkage according to an embodiment of the present invention.
Fig. 4 is a flow chart of acquiring the volume and frequency of each sound source through a sensor in a large-scale field intelligent visual effect display method based on acousto-optic linkage according to an embodiment of the invention.
Fig. 5 is a flowchart of calculating real-time linkage light source signal nodes according to the volume and frequency of each sound source in the intelligent visual effect displaying method for large-scale fields based on acousto-optic linkage according to an embodiment of the invention.
Fig. 6 is a flowchart of adjusting the rendering power of each light source according to the real-time linkage light source signal node in the large-scale field intelligent visual effect display method based on acousto-optic linkage according to an embodiment of the present invention.
Fig. 7 is a flowchart of displaying acousto-optic linkage in a three-dimensional interface according to the rendering power of each light source and the volume and frequency of each sound source in the method for displaying intelligent visual effects of a large-scale field based on acousto-optic linkage according to an embodiment of the invention.
Fig. 8 is a structural diagram of an intelligent visual effect display system for a large-scale field based on acousto-optic linkage according to an embodiment of the invention.
Fig. 9 is a block diagram of an electronic device in one embodiment of the invention.
Detailed Description
In some of the flows described in the present specification and claims and in the above figures, a number of operations are included that occur in a particular order, but it should be clearly understood that these operations may be performed out of order or in parallel as they occur herein, with the order of the operations being indicated as 101, 102, etc. merely to distinguish between the various operations, and the order of the operations by themselves does not represent any order of performance. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A large-scale field requires real-time online adjustment of sound, temperature and brightness, so that the field can be rendered, the environment can be regulated efficiently, and performances can be presented to the audience more conveniently.
Before the technology of the invention, the prior art mainly extracts the temperature in real time and adjusts the brightness and the sound manually online; however, when the sound and the brightness need to be adjusted at high frequency, the operators must intervene repeatedly, which makes quick and effective matching difficult to achieve.
The embodiment of the invention provides an intelligent visual effect display method and system for a large-scale field based on acousto-optic linkage. According to the scheme, the environment in a large field is intelligently rendered and the time-sharing visual effect is adjusted in an acousto-optic self-adaptive linkage mode.
According to the first aspect of the embodiment of the invention, the intelligent visual effect display method for the large-scale field based on the acousto-optic linkage is provided.
Fig. 1 is a flowchart of an intelligent visual effect displaying method for a large-scale field based on acousto-optic linkage according to an embodiment of the invention.
In one or more embodiments, preferably, the method for displaying the intelligent visual effect of the large-scale site based on the acousto-optic linkage comprises the following steps:
s101, acquiring light intensity of an observation position of a key field at the current moment through a photoresistor;
s102, adjusting the volume of N sound sources most adjacent to the key field according to the light intensity of the observation position of the key field;
s103, collecting the volume and frequency of each sound source through a sensor;
s104, calculating real-time linkage light source signal nodes according to the volume and the frequency of each sound source;
s105, adjusting the rendering power of each light source according to the real-time linkage light source signal nodes;
and S106, performing acousto-optic linkage display in a three-dimensional interface according to the rendering power of each light source and the volume and frequency of each sound source.
In the embodiment of the invention, the environment of a large field is intelligently rendered and its time-sharing visual effect is adjusted through acousto-optic adaptive linkage. In this process, on the one hand, automatic matching and rendering of the field environment are realized through adaptive online linkage of sound and brightness; on the other hand, the actual on-site acousto-optic adaptive linkage display is carried out according to a preset simulation scene, and the time-sharing visual effect is adjusted accordingly.
Fig. 2 is a flowchart of acquiring light intensity of an observation position of a key field at the current time through a photo resistor in the large-scale field intelligent visual effect display method based on acousto-optic linkage according to an embodiment of the present invention.
As shown in fig. 2, in one or more embodiments, preferably, the acquiring, by a photoresistor, light intensity of an observed position of a key field at a current time specifically includes:
s201, setting an observation position of a current key site;
s202, collecting the light intensity of the observation position of the key field through the photoresistor at intervals of 1 second according to the observation position of the key field.
In the embodiment of the invention, a way of directly acquiring the light intensity at each observation position is provided; these light intensities are the basic data for the subsequent analysis.
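For illustration, a minimal sketch of such a sampling loop is given below. The `read_photoresistor` callable is a hypothetical stand-in for the sensor driver and is not defined by the patent; only the 1-second interval follows the embodiment above.

```python
import time
from typing import Callable, List, Tuple

def sample_light_intensity(read_photoresistor: Callable[[], float],
                           duration_s: int = 10,
                           interval_s: float = 1.0) -> List[Tuple[float, float]]:
    """Poll a photoresistor at a fixed interval and timestamp each reading.

    `read_photoresistor` is an assumed callable wrapping the actual sensor
    driver; it is illustrative, not part of the patent text.
    """
    samples = []
    start = time.time()
    while time.time() - start < duration_s:
        samples.append((time.time(), read_photoresistor()))
        time.sleep(interval_s)  # the embodiment specifies a 1-second sampling interval
    return samples

# Usage with a dummy sensor that returns a constant lux value
if __name__ == "__main__":
    print(sample_light_intensity(lambda: 320.0, duration_s=3))
```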
Fig. 3 is a flowchart of adjusting the volumes of N sound sources nearest to the key field according to the light intensity of the observed position of the key field in the method for displaying the intelligent visual effect of the large field based on the acousto-optic linkage according to an embodiment of the present invention.
As shown in fig. 3, in one or more embodiments, preferably, the adjusting, according to the light intensity of the observation position of the key site, the sound volumes of the N sound sources nearest to the key site specifically includes:
s301, obtaining an input value of a current scene number;
s302, extracting a preset adjusting percentage corresponding to the current scene number from a preset scene corresponding table according to the input value of the current scene number;
and S303, judging whether the light intensity of the observation position of the key field is larger than a preset light intensity fixed value or not, if so, adjusting the size of N sound sources closest to the observation position of the key field according to the preset adjustment percentage, and if not, not processing.
In the embodiment of the invention, in order to realize linkage control of the light intensity over the sound sources in different scenes, the light intensity is collected first, and the sound source volume at each position is then adjusted according to the collected light intensity, so that the sound source volume in an area with high light intensity is increased by a certain preset adjustment percentage; combining different preset percentages yields different adjustment amplitudes, which realizes control for different scenes. Here N is an integer greater than 0 and smaller than 10, and preferably N is 3.
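The following sketch illustrates this scene-table lookup and threshold check. The table contents, the multiplicative interpretation of the adjustment percentage, and all names are illustrative assumptions rather than values fixed by the patent.

```python
from typing import Dict, List

def adjust_nearest_volumes(light_intensity: float,
                           scene_number: int,
                           source_volumes: List[float],
                           source_distances: List[float],
                           scene_table: Dict[int, float],
                           intensity_threshold: float,
                           n: int = 3) -> List[float]:
    """Scale the volume of the N sound sources nearest to the key observation
    position when its light intensity exceeds a preset fixed value.

    `scene_table` and `intensity_threshold` stand in for the patent's "preset
    scene corresponding table" and "preset light intensity fixed value"; the
    multiplicative adjustment is an assumption for illustration.
    """
    if light_intensity <= intensity_threshold:
        return source_volumes  # below the fixed value: no processing

    percent = scene_table[scene_number]  # preset adjustment percentage for this scene
    nearest = sorted(range(len(source_distances)), key=lambda i: source_distances[i])[:n]
    adjusted = list(source_volumes)
    for i in nearest:
        adjusted[i] = source_volumes[i] * (1.0 + percent / 100.0)
    return adjusted

# Example: scene 2 raises the three nearest sources by 20% when it is bright enough
volumes = adjust_nearest_volumes(500.0, 2, [60, 55, 70, 40], [3.0, 1.0, 5.0, 2.0],
                                 {1: 10.0, 2: 20.0}, intensity_threshold=400.0)
print(volumes)  # [72.0, 66.0, 70, 48.0]
```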
Fig. 4 is a flow chart of acquiring the volume and frequency of each sound source through a sensor in a large-scale field intelligent visual effect display method based on acousto-optic linkage according to an embodiment of the invention.
As shown in fig. 4, in one or more embodiments, preferably, the acquiring, by a sensor, the volume and the frequency of each sound source specifically includes:
s401, configuring a recording device at each sound source position to obtain real-time audio signals;
s402, frequency extraction and volume extraction are carried out on each audio signal, and the volume and the frequency of each sound source are saved.
In the embodiment of the invention, in order to further perform linkage control of the sound source to the light source according to the previous light intensity, sound information is further collected through the collecting equipment, frequency and volume are extracted according to the sound information, and the extracted data is subjected to subsequent data analysis.
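A minimal example of such frequency and volume extraction is sketched below. The patent does not specify the estimators, so the RMS level and FFT-peak frequency used here are common choices, not necessarily the ones used in the original system.

```python
import numpy as np

def extract_volume_and_frequency(signal: np.ndarray, sample_rate: int) -> tuple:
    """Estimate a volume (dB via RMS) and a dominant frequency (via the FFT peak)
    from one audio frame; both estimators are assumptions for illustration."""
    rms = np.sqrt(np.mean(signal.astype(np.float64) ** 2))
    volume_db = 20.0 * np.log10(max(rms, 1e-12))          # dB relative to full scale 1.0

    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    dominant_hz = float(freqs[np.argmax(spectrum[1:]) + 1])  # skip the DC bin
    return volume_db, dominant_hz

# Example: a 440 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(sr) / sr
vol, freq = extract_volume_and_frequency(0.5 * np.sin(2 * np.pi * 440 * t), sr)
print(round(vol, 1), round(freq, 1))  # about -9.0 dB and 440.0 Hz
```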
Fig. 5 is a flowchart of calculating real-time linkage light source signal nodes according to the volume and frequency of each sound source in the intelligent visual effect displaying method for large-scale fields based on acousto-optic linkage according to an embodiment of the invention.
As shown in fig. 5, in one or more embodiments, preferably, the calculating the real-time linkage light source signal node according to the volume and the frequency of each sound source specifically includes:
s501, obtaining the sound source volume of the t-th time node, and calculating the sound source superposition index of each node by using a first calculation formula, wherein the nodes specifically refer to the positions that require acousto-optic linkage control;
s502, judging whether a second calculation formula is satisfied; if so, sending a linkage control command, and if not, performing no processing;
s503, after receiving the linkage control command, obtaining the sound source frequency of the t-th time node, and calculating each audio frequency superposition index by using a third calculation formula;
s504, after a new audio frequency superposition index is obtained, judging whether it satisfies a fourth calculation formula; if so, sending a linkage condition command, and if not, performing no processing;
s505, after receiving the linkage condition command, calculating the real-time linkage light source signal nodes by using a fifth calculation formula;
the first calculation formula is:
wherein p_{i,t,j} is the sound source superposition index of the j-th node for the i-th sound source at the t-th time node, S_i(·) is the attenuation function of the i-th sound source, d_{j,i} is the distance between the j-th node and the i-th sound source, Y_{i,t} is the volume of the i-th sound source at the t-th time node, and S_i(·) is obtained by preliminary detection;
the second calculation formula is:
wherein P_j is the sound source margin of the j-th node, and z is the total number of sound sources;
the third calculation formula is:
wherein F_{i,t,j} is the acoustic frequency superposition index of the j-th node for the i-th sound source at the t-th time node, P_i(·) is the acoustic frequency fluctuation function of the i-th sound source, F_{i,t} is the frequency of the i-th sound source at the t-th time node, and P_i(·) is obtained by preliminary detection;
the fourth calculation formula is:
wherein F_min is the preset minimum sound source frequency, and F_max is the preset maximum sound source frequency;
the fifth calculation formula is:
wherein A is the sight-range set of the sound rendering node, B is the set of light source signal nodes controllable in real time, and G is the set of real-time linkage light source signal nodes.
In the embodiment of the invention, in order to calculate the real-time linkage light source signal nodes according to the volume and frequency of each sound source, each node is judged independently, and the real-time linkage light source signal nodes corresponding to each node are obtained after the judgment; these nodes are the basic nodes for time-sharing control. The set A is acquired as follows: all light source signal nodes are adjusted to a preset fixed brightness, the sound rendering node is taken as the origin, an image is captured directly by a camera device, and the light source signal nodes whose brightness in the captured image exceeds the preset brightness are placed in the sight-range set of that sound rendering node. A light source signal node is specifically the number corresponding to each light source, and the light source signal nodes controllable in real time are the set of light sources that can currently be controlled.
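As the first to fifth calculation formulas are not reproduced above, the sketch below only illustrates one plausible shape of the S501 to S505 flow: an attenuated-volume sum tested against a per-node margin, a frequency band check against F_min and F_max, and an intersection with the sets A and B. All of this arithmetic, and every name in the code, is an assumption for illustration, not the patented formulas.

```python
from typing import Callable, Dict, List, Set

def linkage_light_source_nodes(volumes: List[float],
                               frequencies: List[float],
                               distances: Dict[int, List[float]],
                               attenuation: Callable[[float], float],
                               margins: Dict[int, float],
                               f_min: float, f_max: float,
                               sight_range_nodes: Set[int],
                               controllable_nodes: Set[int]) -> Set[int]:
    """Illustrative reconstruction of the node-selection flow (assumed arithmetic)."""
    linked: Set[int] = set()
    for node, dists in distances.items():
        # Assumed form of the sound source superposition index and margin test
        superposition = sum(attenuation(d) * v for d, v in zip(dists, volumes))
        if superposition <= margins[node]:
            continue  # no linkage control command for this node
        # Assumed form of the frequency condition (fourth calculation formula)
        if any(f_min <= f <= f_max for f in frequencies):
            linked.add(node)
    # Assumed fifth formula: keep nodes in A (sight range) and B (controllable)
    return linked & sight_range_nodes & controllable_nodes

nodes = linkage_light_source_nodes(
    volumes=[70.0, 55.0], frequencies=[220.0, 880.0],
    distances={1: [2.0, 6.0], 2: [8.0, 3.0]},
    attenuation=lambda d: 1.0 / (1.0 + d),
    margins={1: 30.0, 2: 60.0}, f_min=100.0, f_max=1000.0,
    sight_range_nodes={1, 2}, controllable_nodes={1})
print(nodes)  # {1}
```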
Fig. 6 is a flowchart of adjusting the rendering power of each light source according to the real-time linkage light source signal node in the large-scale field intelligent visual effect display method based on acousto-optic linkage according to an embodiment of the present invention.
As shown in fig. 6, in one or more embodiments, preferably, the adjusting the rendering power of each light source according to the real-time linkage light source signal node specifically includes:
s601, when new data appear in the real-time linkage light source signal nodes, calculating the rendering power of the real-time linkage light source signal nodes corresponding to each sound source by using a sixth calculation formula;
s602, utilizing a seventh calculation formula to calculate the rendering power of each light source;
s603, adjusting the output power of each light source in real time according to the rendering power of each light source to complete online light source control;
the sixth calculation formula is:
wherein g_{i,G} is the rendering power of the real-time linkage light source signal nodes corresponding to the i-th sound source, n_{i,G} is the total number of real-time linkage light source signal nodes corresponding to the i-th sound source, and K is a preset rendering power conversion coefficient;
the seventh calculation formula is:
wherein g is the rendering power of the light source, and ALL is the total number of sound source nodes.
In the embodiment of the invention, in order to control the light sources effectively and distribute energy, the power level of all light sources corresponding to each sound source that needs to be highlighted is first determined, and the contributions of the sound sources are then superposed. In this way, light source rendering is carried out independently at each moment, optimal light source rendering is achieved, and sufficient light output is guaranteed at every moment while remaining linked to the sound source at the corresponding position.
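The sixth and seventh calculation formulas are likewise not reproduced above, so the following sketch assumes one plausible reading: each sound source's power budget, scaled by the coefficient K, is split evenly over its real-time linkage nodes, and the shares arriving at each light source are superposed. The arithmetic and names are illustrative assumptions.

```python
from typing import Dict, List, Set

def rendering_power_per_light(source_volumes: List[float],
                              linked_nodes_per_source: List[Set[int]],
                              k: float) -> Dict[int, float]:
    """Split each source's budget over its linked nodes and superpose per light
    source; the proportional split is an assumed form, not the patented formula."""
    power: Dict[int, float] = {}
    for volume, nodes in zip(source_volumes, linked_nodes_per_source):
        if not nodes:
            continue
        share = k * volume / len(nodes)   # assumed per-node share for this source
        for node in nodes:
            power[node] = power.get(node, 0.0) + share  # superpose across sources
    return power

print(rendering_power_per_light([70.0, 50.0], [{1, 2}, {2}], k=0.5))
# {1: 17.5, 2: 42.5}
```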
Fig. 7 is a flowchart of displaying acousto-optic linkage in a three-dimensional interface according to the rendering power of each light source and the volume and frequency of each sound source in the method for displaying intelligent visual effects of a large-scale field based on acousto-optic linkage according to an embodiment of the invention.
As shown in fig. 7, in one or more embodiments, preferably, the displaying of acousto-optic linkage in a three-dimensional interface according to the rendering power of each light source and the volume and frequency of each sound source specifically includes:
s701, performing acousto-optic linkage display in a three-dimensional interface according to the rendering power of each light source and the volume and frequency of each sound source;
S702, three-dimensional scanning is carried out on the displayed area to obtain a three-dimensional model diagram;
s703, marking the position of the light source and the position of the sound source on the three-dimensional model diagram;
s704, rendering by adjusting the brightness of each light source at its marked position in equal proportion to its rendering power;
s705, adjusting the color brightness at each marked sound source position in equal proportion to the volume of that sound source;
and S706, adjusting the color purity at each marked sound source position in equal proportion to the frequency of that sound source.
In the embodiment of the invention, in order to display the acousto-optic linkage effect vividly, a three-dimensional model is built and further effect rendering is performed on it. Adjusting the light sources alone cannot show the relationship between the change of the picture and the sound, so the change of each sound source is mapped to a change of color brightness, which further realizes the linkage display. When the color brightness at a sound source position is adjusted in equal proportion to that source's volume, a volume of 0 corresponds to a color brightness of 0, and a volume of 100 decibels or more corresponds to a color brightness of 100%. When the color purity at a sound source position is adjusted in equal proportion to that source's frequency, the preset minimum sound source frequency corresponds to a purity of 0%, and the preset maximum sound source frequency corresponds to a purity of 100%.
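A minimal sketch of these two proportional mappings follows. The 0 to 100 dB brightness anchors come from the paragraph above, while the upper purity anchor at the preset maximum frequency is inferred from the "equal proportion" wording and is therefore an assumption.

```python
def volume_to_brightness(volume_db: float) -> float:
    """Map sound source volume to color brightness: 0 dB -> 0 %, 100 dB or more -> 100 %."""
    return max(0.0, min(volume_db, 100.0))

def frequency_to_purity(freq_hz: float, f_min: float, f_max: float) -> float:
    """Map sound source frequency to color purity in equal proportion:
    f_min -> 0 %, f_max -> 100 % (the upper anchor is an inference)."""
    clamped = max(f_min, min(freq_hz, f_max))
    return 100.0 * (clamped - f_min) / (f_max - f_min)

print(volume_to_brightness(85.0))                  # 85.0 % brightness
print(frequency_to_purity(550.0, 100.0, 1000.0))   # 50.0 % purity
```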
According to a second aspect of the embodiment of the invention, an intelligent visual effect display system based on acousto-optic linkage for a large-scale field is provided.
Fig. 8 is a structural diagram of an intelligent visual effect display system for a large-scale field based on acousto-optic linkage according to an embodiment of the invention.
In one or more embodiments, preferably, the large-scale field intelligent visual effect display system based on acousto-optic linkage comprises:
the brightness acquisition module 801 is used for acquiring the light intensity of the observation position of the key field at the current moment through the photoresistor;
a sound adjusting module 802, configured to adjust, according to the light intensity of the observation position of the key site, the volumes of N sound sources nearest to the key site;
a sound collecting module 803, configured to collect, through a sensor, the volume and frequency of each sound source;
the linkage calculation module 804 is used for calculating real-time linkage light source signal nodes according to the volume and the frequency of each sound source;
a brightness adjusting module 805, configured to adjust rendering power of each light source according to the real-time linkage light source signal node;
and the scene simulation module 806 is configured to perform acousto-optic linkage display in a three-dimensional interface according to the rendering power of each light source and the volume and frequency of each sound source.
In the embodiment of the invention, the data operations are carried out by combining the sound acquisition module, the brightness acquisition module, the linkage calculation module, the sound adjustment module, the brightness adjustment module and the scene simulation module, and the overall adjustment of the intelligent visual effect of a large field is then realized through this modular design.
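The skeleton below shows one way such a modular composition could be wired together. Every callable is a hypothetical stand-in for the module it names, not an interface defined by the patent.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class AcoustoOpticLinkageSystem:
    """Minimal composition skeleton mirroring the six modules; all callables are
    assumed stand-ins used only to show the data flow between modules."""
    acquire_brightness: Callable[[], float]                      # brightness acquisition module
    adjust_sound: Callable[[float], None]                        # sound adjusting module
    collect_sound: Callable[[], List[Tuple[float, float]]]       # sound collecting module (volume, frequency)
    compute_linkage: Callable[[List[Tuple[float, float]]], list] # linkage calculation module
    adjust_brightness: Callable[[list], None]                    # brightness adjusting module
    simulate_scene: Callable[[list, List[Tuple[float, float]]], None]  # scene simulation module

    def run_cycle(self) -> None:
        intensity = self.acquire_brightness()
        self.adjust_sound(intensity)
        measurements = self.collect_sound()
        nodes = self.compute_linkage(measurements)
        self.adjust_brightness(nodes)
        self.simulate_scene(nodes, measurements)

# Wiring with no-op stand-ins just to show the data flow
system = AcoustoOpticLinkageSystem(
    acquire_brightness=lambda: 350.0,
    adjust_sound=lambda intensity: None,
    collect_sound=lambda: [(70.0, 440.0)],
    compute_linkage=lambda m: [1, 2],
    adjust_brightness=lambda nodes: None,
    simulate_scene=lambda nodes, m: None)
system.run_cycle()
```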
According to a third aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method according to any one of the first aspect of embodiments of the present invention.
According to a fourth aspect of the embodiments of the present invention, there is provided an electronic apparatus. Fig. 9 is a block diagram of an electronic device in one embodiment of the invention. The electronic device shown in fig. 9 is a general large-scale field intelligent visual effect display device based on acousto-optic linkage. Referring to fig. 9, the electronic device 900 includes one or more processors 902 (only one shown), memory 904, and a wireless module 906 coupled to each other. The memory 904 stores programs that can execute the contents of the foregoing embodiments, and the processor 902 can execute the programs stored in the memory 904.
The processor 902 may include one or more processing cores, among others. The processor 902 interfaces with various components throughout the electronic device 900 using various interfaces and circuitry to perform various functions of the electronic device 900 and process data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 904 and invoking data stored in the memory 904. Alternatively, the processor 902 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 902 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly processes the operating system, the user interface, target application programs and the like; the GPU is used for rendering and drawing display content; and the modem is used to handle wireless communications. It is to be understood that the modem may not be integrated into the processor 902, but may be implemented solely by a communication chip.
The Memory 904 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 904 may be used to store instructions, programs, code sets, or instruction sets. The memory 904 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the method embodiments described above, and the like. The stored data area may also store data created during use by the electronic device 900, and the like.
The wireless module 906 is configured to receive and transmit electromagnetic waves, and achieve interconversion between the electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices, for example, communicate with a base station based on a mobile communication protocol. The wireless module 906 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The wireless module 906 may communicate with various networks, such as the internet, an intranet, a wireless network, or with other electronic devices via a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network. The wireless networks described above may use a variety of communication standards, protocols, and technologies, including but not limited to WLAN protocols and bluetooth protocols, and may even include those protocols that have not yet been developed.
The technical scheme provided by the embodiment of the invention can have the following beneficial effects:
in the scheme, automatic matching and rendering of the field environment are realized through self-adaptive online linkage of sound and brightness.
According to the scheme, the actual on-site acousto-optic adaptive linkage display is carried out according to a preset simulation scene, and the time-sharing visual effect is then adjusted.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (10)
1. An intelligent visual effect display method for a large site based on acousto-optic linkage is characterized by comprising the following steps:
acquiring light intensity of an observation position of a key field at the current moment through a photoresistor;
adjusting the volume of N sound sources nearest to the key field according to the light intensity of the observation position of the key field;
collecting the volume and frequency of each sound source through a sensor;
calculating real-time linkage light source signal nodes according to the volume and the frequency of each sound source;
adjusting the rendering power of each light source according to the real-time linkage light source signal node;
and displaying acousto-optic linkage in a three-dimensional interface according to the rendering power of each light source and the volume and frequency of each sound source.
2. The method for displaying the intelligent visual effect of the large-scale field based on the acousto-optic linkage as claimed in claim 1, wherein the step of acquiring the light intensity of the observation position of the key field at the current moment through the photoresistor specifically comprises the following steps:
setting an observation position of a current key field;
and acquiring the light intensity of the observation position of the key field through the photoresistor at intervals of 1 second according to the observation position of the key field.
3. The method for displaying the intelligent visual effect of the large-scale field based on the acousto-optic linkage as claimed in claim 1, wherein the adjusting the volume of the N sound sources nearest to the key field according to the light intensity of the observation position of the key field specifically comprises:
obtaining an input value of a current scene number;
extracting a preset adjustment percentage corresponding to the current scene number from a preset scene corresponding table according to the input value of the current scene number;
and judging whether the light intensity of the observation position of the key field is larger than a preset light intensity fixed value; if so, adjusting the volume of the N sound sources nearest to the observation position of the key field according to the preset adjustment percentage, and if not, performing no processing.
4. The method for displaying the intelligent visual effect of the large site based on the acousto-optic linkage as claimed in claim 1, wherein the acquiring the volume and the frequency of each sound source by the sensor specifically comprises:
configuring a recording device at each sound source position to obtain a real-time audio signal;
frequency extraction and volume extraction are performed for each audio signal, and the volume and frequency of each sound source are saved.
5. The method for displaying the intelligent visual effect of the large site based on the acousto-optic linkage as claimed in claim 1, wherein the calculating of the real-time linkage light source signal node according to the volume and the frequency of each sound source specifically comprises:
obtaining the sound source volume of the t-th time node, and calculating the sound source superposition index of each node by using a first calculation formula, wherein the nodes specifically refer to the positions that require acousto-optic linkage control;
judging whether a second calculation formula is satisfied; if so, sending a linkage control command, and if not, performing no processing;
after receiving the linkage control command, obtaining the sound source frequency of the t-th time node, and calculating each audio frequency superposition index by using a third calculation formula;
after a new audio frequency superposition index is obtained, judging whether it satisfies a fourth calculation formula; if so, sending a linkage condition command, and if not, performing no processing;
after receiving the linkage condition command, calculating the real-time linkage light source signal nodes by using a fifth calculation formula;
the first calculation formula is:
wherein p_{i,t,j} is the sound source superposition index of the j-th node for the i-th sound source at the t-th time node, S_i(·) is the attenuation function of the i-th sound source, d_{j,i} is the distance between the j-th node and the i-th sound source, Y_{i,t} is the volume of the i-th sound source at the t-th time node, and S_i(·) is obtained by preliminary detection;
the second calculation formula is:
wherein P_j is the sound source margin of the j-th node, and z is the total number of sound sources;
the third calculation formula is:
wherein F_{i,t,j} is the acoustic frequency superposition index of the j-th node for the i-th sound source at the t-th time node, P_i(·) is the acoustic frequency fluctuation function of the i-th sound source, F_{i,t} is the frequency of the i-th sound source at the t-th time node, and P_i(·) is obtained by preliminary detection;
the fourth calculation formula is:
wherein F_min is the preset minimum sound source frequency, and F_max is the preset maximum sound source frequency;
the fifth calculation formula is:
wherein A is the sight-range set of the sound rendering node, B is the set of light source signal nodes controllable in real time, and G is the set of real-time linkage light source signal nodes.
6. The method for displaying the intelligent visual effect of the large site based on the acousto-optic linkage as claimed in claim 5, wherein the adjusting the rendering power of each light source according to the real-time linkage light source signal node specifically comprises:
when new data appear in the real-time linkage light source signal nodes, the rendering power of the real-time linkage light source signal nodes corresponding to each sound source is calculated by using a sixth calculation formula;
using a seventh calculation formula to calculate the rendering power of each light source;
adjusting the output power of each light source in real time according to the rendering power of each light source to complete online light source control;
the sixth calculation formula is:
wherein g_{i,G} is the rendering power of the real-time linkage light source signal nodes corresponding to the i-th sound source, n_{i,G} is the total number of real-time linkage light source signal nodes corresponding to the i-th sound source, and K is a preset rendering power conversion coefficient;
the seventh calculation formula is:
wherein g is the rendering power of the light source, and ALL is the total number of sound source nodes.
7. The method for displaying the intelligent visual effect of the large site based on the acousto-optic linkage as claimed in claim 4, wherein the displaying of the acousto-optic linkage in the three-dimensional interface according to the rendering power of each light source and the volume and frequency of each sound source specifically comprises:
performing acousto-optic linkage display in a three-dimensional interface according to the rendering power of each light source and the volume and frequency of each sound source;
Carrying out three-dimensional scanning on the displayed area to obtain a three-dimensional model diagram;
marking the position of a light source and the position of a sound source on the three-dimensional model map;
rendering by adjusting the brightness of each light source at its marked position in equal proportion to its rendering power;
adjusting the color brightness at each marked sound source position in equal proportion to the volume of that sound source;
and adjusting the color purity at each marked sound source position in equal proportion to the frequency of that sound source.
8. An intelligent visual effect display system for a large-scale field based on acousto-optic linkage, characterized in that the system comprises:
the brightness acquisition module is used for acquiring the light intensity of the observation position of the key field at the current moment through the photoresistor;
the sound adjusting module is used for adjusting the sound volumes of N sound sources most adjacent to the key field according to the light intensity of the observation position of the key field;
the sound acquisition module is used for acquiring the volume and frequency of each sound source through the sensor;
the linkage calculation module is used for calculating real-time linkage light source signal nodes according to the volume and the frequency of each sound source;
the brightness adjusting module is used for adjusting the rendering power of each light source according to the real-time linkage light source signal nodes;
and the scene simulation module is used for displaying acousto-optic linkage in a three-dimensional interface according to the rendering power of each light source and the volume and frequency of each sound source.
9. A computer-readable storage medium on which computer program instructions are stored, which computer program instructions, when executed by a processor, implement the method of any one of claims 1-7.
10. An electronic device comprising a memory and a processor, wherein the memory is configured to store one or more computer program instructions, wherein the one or more computer program instructions are executed by the processor to implement the method of any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210919091.3A CN114974293B (en) | 2022-08-02 | 2022-08-02 | Large-scale field intelligent visual effect display method and system based on acousto-optic linkage |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210919091.3A CN114974293B (en) | 2022-08-02 | 2022-08-02 | Large-scale field intelligent visual effect display method and system based on acousto-optic linkage |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114974293A true CN114974293A (en) | 2022-08-30 |
CN114974293B CN114974293B (en) | 2022-10-25 |
Family
ID=82969219
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210919091.3A Active CN114974293B (en) | 2022-08-02 | 2022-08-02 | Large-scale field intelligent visual effect display method and system based on acousto-optic linkage |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114974293B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
BE1015101A7 (en) * | 2002-09-10 | 2004-10-05 | Light intensity variation detecting system, has electronic modules with sensors which detect projected light and variation of light intensity, and electronic components to transform information which follows from light into sound | |
CN1739127A (en) * | 2003-01-17 | 2006-02-22 | 摩托罗拉公司 | Audio file format with mapped lighting effects and method for controlling lighting effects using an audio file format |
US20130049636A1 (en) * | 2003-01-17 | 2013-02-28 | Motorola Mobility, Inc. | Electronic Device for Controlling Lighting Effects Using an Audio File |
US20160034248A1 (en) * | 2014-07-29 | 2016-02-04 | The University Of North Carolina At Chapel Hill | Methods, systems, and computer readable media for conducting interactive sound propagation and rendering for a plurality of sound sources in a virtual environment scene |
CN108761815A (en) * | 2018-06-21 | 2018-11-06 | 利亚德光电股份有限公司 | The display methods and system of image |
CN109442279A (en) * | 2018-11-30 | 2019-03-08 | 甘肃镂刻时光文化传媒有限公司 | A kind of stage lighting with human body sensing |
US10770092B1 (en) * | 2017-09-22 | 2020-09-08 | Amazon Technologies, Inc. | Viseme data generation |
CN111712025A (en) * | 2020-07-02 | 2020-09-25 | 广州博锐电子有限公司 | Intelligent lamplight acousto-optic linkage control system and control method thereof |
-
2022
- 2022-08-02 CN CN202210919091.3A patent/CN114974293B/en active Active
Non-Patent Citations (3)
Title |
---|
Y. FENG 等: ""Research on simulation of sound propagation system,"", 《2010 INTERNATIONAL CONFERENCE ON COMPUTER APPLICATION AND SYSTEM MODELING 》 * |
张阳等: "虚拟现实中三维音频关键技术现状及发展", 《电声技术》 * |
黄宗珊 等: ""音乐可视化研究特征选择及表达方式综述"", 《科技视界》 * |
Also Published As
Publication number | Publication date |
---|---|
CN114974293B (en) | 2022-10-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111461089B (en) | Face detection method, and training method and device of face detection model | |
CN110232696A (en) | A kind of method of image region segmentation, the method and device of model training | |
CN109344291A (en) | A kind of video generation method and device | |
CN107330858A (en) | Picture processing method and device, electronic equipment and storage medium | |
CN103164231A (en) | Input method virtual keyboard skin management method and device | |
CN112491442B (en) | Self-interference elimination method and device | |
CN105554283A (en) | Information processing method and electronic devices | |
CN109815363A (en) | Generation method, device, terminal and the storage medium of lyrics content | |
CN104883299A (en) | Router configuration method, system and router | |
CN111760294B (en) | Method and device for controlling non-player game characters in game | |
CN108898082A (en) | Image processing method, picture processing unit and terminal device | |
CN108985954A (en) | A kind of method and relevant device of incidence relation that establishing each mark | |
CN105280203B (en) | A kind of audio frequency playing method and user equipment | |
Gao et al. | The intelligent integration of interactive installation art based on artificial intelligence and wireless network communication | |
CN112906806A (en) | Data optimization method and device based on neural network | |
CN109346102B (en) | Method and device for detecting audio beginning crackle and storage medium | |
CN114974293B (en) | Large-scale field intelligent visual effect display method and system based on acousto-optic linkage | |
CN117522760B (en) | Image processing method, device, electronic equipment, medium and product | |
CN106445710A (en) | Method for determining interactive type object and equipment thereof | |
CN104284011A (en) | Information processing method and electronic device | |
CN116404662A (en) | Method and system for regulating and controlling optimal load of partitioned power quality | |
CN111292171A (en) | Financial product pushing method and device | |
CN106131747A (en) | A kind of audio adding method and user terminal | |
CN112820302B (en) | Voiceprint recognition method, voiceprint recognition device, electronic equipment and readable storage medium | |
CN108932704A (en) | Image processing method, picture processing unit and terminal device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |