CN115396784A - Remote tuning method and system

Remote tuning method and system

Info

Publication number
CN115396784A
CN115396784A
Authority
CN
China
Prior art keywords
data
tuned
analog
sound
simulated
Prior art date
Legal status
Granted
Application number
CN202211017602.9A
Other languages
Chinese (zh)
Other versions
CN115396784B (en)
Inventor
马敏
陈洋
陈玮
Current Assignee
Hansang Nanjing Technology Co ltd
Original Assignee
Hansang Nanjing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hansang Nanjing Technology Co ltd
Priority to CN202211017602.9A
Publication of CN115396784A
Application granted
Publication of CN115396784B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00: Circuits for transducers, loudspeakers or microphones
    • H04R2430/00: Signal processing covered by H04R, not provided for in its groups
    • H04R2430/01: Aspects of volume control, not necessarily automatic, in sound systems

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Stereophonic System (AREA)

Abstract

The embodiments of this specification provide a remote tuning method and system. The method comprises: predicting simulated sound data based on at least one of a user input, environment data of a device to be tuned, and distribution data of the device to be tuned; and sending the simulated sound data to a remote tuning terminal so that the remote tuning terminal plays audio based on the simulated sound data.

Description

Remote tuning method and system
Technical Field
The specification relates to the technical field of information, in particular to a method and a system for remote tuning.
Background
Playback devices (such as speakers) are increasingly popular with consumers, and tuning them (for example, adjusting volume and sound effects) is essential to giving listeners a better audio-visual experience. In some scenarios, the tuner cannot listen on site to the sound played by the playback device, and can therefore only tune it by experience.
Therefore, it is desirable to provide a method and system for remote tuning to better remotely tune a playback device.
Disclosure of Invention
One or more embodiments of the present disclosure provide a method of remote tuning. The method comprises: predicting simulated sound data based on at least one of a user input, environment data of a device to be tuned, and distribution data of the device to be tuned; and sending the simulated sound data to a remote tuning terminal so that the remote tuning terminal plays audio based on the simulated sound data.
One or more embodiments of the present specification provide a system for remote tuning, comprising: a prediction module configured to predict simulated sound data based on at least one of a user input, environment data of a device to be tuned, and distribution data of the device to be tuned; and a simulation module configured to send the simulated sound data to a remote tuning terminal so that the remote tuning terminal plays audio based on the simulated sound data.
One or more embodiments of the present specification provide a computer readable storage medium storing computer instructions that, when executed by a processor, implement a method of remote tuning.
One or more embodiments of the present specification provide a remote tuning terminal, comprising: a speaker array; the loudspeaker array plays audio based on analog sound data, wherein the analog sound data is determined based on at least one of input of a user, environment data of the equipment to be tuned and distribution data of the equipment to be tuned.
Drawings
The present description is further illustrated by exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not limiting; in these embodiments, like numerals refer to like structures.
FIG. 1 is a schematic illustration of an application scenario for remote tuning shown in some embodiments herein;
FIG. 2 is an exemplary block diagram of a remote tuning system shown in accordance with some embodiments of the present description;
FIG. 3 is an exemplary flow diagram of remote tuning shown in accordance with some embodiments herein;
FIG. 4 is an exemplary schematic diagram of a top view of an environment in which a device to be tuned is located, according to some embodiments of the present description;
FIG. 5 is an exemplary diagram illustrating the determination of simulated sound effects based on a first predictive model according to some embodiments of the present description;
FIG. 6 is an exemplary structural diagram of a first predictive model in accordance with some embodiments of the present description.
FIG. 7 is an exemplary block diagram illustrating a second predictive model-based determination of simulated volume in accordance with some embodiments of the present description.
Detailed Description
To illustrate the technical solutions of the embodiments more clearly, the drawings used in their description are briefly introduced below. The drawings are only examples or embodiments of the present description; a person skilled in the art can apply the description to other similar scenarios based on them without inventive effort. Unless otherwise apparent from the context or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "apparatus", "unit" and/or "module" as used herein is a method for distinguishing different components, elements, parts, portions or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a," "an," and "the" include the plural unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may include other steps or elements.
Flow charts are used in this description to illustrate operations performed by a system according to embodiments of the present description. It should be understood that the preceding or following operations are not necessarily performed in the exact order in which they are performed. Rather, the various steps may be processed in reverse order or simultaneously. Meanwhile, other operations may be added to the processes, or a certain step or several steps of operations may be removed from the processes.
Fig. 1 is a schematic diagram of an application scenario of remote tuning according to some embodiments of the present description. As shown in fig. 1, the application scenario 100 of remote tuning may include a device to be tuned 110, a remote tuning terminal 120, a network 130, and a processing device 140, where the processing device 140 is configured to perform the method of remote tuning shown in some embodiments of the present specification.
The device to be tuned 110 is a device that needs to be tuned. For more on the device to be tuned, see fig. 3 and its associated description.
The remote tuning terminal 120 is a device that plays simulated sound so that the user can audition the tuning result. In some embodiments, the remote tuning terminal 120 may include a speaker array of multiple speakers for playing audio. In some embodiments, the remote tuning terminal 120 may be a remote tuning helmet 150. As shown in fig. 1, the remote tuning helmet 150 may include a speaker array 150-1, a noise reducer 150-2, and a microphone 150-3. The speaker array 150-1 plays audio. The microphone 150-3 captures the sound of the environment in which the wearer of the remote tuning helmet 150 is located. The noise reducer 150-2 removes that environmental sound based on what the microphone 150-3 collects.
The network 130 may connect the various components of the system and/or connect the system with external resource components. The network 130 allows communication between the various components and with other components outside the system to facilitate the exchange of data and/or information. For example, the processing device 140 may receive environment data, distribution data of the device to be tuned 110 through the network 130. As another example, the processing device 140 may receive input from a user of the remote tuning terminal 120 via the network 130. As another example, the processing device 140 may also transmit analog sound data to the remote tuning terminal 120 via the network 130. The network may be implemented in various ways, such as a local area network, a USB connection, etc.
The processing device 140 may be used to process data and/or information from at least one component of the application scenario 100 or an external data source. For example, the processing device 140 may predict the simulated sound data based on at least one of the input of the user, the environmental data of the device to be tuned 110, and the distribution data of the device to be tuned 110. For another example, the processing device 140 may transmit the analog sound data to the remote tuning terminal 120 to cause the remote tuning terminal 120 to play audio based on the analog sound data. The processing device 140 may be a stand-alone device or may be built into the remote tuning terminal 120.
FIG. 2 is an exemplary block diagram of a remote tuning system shown in accordance with some embodiments of the present description. In some embodiments, the remote tuning system 200 may include a prediction module 210 and a simulation module 220.
The prediction module 210 may be configured to predict the simulated sound data based on at least one of a user input, environmental data of the device to be tuned, and distribution data of the device to be tuned. In some embodiments, the environmental data may include at least one of the temperature, humidity, flow of people, and spatial data of the environment in which the device to be tuned is located, and the spatial data may include one or more of the following characteristics: the type of environment, the size of the environment, and the parameters of sound transmission obstacles. In some embodiments, the analog sound data may include analog volume and/or analog sound effects, wherein the analog sound effects may include at least one of analog surround mode, analog gain, and analog ambient sound.
In some embodiments, the prediction module 210 may be further configured to process the environmental data of the device to be tuned and/or the distribution data of the device to be tuned based on the analog sound effect determination algorithm to determine the analog sound effect.
In some embodiments, the remote tuning terminal may include a speaker array formed by a plurality of speakers, and the prediction module 210 may be further configured to determine a target speaker position in the speaker array based on the environmental data of the device to be tuned and/or the distribution data of the device to be tuned to generate the simulated sound effect.
In some embodiments, the prediction module 210 may be further configured to process the user input and the environment data of the device to be tuned based on an analog volume determination algorithm to determine an analog volume.
The simulation module 220 may be configured to send the analog sound data to the remote tuning terminal so that the remote tuning terminal can play audio based on the analog sound data.
It should be noted that the above description of the remote tuning system and its modules is for convenience only and does not limit the present disclosure to the illustrated embodiments. It will be appreciated that, given the teachings of this system, those skilled in the art may combine the modules arbitrarily or connect sub-systems to other modules without departing from those teachings. In some embodiments, the prediction module and the simulation module disclosed in fig. 2 may be different modules in one system, or one module may implement the functions of two or more of the modules described above. For example, the modules may share one storage module, or each module may have its own storage module. Such variations are within the scope of the present disclosure.
Fig. 3 is an exemplary flow diagram of remote tuning shown in accordance with some embodiments herein. As shown in fig. 3, the process 300 includes the following steps. In some embodiments, the flow 300 may be performed by the processing device 140.
And step 310, predicting the simulated sound data based on at least one of the input of the user, the environment data of the equipment to be tuned and the distribution data of the equipment to be tuned. In some embodiments, step 310 may be performed by prediction module 210.
The user refers to a person or thing involved in tuning. For example, the user may include a person listening to audio played by the remote tuning terminal (e.g., wearing a remote tuning helmet).
In some embodiments, the user input may include parameter adjustment values for tuning the device to be tuned. Parameter types include volume and sound effect. The user input may further include the content to be played, the type of music played (e.g., the track), the playing duration, and the like.
The user can provide input through the remote tuning terminal. For example, the remote tuning terminal may have a button for adjusting the volume, through which the user controls the volume of the audio being auditioned.
The types of adjustable parameters on the remote tuning terminal can mirror those on the device to be tuned, so that the user can determine parameter adjustment values from the audio played by the remote tuning terminal, and thereby finally determine how the device to be tuned should be adjusted.
In some embodiments, an adjustable parameter (e.g., volume) on the remote tuning terminal may have a default value. For example, when the user does not input a volume adjustment value, the volume adjustment of the device to be tuned may take the default value.
The device to be tuned is a device that needs tuning. For example, the device to be tuned may include a sound box, a microphone, a loudspeaker, and the like.
The environmental data of the device to be tuned refers to data describing the environment in which the device is located.
In some embodiments, the environmental data may include at least one of the temperature, humidity, flow of people, and spatial data of the environment in which the device to be tuned is located, and the spatial data may include one or more of the following characteristics: the type of environment, its size, and the parameters of sound transmission obstacles.
The temperature and humidity of the environment where the equipment to be tuned is located can be obtained by acquiring stored or input data. For example, the temperature of the environment where the to-be-tuned device is located may be detected by a temperature sensor disposed in the environment, the processing device may acquire the temperature by communicating with the temperature sensor, the humidity of the environment where the to-be-tuned device is located may be detected by a humidity sensor disposed in the environment, and the processing device may acquire the humidity by communicating with the humidity sensor.
The flow of people may be used to indicate the density of people. In some embodiments, the flow of people in the environment of the device to be tuned may be the number of people present in that environment at the current time. For example, if there are 15 people at the current time, the flow of people is 15.
In some embodiments, the processing device may determine the flow of people through an image recognition algorithm or model based on images acquired by cameras deployed in the environment. In some embodiments, the processing device may also determine the flow of people through other means (e.g., entrance and exit gate counts).
Spatial data refers to data on the space, structure, and other factors that may affect sound transmission. The spatial data may include the type of environment, its size, and the parameters of sound transmission obstacles.
The type of environment may be distinguished according to the function, use, and so on of the environment. For example, environment types may include a lobby area, a clothing area, a convention venue, an office, and the like. Different environment types affect sound differently. For example, a lobby area typically contains fewer items and is more spacious, which may enhance the sound, whereas a clothing area typically contains more items and is more cluttered, which may attenuate the sound.
The size of the environment refers to the size of its three-dimensional space. For example, the size of the environment in which the device to be tuned is located may be 50 m³.
The parameter of the sound transmission obstacle of the environment refers to a parameter related to the obstacle affecting sound transmission in the environment.
In some embodiments, the parameters of the acoustic transmission barrier of the environment may include wall parameters.
Wall parameters refer to parameters associated with walls. In some embodiments, the wall parameters may include the number of walls in the environment and the size of the walls. In some embodiments, the wall parameters may also include other information, including but not limited to the wall type (e.g., lime wall, wood wall, brick wall), wall thickness, and wall location.
Spatial data may be acquired in a variety of ways. In some embodiments, the spatial data of the environment may be pre-stored in the storage device, and the processing device may read it directly. In some embodiments, the spatial data may be obtained from a house-type map (floor plan) stored in a storage device or uploaded to the remote tuning terminal. The house-type map shows the installation position of the device to be tuned in the space and the structure of the space, and its information can be represented by various feature extraction methods. Spatial data such as wall parameters can be determined from the house-type map; for example, the map is input into an image recognition model, which outputs the wall parameters.
In some embodiments, the parameters of the sound transmission obstacles of the environment may also include a propagation parameter matrix.
The propagation parameter matrix refers to a matrix formed by parameters related to sound propagation when the sound propagates in the environment where the equipment to be tuned is located. In some embodiments, each device to be tuned may correspond to a matrix of propagation parameters.
Different rows or columns of the propagation parameter matrix represent at least one propagation parameter at different first angles. In some embodiments, the propagation parameters may include a first angle, a second angle, a first distance, a second distance, a material of the obstacle at the intersection, and the like.
The first angle refers to the angle of a first ray generated with the sounding position as its origin; different first rays correspond to different first angles. In some embodiments, the first angle may be represented in various ways in a three-dimensional coordinate system, for example as the angle between the first ray and the ground plane. The first angle may be determined in various ways: it may be preset, or a number of points may be selected on a sphere, the line from the sphere's center through each point taken as a ray, and the angle between that ray and the ground plane taken as a first angle.
The sounding position refers to the position of the device to be tuned corresponding to the propagation parameter matrix.
The second angle refers to the angle of the ray that starts at the listening position and passes through the target intersection point. The target intersection point is the point where the first ray strikes an obstacle. The second angle is represented similarly to the first angle and is not described again.
The listening position refers to a position where the user is likely to listen to audio in the environment where the device to be tuned is located. The listening position may be preset based on task requirements.
The first distance refers to a distance between the sound emission position and the target intersection point.
The second distance refers to the distance between the listening position and the target intersection.
The material of the obstacle at the intersection point refers to the material of the obstacle on which the first ray strikes. For example, the barrier material at the intersection may include lime, tile, redwood, and the like.
By way of example, fig. 4 is an exemplary schematic top view of the environment in which the device to be tuned is located. As shown in fig. 4, the sounding position 410 is the position of the device to be tuned in its environment, and the listening position 420 is a position preset based on task requirements. A ray emitted outward from the sounding position at a preset angle is the first ray 430, and that preset angle is the first angle corresponding to the first ray 430. The intersection of the first ray 430 with an obstacle (e.g., a wall) is the target intersection point 440. Among the rays originating at the listening position 420, the angle of the ray passing through the target intersection point 440 is the second angle. The distance between the sounding position 410 and the target intersection point 440 is the first distance 450, the distance between the listening position 420 and the target intersection point 440 is the second distance 460, and the material of the obstacle at the target intersection point 440 is the intersection-point obstacle material.
In some embodiments, the propagation parameter matrix may be constructed based on a variety of possible methods, for example, the propagation parameters may be acquired by field mapping, real-time image recognition, and the like to construct the propagation parameter matrix.
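As an illustrative sketch of how such a matrix might be assembled (Python; the rectangular-room geometry, ray count, and integer material encoding are assumptions for illustration, not details from this specification), each row holds the propagation parameters of one first ray:

```python
import math
import numpy as np

def ray_room_intersection(origin, angle, W, H):
    """Intersect a ray from `origin` at `angle` (radians) with the walls
    of an axis-aligned rectangular room [0, W] x [0, H]."""
    ox, oy = origin
    dx, dy = math.cos(angle), math.sin(angle)
    ts = []
    if dx > 1e-9:  ts.append((W - ox) / dx)
    if dx < -1e-9: ts.append((0 - ox) / dx)
    if dy > 1e-9:  ts.append((H - oy) / dy)
    if dy < -1e-9: ts.append((0 - oy) / dy)
    t = min(t for t in ts if t > 0)              # nearest wall hit
    return (ox + t * dx, oy + t * dy)

def propagation_matrix(sounding_pos, listening_pos, W, H, n_rays=8, material_id=0):
    rows = []
    for k in range(n_rays):
        first_angle = 2 * math.pi * k / n_rays   # one first ray per row
        ix, iy = ray_room_intersection(sounding_pos, first_angle, W, H)
        # second angle: direction from the listening position through the hit point
        second_angle = math.atan2(iy - listening_pos[1], ix - listening_pos[0])
        d1 = math.dist(sounding_pos, (ix, iy))   # first distance
        d2 = math.dist(listening_pos, (ix, iy))  # second distance
        rows.append([first_angle, second_angle, d1, d2, material_id])
    return np.array(rows)

M = propagation_matrix((2.0, 2.0), (5.0, 3.0), W=10.0, H=6.0)
print(M.shape)  # one row per first angle, five propagation parameters per row
```

A field-mapped or image-recognized environment would replace the idealized room with real obstacle geometry and per-hit materials.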
In some embodiments of the present description, introducing a propagation parameter matrix into the parameters of the environment's sound transmission obstacles yields a finer-grained obstacle material at each point along a sound's propagation route, describing the distribution of sound transmission obstacles more fully; subsequent algorithms or models that use these parameters can then produce more accurate results.
In some embodiments of the present description, introducing information such as the flow of people and spatial data represents the environment data more comprehensively, so that more accurate simulated data can be obtained when the environment data is used to simulate sound.
The distribution data of the equipment to be tuned refers to data related to the position and distribution of the equipment to be tuned in space. In some embodiments, the distribution data of the devices to be tuned may include position coordinate information of the devices to be tuned in space. In some embodiments, the distribution data of the devices to be tuned may also include other information, such as the number of devices to be tuned, the distance between the devices to be tuned, and the like.
The distribution information of the devices to be tuned can be acquired in various ways. For example, the processing device may determine it from the house-type map, or may capture images of the devices with cameras in the environment and determine the distribution by image recognition.
The analog sound data refers to data for causing the sound-producing apparatus to produce a sound similar to an actual effect. The analog sound data may be represented by a sound waveform or other data form.
In some embodiments, the analog sound data may include an analog volume.
The analog volume refers to data for causing the sound-producing device to emit a volume similar to an actual effect. In some embodiments, the analog volume may correspond to a waveform amplitude in the sound waveform.
In some embodiments, the analog sound data may include analog sound effects, wherein the analog sound effects may include at least one of analog surround mode, analog gain, and analog ambient sound.
The analog sound effect refers to data for causing the sound producing apparatus to produce a sound effect similar to an actual effect. In some embodiments, the simulated sound effects may correspond to waveform shapes in the sound waveform.
The analog sound effects may include at least one of analog surround mode, analog gain, and analog ambient sound.
The simulated surround mode refers to data for causing the sound-producing device to reproduce a surround mode similar to the actual effect. A surround mode is a speaker placement scheme that creates a more realistic listening effect by placing additional speakers at suitable positions.
The analog gain refers to data for causing the sound-producing apparatus to emit a gain similar to an actual effect. The gain refers to a degree of increasing or decreasing the volume, and for example, the gain may be a magnification or a reduction of the volume.
The simulated environmental sound refers to data for causing the sound-making device to emit environmental sound similar to an actual effect. Ambient sound refers to the sound of the surrounding environment that can be heard. For example, the ambient sound may be a background sound mixed with various surrounding noises (e.g., a noisy human voice, noise due to some motion, etc.).
The waveform amplitude corresponding to the analog volume and the waveform form corresponding to the analog sound effect can jointly form a sound waveform corresponding to the analog sound data.
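As a toy illustration of that composition (Python; the function and all parameter names are hypothetical), the simulated volume acts as an amplitude scale on the waveform shape contributed by the simulated sound effect:

```python
import numpy as np

def simulated_waveform(base_shape, sim_volume, ambient=None, gain=1.0):
    """Combine sound-effect components (waveform shape, gain, ambient sound)
    with the simulated volume (amplitude) into one sound waveform."""
    wave = np.asarray(base_shape, dtype=float) * gain   # simulated gain
    if ambient is not None:
        wave = wave + np.asarray(ambient, dtype=float)  # simulated ambient sound
    return sim_volume * wave                            # volume sets the amplitude

t = np.linspace(0, 1, 1000)
shape = np.sin(2 * np.pi * 5 * t)   # waveform shape from the simulated sound effect
wave = simulated_waveform(shape, sim_volume=0.5, gain=1.2)
print(round(wave.max(), 3))          # peak amplitude close to 0.5 * 1.2
```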
In some embodiments of the present disclosure, by introducing an analog volume and an analog sound effect, and defining the analog sound effect to include at least one of an analog surround mode, an analog gain, and a simulated environmental sound, the analog sound data can be further subdivided into a plurality of components, so that a finer and more accurate sound waveform is generated when the sound is simulated, and the simulation effect is better.
In some embodiments, the simulated sound data may be predicted by a simulated sound determination algorithm based on at least one of a user's input, environmental data of the device to be tuned, and distribution data of the device to be tuned.
In some embodiments, the analog sound determination algorithm includes an analog volume determination algorithm, which determines the volume of the audio played at the remote tuning terminal, and an analog sound effect determination algorithm, which determines the sound effect of that audio.
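A minimal sketch of this split (Python; the function names and the dictionary result format are assumptions for illustration) dispatches the two sub-algorithms over the inputs each is described as using:

```python
def simulated_sound_data(user_input, env_data, dist_data, volume_algo, effect_algo):
    """Predict simulated sound data via the two sub-algorithms:
    the volume algorithm uses the user input and environment data,
    the effect algorithm uses the environment and distribution data."""
    return {
        "volume": volume_algo(user_input, env_data),
        "effect": effect_algo(env_data, dist_data),
    }

# Stand-in sub-algorithms, just to show the dispatch.
data = simulated_sound_data(
    {"volume": 40}, {"type": "lobby"}, [(0.0, 0.0)],
    volume_algo=lambda u, e: u["volume"],
    effect_algo=lambda e, d: "surround",
)
print(data)
```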
In some embodiments, the simulated sound effect determination algorithm may include a simulated surround mode determination sub-algorithm, which may determine a simulated surround mode based on the distribution data. In some embodiments, the input to the analog surround mode determination sub-algorithm may include distribution data and the output may include an analog surround mode. Different surround modes correspond to different distribution data, and the corresponding relationship can be preset. For example, the reference distribution data and its corresponding reference surround pattern are stored by a database. The simulated surround mode determining sub-algorithm may search the distribution data in the database, determine the closest reference distribution data, and further use the reference surround mode corresponding to the reference distribution data as the simulated surround mode. The simulated surround mode determination sub-algorithm may be any other feasible algorithm.
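The nearest-reference lookup described above might be sketched as follows (Python; the feature encoding of distribution data and the reference entries are invented for illustration):

```python
import numpy as np

# Hypothetical reference database mapping distribution features to surround
# modes; features here are (number of devices, mean spacing in meters).
REFERENCES = {
    (2, 3.0): "stereo",
    (5, 2.0): "5.0 surround",
    (6, 2.5): "5.1 surround",
}

def simulated_surround_mode(distribution):
    """Search for the closest reference distribution (Euclidean distance)
    and return its associated surround mode."""
    query = np.asarray(distribution, dtype=float)
    best = min(REFERENCES, key=lambda ref: np.linalg.norm(query - np.asarray(ref)))
    return REFERENCES[best]

print(simulated_surround_mode((5, 2.2)))  # nearest reference is (5, 2.0)
```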
In some embodiments, the analog sound effect determination algorithm may include an analog gain determination sub-algorithm, which may determine the analog gain based on the environmental data of the device to be tuned. In some embodiments, the input of this sub-algorithm may include the environmental data of the device to be tuned, and the output may include the analog gain. For example, the sub-algorithm may reflect a correspondence between the spaciousness of the environment in which the device is located and the gain. The spaciousness may be determined by the type of environment, with different types corresponding to different degrees of spaciousness; the more spacious the environment, the more the sound is amplified and the larger the analog gain output by the algorithm. The analog gain determination sub-algorithm may also be any other feasible algorithm.
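One way that spaciousness-to-gain correspondence could look (Python; the spaciousness scores and the linear mapping are assumptions, not values from this specification):

```python
# Hypothetical spaciousness score per environment type (0 = cramped, 1 = open).
SPACIOUSNESS = {"lobby": 0.9, "office": 0.5, "clothing area": 0.3}

def simulated_gain(env_type, base_gain=1.0, max_boost=0.5):
    """More spacious environment type -> larger simulated gain."""
    s = SPACIOUSNESS.get(env_type, 0.5)   # default for unknown types
    return base_gain + max_boost * s      # monotonically increasing in spaciousness

print(simulated_gain("lobby"))
```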
In some embodiments, the analog gain determination sub-algorithm may further include a first gain determination sub-algorithm, a second gain determination sub-algorithm, a third gain determination sub-algorithm, and a gain fusion sub-algorithm.
The first gain determination sub-algorithm refers to a related algorithm for determining the first gain. The first gain may refer to the gain applied to sound based on the size of the space. In some embodiments, the input of the first gain determination sub-algorithm may be the size of the environment in which the device to be tuned is located, and the output may be the first gain. For example, the first gain determination sub-algorithm may reflect a correspondence between the size of the environment in which the device to be tuned is located and the gain: the larger the space of the environment, the more the final sound is amplified, and the larger the first gain output by the algorithm. The first gain determination sub-algorithm may alternatively be any other feasible algorithm.
The second gain determination sub-algorithm refers to a related algorithm for determining the second gain. The second gain may refer to the gain applied to sound based on the type of the space. In some embodiments, the input to the second gain determination sub-algorithm may be the type of environment in which the device to be tuned is located, and the output may be the second gain. For example, the second gain determination sub-algorithm may reflect a correspondence between the type of environment in which the device to be tuned is located and the gain. When the environment is a lobby area (an environment type that generally contains fewer objects), sound is more easily reflected, producing an echo effect and sounding louder, so the second gain output by the algorithm is also larger. The second gain determination sub-algorithm may alternatively be any other feasible algorithm.
The third gain determination sub-algorithm refers to a related algorithm for determining the third gain. The third gain may refer to the gain applied to sound based on obstacles in the space. In some embodiments, the input to the third gain determination sub-algorithm may be a parameter of a sound transmission obstacle of the environment in which the device to be tuned is located, and the output may be the third gain. For example, the third gain determination sub-algorithm may reflect a correspondence between the gain and the parameters of sound transmission obstacles of the environment in which the device to be tuned is located. The sub-algorithm may determine the third gain based on wall parameters among the parameters of sound transmission obstacles: the smaller the sound absorption coefficient of the wall type (for example, a marble wall has a smaller sound absorption coefficient at each frequency than a concrete wall) and the larger the wall thickness within a suitable range, the more easily sound is reflected rather than penetrating or being absorbed, and the stronger the echo effect. The third gain determination sub-algorithm may alternatively be any other feasible algorithm.
The gain fusion sub-algorithm refers to a related algorithm for fusing at least one gain. In some embodiments, the inputs of the gain fusion sub-algorithm may be the first gain, the second gain, and the third gain, and the output may be the analog gain. In some embodiments, the gain fusion sub-algorithm may perform weighted fusion (e.g., weighted summation, etc.) of the first gain, the second gain, and the third gain based on a gain weight vector to obtain the analog gain. The gain weight vector may include the weight of the first gain, the weight of the second gain, and the weight of the third gain.
The weight of each gain can be determined in a number of ways. For example, the weight of the first gain, the weight of the second gain, and the weight of the third gain may be preset. As another example, the gain weight vector may be determined based on playback characteristics of the remote tuning device. The playback characteristics of the remote tuning device include at least a playback time characteristic and a playback content characteristic. According to a preset rule established from experience, different gain weight vectors may be selected for different songs based on the playback time and playback content of the song being played. As yet another example, the gain weight vector may be determined by a fusion model, where the input of the fusion model includes obstacle parameters and obstacle distribution data in the environment in which the device to be tuned is located, and the output is the gain weight vector. The obstacle distribution data includes the proportions of obstacles that strongly absorb low-, mid-, and high-frequency sounds. Training samples for the fusion model can be obtained from historical tuning data.
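The weighted fusion described above (weighted summation of the three gains using a gain weight vector) can be sketched as follows; the numerical gains and weights are illustrative only.

```python
# Hypothetical sketch of the gain fusion sub-algorithm: the first, second and
# third gains are combined into the analog gain by a weighted sum.
def fuse_gains(gains, weights):
    """Weighted fusion (here: weighted summation) of the first/second/third gains."""
    if len(gains) != len(weights):
        raise ValueError("one weight per gain is required")
    return sum(g * w for g, w in zip(gains, weights))

# Example with a preset gain weight vector (values are illustrative only)
first_gain, second_gain, third_gain = 2.0, 1.5, 0.5
gain_weights = [0.5, 0.3, 0.2]  # weights of the first, second and third gains
analog_gain = fuse_gains([first_gain, second_gain, third_gain], gain_weights)
print(analog_gain)  # 2.0*0.5 + 1.5*0.3 + 0.5*0.2 = 1.55
```

A preset weight vector is the simplest of the three options the text lists; the same `fuse_gains` call would accept weights produced by the fusion model instead.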
In some embodiments, the simulated sound effect determination algorithm may include a simulated ambient sound determination sub-algorithm that may determine the simulated ambient sound based on environmental data of the device to be tuned. In some embodiments, the inputs to the simulated ambient sound determination sub-algorithm may include the temperature, humidity, and flow of people of the environment in which the device to be tuned is located, and the output may include the simulated ambient sound. The simulated ambient sound determination sub-algorithm may adopt any feasible algorithm. For example, it may determine the simulated ambient sound from a plurality of preset ambient sounds according to a preset matching rule based on the temperature, humidity, and flow of people of the environment in which the device to be tuned is located, where the matching rule may be: each preset ambient sound corresponds to a set of temperature, humidity, and flow of people; the temperature, humidity, and flow of people of the environment in which the device to be tuned is located are compared with those corresponding to each preset ambient sound, and the preset ambient sound with the greatest similarity is determined as the simulated ambient sound output by the sub-algorithm. As another example, an ambient sound may be preset, and the simulated ambient sound determination sub-algorithm may adjust the preset ambient sound (for example, adjust its volume) based on the temperature, humidity, and flow of people of the environment in which the device to be tuned is located, and then output the simulated ambient sound.
In some embodiments, the simulated ambient sound determination sub-algorithm may include a comfort determination sub-algorithm and an ambient sound matching sub-algorithm.
The comfort level determination sub-algorithm refers to the related algorithm for determining comfort level. Comfort may refer to the degree of comfort of a person in a particular environment. In some embodiments, the input of the comfort level determination sub-algorithm may be the temperature, humidity, and flow of people of the environment in which the device to be tuned is located, and the output may be the comfort level. Any feasible algorithm may be selected. For example, the comfort level may first be calculated from the temperature and humidity using a preset formula, and then adjusted according to the flow of people (for example, when the flow of people is large, the surroundings are noisy and comfort is relatively poor, so the comfort level should be reduced) to obtain the final comfort level.
The ambient sound matching sub-algorithm refers to a related algorithm for matching ambient sound based on comfort and the flow of people. In some embodiments, the input of the ambient sound matching sub-algorithm may be the flow of people and the comfort level of the environment in which the device to be tuned is located, and the output may be the simulated ambient sound. The ambient sound matching sub-algorithm may be any feasible algorithm. For example, it may determine the simulated ambient sound as follows: first, match a corresponding preset ambient sound according to the magnitude of the flow of people; then, according to the comfort level, appropriately strengthen or weaken the matched ambient sound to obtain the final simulated ambient sound.
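The two sub-algorithms above can be sketched together. The patent does not specify the comfort formula, thresholds, or preset sound names, so everything numeric below is an invented placeholder; only the structure (comfort from temperature/humidity adjusted by flow of people, then flow-based matching scaled by comfort) follows the text.

```python
# Hypothetical sketch of the comfort determination and ambient sound matching
# sub-algorithms. All constants and preset names are illustrative assumptions.
def comfort_level(temperature_c, humidity_pct, flow_of_people):
    """Toy comfort score in [0, 1]: best near 22 C / 50% humidity, reduced by crowding."""
    base = 1.0 - abs(temperature_c - 22.0) / 30.0 - abs(humidity_pct - 50.0) / 100.0
    crowd_penalty = min(flow_of_people / 500.0, 0.3)  # noisy surroundings lower comfort
    return max(0.0, min(1.0, base - crowd_penalty))

def match_ambient_sound(flow_of_people, comfort):
    """Match a preset ambient sound by flow of people, then scale it by comfort."""
    preset = "busy-hall" if flow_of_people > 100 else "quiet-room"
    intensity = 0.5 + 0.5 * comfort  # strengthen/weaken the matched sound by comfort
    return preset, round(intensity, 2)

c = comfort_level(22.0, 50.0, 50)  # base 1.0, crowd penalty 0.1 -> 0.9
print(match_ambient_sound(50, c))
```

A real implementation would return an actual sound waveform rather than a (name, intensity) pair; the pair stands in for "preset ambient sound, appropriately strengthened or weakened".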
In some embodiments, the analog volume determination algorithm may determine the analog volume based on the user's input and environmental data of the device to be tuned. In some embodiments, the inputs to the analog volume determination algorithm may include the user's input and the temperature, humidity, and flow of people of the environment in which the device to be tuned is located, and the output may include the analog volume. For example, the algorithm may adjust the volume input by the user up or down according to the temperature, humidity, and flow of people (for example, if the comfort level corresponding to the current temperature, humidity, and flow of people is low, people may perceive the sound as loud and noisy, so the volume value should be adjusted up), finally obtaining the output analog volume. The analog volume determination algorithm may alternatively be any other feasible algorithm.
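As an illustrative sketch of the volume adjustment just described (the discomfort proxy and the 20% adjustment rule are invented assumptions, not part of the patent):

```python
# Hypothetical sketch of the analog volume determination: the user's requested
# volume is nudged up when environmental comfort is low (loud, noisy setting).
def determine_analog_volume(user_volume, temperature_c, humidity_pct, flow_of_people):
    """Adjust the user-input volume according to a simple discomfort proxy."""
    discomfort = abs(temperature_c - 22.0) / 30.0 + abs(humidity_pct - 50.0) / 100.0
    discomfort += min(flow_of_people / 500.0, 0.3)
    # Low comfort -> surroundings feel noisy -> raise the volume (capped at 100)
    return min(100.0, user_volume * (1.0 + 0.2 * discomfort))

print(determine_analog_volume(50.0, 22.0, 50.0, 0))    # comfortable: unchanged, 50.0
print(determine_analog_volume(50.0, 35.0, 80.0, 500))  # uncomfortable: raised above 50
```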
In some embodiments, the simulated audio effect determination algorithm may include a first predictive model.
In some embodiments, the simulated sound effect may be determined by processing environmental data of the device to be tuned and/or distribution data of the device to be tuned based on a first prediction model, where the first prediction model is a machine learning model. Further details regarding the first predictive model and determining the simulated sound effects can be found in FIG. 5 and its associated description.
In some embodiments, the remote tuning terminal may include a speaker array formed by a plurality of speakers, and the processing device may determine a target speaker position in the speaker array based on environmental data of the device to be tuned and/or distribution data of the device to be tuned to generate the simulated sound effect.
The target speaker position may refer to the position of a speaker in the speaker array that needs to operate (i.e., needs to play audio). In some embodiments, the target speaker position may be represented in a variety of ways (e.g., a numerical index, position coordinates, etc.). Taking numbering as an example: if the speaker array includes 10 speakers numbered 1-10 in sequence, and speakers No. 5 and No. 6 are determined to be the speakers that need to operate, then the target speaker positions are 5 and 6.
In some embodiments, the processing device may determine a simulated surround mode based on environmental data of the device to be tuned and/or distribution data of the device to be tuned, and determine a target speaker position in the speaker array based on the simulated surround mode to generate a simulated sound effect. For more on determining the simulated surround mode, reference may be made to other parts of this description, such as the descriptions of the simulated surround mode determination sub-algorithm, the first prediction model, etc. In some embodiments, the target speaker position may be included in the simulated surround mode, and the processing device may obtain the target speaker position directly from the simulated surround mode. The processing device may control the speakers at the target speaker positions in the speaker array to play, thereby generating the simulated sound effects. For example, as shown in fig. 1, the processing device may turn on the speakers corresponding to the target speaker positions in the speaker array 150-1 of the remote tuning helmet 150 based on the obtained target speaker positions, and turn off the remaining speakers.
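Switching the array according to the target speaker positions can be sketched as below, using the 10-speaker numbering example from the text (the function name and on/off representation are illustrative):

```python
# Hypothetical sketch: turn on the speakers at the target positions in the
# array and turn off the rest, as the processing device does in the example.
def apply_target_positions(num_speakers, target_positions):
    """Return an on/off state per speaker: on for targets, off for the rest."""
    targets = set(target_positions)
    return {n: (n in targets) for n in range(1, num_speakers + 1)}

states = apply_target_positions(10, [5, 6])
print([n for n, on in states.items() if on])  # [5, 6]
```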
In some embodiments of the present specification, by determining a target speaker position in the speaker array based on the environmental data of the device to be tuned and/or the distribution data of the device to be tuned and generating the simulated sound effect accordingly, the simulated surround mode can be fully realized at the physical-structure level when the remote tuning device plays audio, so that the audio corresponding to the simulated sound data heard by the end user is closer to the audio in the real environment.
In some embodiments, the simulated volume determination algorithm may include a second predictive model.
In some embodiments, the simulated volume may be determined by processing the user's input and environmental data of the device to be tuned based on a second predictive model, the second predictive model being a machine learning model. Further details regarding the second predictive model and determining the simulated volume may be found in fig. 7 and its associated description.
In step 320, the analog sound data is sent to the remote tuning terminal so that the remote tuning terminal plays audio based on the analog sound data.
In some embodiments, the processing device may also obtain feedback from the user based on the remote tuning terminal.
The feedback of the user refers to the user's impression after listening to the audio. In some embodiments, the user's feedback may be represented as a binary value (e.g., "acceptable" or "unacceptable"). The remote tuning terminal may be provided with a button, a switch, or a similar structure for user feedback, and the user may give feedback by touching the button, toggling the switch, or the like. In some embodiments, the user feedback may also be a direct adjustment to the simulated sound effects or the analog volume, such as increasing or decreasing the volume, switching surround modes, and the like. It will be appreciated that if the user makes no adjustment, the result is considered acceptable; otherwise, it is considered unacceptable.
In some embodiments, the processing device may control the device to be tuned in response to user feedback. For example, when the feedback indicates "acceptable," the processing device may use the current simulated sound data as sound data for the audio played by the device to be tuned in the environment. For another example, when the feedback indicates "unacceptable", a new round of simulation is performed based on the user's adjustment and played remotely to the user to obtain new feedback until the user's feedback is "acceptable".
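The feedback loop described above (re-simulate with the user's adjustment, replay, repeat until "acceptable") can be sketched as follows. The `simulate` and `play_and_get_feedback` callables are stand-ins for the patent's prediction and remote playback steps; the parameter names are invented.

```python
# Hypothetical sketch of the user-feedback loop: iterate simulation rounds,
# applying each user adjustment, until the feedback is "acceptable".
def tune_until_accepted(simulate, play_and_get_feedback, initial_params, max_rounds=10):
    """Return the first simulated sound data the user accepts."""
    params = initial_params
    for _ in range(max_rounds):
        sound_data = simulate(params)
        feedback = play_and_get_feedback(sound_data)  # "acceptable" or an adjustment dict
        if feedback == "acceptable":
            return sound_data  # used as sound data for the device to be tuned
        params = {**params, **feedback}  # apply the user's adjustment, re-simulate
    raise RuntimeError("no acceptable result within max_rounds")

# Toy usage: the user first asks for volume 60, then accepts.
responses = iter([{"volume": 60}, "acceptable"])
result = tune_until_accepted(
    simulate=lambda p: dict(p),
    play_and_get_feedback=lambda s: next(responses),
    initial_params={"volume": 50},
)
print(result)  # {'volume': 60}
```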
In some embodiments of the present description, by introducing feedback of a user, a method for remote tuning may form a reference based on the feedback of the user, so as to better tune a device to be tuned.
In some embodiments of the present description, the analog sound data is predicted based on at least one of the input of the user, the environment data of the device to be tuned, and the distribution data of the device to be tuned, so that the accuracy of sound simulation can be greatly improved, and the audio corresponding to the analog sound data can be efficiently played to the user.
FIG. 5 is a block diagram illustrating an exemplary architecture for determining simulated sound effects based on a first predictive model according to some embodiments of the present disclosure.
As shown in fig. 5, the input of the first prediction model 530 may include environmental data 510 of the device to be tuned and/or distribution data 520 of the device to be tuned, and the output may include simulated sound effects 540; the environment data 510 of the equipment to be tuned can comprise at least one of the temperature 510-1, the humidity 510-2, the human flow 510-3 and the space data 510-4 of the environment where the equipment to be tuned is located. The first predictive model may be a Deep Neural Network (DNN) or the like.
In some embodiments, the first predictive model 530 may be comprised of a surround mode determination model that may be used to determine simulated surround modes, a gain determination model that may be used to determine simulated gains, and an ambient sound determination model that may be used to determine simulated ambient sounds. In some embodiments, the simulated surround mode determination sub-algorithm may include a surround mode determination model. In some embodiments, the analog gain determination sub-algorithm may include a gain determination model. In some embodiments, the simulated ambient sound determination sub-algorithm may include an ambient sound determination model. For more on the surround mode determination model, the gain determination model and the ambient sound determination model, reference may be made to fig. 6 and its associated description.
In some embodiments, the first predictive model 530 may be derived through training. For example, training samples may be input into the initial first prediction model 550, a loss function may be constructed based on the output of the initial first prediction model 550, and parameters of the initial first prediction model 550 may be iteratively updated based on the loss function until the preset conditions are met and training is completed.
In some embodiments, the first training sample 560 may include environmental data of a sample device to be tuned and/or distribution data of the sample device to be tuned, and the label of the first training sample 560 is the simulated sound effect corresponding to that environmental data and/or distribution data. The first training samples and labels may be obtained from historical data; for example, historical data with high user satisfaction may be selected as the first training samples and their labels.
In some embodiments of the present description, the environmental data of the device to be tuned and/or the distribution data of the device to be tuned are processed based on the first prediction model, and the simulated sound effect is determined, so that the first prediction model can learn the internal rules of the simulated sound effect corresponding to the environmental data and the distribution data based on a large amount of historical data, and thus the simulated sound effect is determined more accurately.
FIG. 6 is an exemplary structural diagram of a first predictive model in accordance with some embodiments described herein.
As shown in fig. 6, the first prediction model 530 may be composed of a surround mode determination model 630-1, a gain determination model 630-2, and an ambient sound determination model 630-3.
The surround mode determination model 630-1 may be used to determine a simulated surround mode. As shown in fig. 6, the input of the surround mode determination model 630-1 may include distribution data 520 of the devices to be tuned, and the output may include a simulated surround mode 640-1. In some embodiments, the surround mode determination model may be a machine learning model. For example, the surround mode determination model may be DNN or the like.
The surround mode determination model may be trained in the same way as the first prediction model or in some other way.
The gain determination model 630-2 may be used to determine the analog gain. As shown in FIG. 6, the input to the gain determination model 630-2 may include spatial data 510-4 of the environment in which the device to be tuned is located, and the output may include analog gain 640-2. In some embodiments, the gain determination model may be a machine learning model. For example, the gain determination model may be DNN or the like.
The gain determination model may be trained in the same way as the first prediction model or in some other way.
The ambient sound determination model 630-3 may be used to determine simulated ambient sound. As shown in fig. 6, the inputs of the ambient sound determination model 630-3 may include the temperature 510-1, the humidity 510-2, and the human flow 510-3 of the environment in which the device to be tuned is located, and the output may include the simulated ambient sound 640-3. In some embodiments, the ambient sound determination model may be a machine learning model. For example, the ambient sound determination model may be DNN or the like.
The ambient sound determination model may be trained in the same way as the first prediction model or in some other way.
In some embodiments, the amplified analog gain 640-4 may be determined based on the analog gain 640-2 output by the gain determination model 630-2 and the first amplification factor 650, and the analog gain 640-2 may be replaced with the amplified analog gain 640-4 as the gain of the audio finally played by the remote tuning terminal.
The first amplification factor may be used to amplify the analog gain output by the gain determination model. The amplification of the gain may include an increase or decrease in the value of the gain. For example, if the analog gain of the gain determination model output is X and the first amplification factor is 1.2, the amplified analog gain is 1.2X.
In some embodiments, the amplified simulated ambient sound 640-5 may be determined based on the simulated ambient sound 640-3 output by the ambient sound determination model and the second amplification factor 660, and the simulated ambient sound 640-3 may be replaced by the amplified simulated ambient sound 640-5 as the ambient sound of the audio played by the final remote tuning terminal.
The second amplification factor may be used to amplify the simulated ambient sound output by the ambient sound determination model. The amplification of the ambient sound may include strengthening or weakening its intensity. For example, if the simulated ambient sound output by the ambient sound determination model is the sound waveform X and the second amplification factor is 1.2, the amplified simulated ambient sound may be obtained by stretching the amplitude of the sound waveform X at each time to 1.2 times its original value.
In some embodiments, the first amplification factor may be determined based on the simulated ambient sound output by the ambient sound determination model. For example, the first amplification factor may be determined by a preset formula based on the high-frequency ratio of the simulated ambient sound, i.e., the proportion of the high-frequency sound waveform within the entire sound waveform. In some embodiments, the first amplification factor may also be related to the playback characteristics of the device to be tuned included in the user's input, and may be determined from those playback characteristics by various feasible methods. For example, different first amplification factors may be selected for different songs according to preset rules established from experience, based on the playback duration and playback content of the song; as another example, the first amplification factor may be determined based on the similarity between the high-frequency ratio of the played content and the high-frequency ratio of the simulated ambient sound.
In some embodiments, the second amplification factor may be determined based on the analog gain output by the gain determination model. For example, the second amplification factor may be determined based on the magnitude of the analog gain: the larger the analog gain, the smaller the second amplification factor may be.
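Applying the two amplification factors can be sketched as below. The factor formulas are illustrative assumptions consistent with the text's qualitative rules (the first factor grows with the high-frequency ratio; the second shrinks as the analog gain grows); the patent leaves the exact formulas as preset.

```python
# Hypothetical sketch of the two amplification factors and their application.
def first_amplification_factor(high_freq_ratio):
    """Invented preset formula: larger high-frequency ratio -> larger factor."""
    return 1.0 + 0.5 * high_freq_ratio          # e.g. ratio 0.4 -> factor 1.2

def second_amplification_factor(analog_gain):
    """Invented rule: larger analog gain -> smaller factor (gain suppresses ambient sound)."""
    return 1.0 / (1.0 + analog_gain)            # e.g. gain 1.5 -> factor 0.4

analog_gain = 1.5
ambient_waveform = [0.1, -0.2, 0.3]             # amplitudes at successive times

f1 = first_amplification_factor(0.4)
amplified_gain = analog_gain * f1               # replaces the analog gain

f2 = second_amplification_factor(analog_gain)
amplified_ambient = [a * f2 for a in ambient_waveform]  # stretch each amplitude

print(amplified_gain)     # ~1.8
print(amplified_ambient)  # each amplitude scaled by ~0.4
```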
In some embodiments of the present description, by introducing the first amplification factor and the second amplification factor, the influence of different environmental sound frequencies, different playing contents, and the like on the analog gain can be effectively embodied, and the suppression effect of the analog gain on the environmental sound when the playing contents are gained can be embodied.
In some embodiments of the present description, by dividing the first prediction model into three separately predicted models, such that each part of the simulated audio effect can be predicted with one separately trained model, the prediction accuracy of each part can be improved, thereby improving the prediction accuracy of the final simulated audio effect.
Fig. 7 is a diagram illustrating an exemplary structure for determining a simulated volume based on a second predictive model in accordance with some embodiments of the present disclosure.
As shown in fig. 7, the inputs of the second predictive model 730 may include the user's input 710 and the environmental data 510 of the device to be tuned, and the output may include the simulated volume 740; the environment data 510 of the device to be tuned input into the second prediction model 730 can comprise at least one of the temperature 510-1, the humidity 510-2 and the human flow 510-3 of the environment in which the device to be tuned is located. The second predictive model may be DNN or the like.
In some embodiments, the second predictive model 730 may be obtained by training. For example, training samples may be input into the initial second prediction model 750, a loss function may be constructed based on the output of the initial second prediction model 750, and parameters of the initial second prediction model 750 may be iteratively updated based on the loss function until a preset condition is satisfied and training is completed.
In some embodiments, the second training sample 760 may include the input of a sample user and environmental data of a sample device to be tuned, and the label of the second training sample is the simulated volume corresponding to that input and environmental data. Training samples and labels may be obtained from historical data; for example, historical data with high user satisfaction may be selected as the second training samples and their labels.
In some embodiments of the present specification, the input of the user and the environment data of the device to be tuned are processed based on the second prediction model to determine the simulated volume, so that the second prediction model can learn the intrinsic rules of the simulated volume corresponding to the input of the user and the environment data based on a large amount of historical data, thereby determining the simulated volume more accurately.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such alterations, modifications, and improvements are intended to be suggested in this specification, and are intended to be within the spirit and scope of the exemplary embodiments of this specification.
Also, the description uses specific words to describe embodiments of the specification. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the specification is included. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While certain presently contemplated useful embodiments of the invention have been discussed in the foregoing disclosure by way of various examples, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein described. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not intended to imply that more features are required than are expressly recited in the claims. Indeed, the embodiments may be characterized as having less than all of the features of a single disclosed embodiment.
Numerals describing the number of components, attributes, etc. are used in some embodiments, it being understood that such numerals used in the description of the embodiments are modified in some instances by the use of the modifier "about", "approximately" or "substantially". Unless otherwise indicated, "about", "approximately" or "substantially" indicates that the number allows a variation of ± 20%. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameter should take into account the specified significant digits and employ a general digit preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the range are approximations, in the specific examples, such numerical values are set forth as precisely as possible within the scope of the application.
For each patent, patent application publication, and other material, such as articles, books, specifications, publications, documents, etc., cited in this specification, the entire contents of each are hereby incorporated by reference into the specification. Except where the application history document does not conform to or conflict with the contents of the present specification, it is to be understood that the application history document, as used herein in the present specification or appended claims, is intended to define the broadest scope of the present specification (whether presently or later in the specification) rather than the broadest scope of the present specification. It is to be understood that the descriptions, definitions and/or uses of terms in the accompanying materials of this specification shall control if they are inconsistent or contrary to the descriptions and/or uses of terms in this specification.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (10)

1. A method of remote tuning, comprising:
predicting simulated sound data based on at least one of a user input, environment data of a device to be tuned, and distribution data of the device to be tuned; and
sending the simulated sound data to a remote tuning terminal so that the remote tuning terminal plays audio based on the simulated sound data.
2. The method of claim 1, wherein the simulated sound data comprises a simulated volume and/or a simulated sound effect, and the simulated sound effect comprises at least one of a simulated surround mode, a simulated gain, and a simulated ambient sound.
3. The method of claim 2, wherein predicting the simulated sound data based on at least one of the user input, the environment data of the device to be tuned, and the distribution data of the device to be tuned comprises:
processing the environment data of the device to be tuned and/or the distribution data of the device to be tuned based on a simulated-sound-effect determination algorithm to determine the simulated sound effect.
4. The method of claim 2, wherein the remote tuning terminal comprises a speaker array of a plurality of speakers, and
predicting the simulated sound data based on at least one of the user input, the environment data of the device to be tuned, and the distribution data of the device to be tuned comprises:
determining a target speaker position in the speaker array based on the environment data of the device to be tuned and/or the distribution data of the device to be tuned, so as to generate the simulated sound effect.
5. The method of claim 2, wherein predicting the simulated sound data based on at least one of the user input, the environment data of the device to be tuned, and distribution data of at least one sound source included in the device to be tuned comprises:
processing the user input and the environment data of the device to be tuned based on a simulated-volume determination algorithm to determine the simulated volume.
6. A system for remote tuning, comprising:
a prediction module configured to predict simulated sound data based on at least one of a user input, environment data of a device to be tuned, and distribution data of the device to be tuned; and
a simulation module configured to send the simulated sound data to a remote tuning terminal so that the remote tuning terminal plays audio based on the simulated sound data.
7. A computer-readable storage medium storing computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 5.
8. A remote tuning terminal, comprising a speaker array,
wherein the speaker array plays audio based on simulated sound data, and the simulated sound data is determined based on at least one of a user input, environment data of a device to be tuned, and distribution data of the device to be tuned.
9. The remote tuning terminal of claim 8, wherein the simulated sound data comprises a simulated volume and/or a simulated sound effect, and the simulated sound effect comprises at least one of a simulated surround mode, a simulated gain, and a simulated ambient sound.
10. The remote tuning terminal of claim 9, wherein a speaker at a target speaker position in the speaker array plays audio to generate the simulated sound effect.
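As an illustrative (non-claim) sketch, the claimed flow of predicting simulated sound data from a user input, environment data, and source distribution data, and of choosing a target speaker in the remote array, could look like the following. All names, weightings, and the nearest-speaker heuristic are hypothetical, introduced only for illustration; the patent does not disclose a concrete algorithm here.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class TuningInput:
    """Hypothetical container for the inputs named in claim 1."""
    user_input: dict    # e.g. {"volume": 0.8}
    environment: dict   # e.g. {"room_area_m2": 40.0}
    distribution: list  # (x, y) positions of sound sources on the device

def predict_simulated_sound_data(inp: TuningInput) -> dict:
    """Predict simulated sound data (a simulated volume plus a simulated
    sound effect) from the inputs; the weighting is illustrative only."""
    base = inp.user_input.get("volume", 0.5)
    # Assume larger rooms call for a proportionally higher simulated volume.
    room_factor = min(inp.environment.get("room_area_m2", 20.0) / 20.0, 2.0)
    simulated_volume = min(base * room_factor, 1.0)
    # Pick a simulated surround mode from the number of sound sources.
    surround_mode = "multi" if len(inp.distribution) > 2 else "stereo"
    return {"simulated_volume": simulated_volume,
            "simulated_effect": {"surround_mode": surround_mode}}

def target_speaker_index(speaker_positions: list, source_position: tuple) -> int:
    """Choose the speaker in the remote array closest to a sound-source
    position on the device to be tuned (a stand-in for the target-speaker
    determination of claim 4)."""
    distances = [hypot(sx - source_position[0], sy - source_position[1])
                 for sx, sy in speaker_positions]
    return distances.index(min(distances))
```

A remote tuning terminal would then play audio using the returned simulated volume and effect through the speaker at the selected index.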
CN202211017602.9A 2022-08-23 2022-08-23 Remote tuning method and system Active CN115396784B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211017602.9A CN115396784B (en) 2022-08-23 2022-08-23 Remote tuning method and system

Publications (2)

Publication Number Publication Date
CN115396784A (en) 2022-11-25
CN115396784B (en) 2023-12-08

Family

ID=84119790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211017602.9A Active CN115396784B (en) 2022-08-23 2022-08-23 Remote tuning method and system

Country Status (1)

Country Link
CN (1) CN115396784B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080205681A1 (en) * 2005-03-18 2008-08-28 Tonium Ab Hand-Held Computing Device With Built-In Disc-Jockey Functionality
JP2011250243A (en) * 2010-05-28 2011-12-08 Panasonic Corp Sound volume adjustment system and sound volume adjustment method
US20150171813A1 (en) * 2013-12-12 2015-06-18 Aliphcom Compensation for ambient sound signals to facilitate adjustment of an audio volume
CN104966522A (en) * 2015-06-30 2015-10-07 广州酷狗计算机科技有限公司 Sound effect regulation method, cloud server, stereo device and system
CN108628963A (en) * 2018-04-18 2018-10-09 芜湖乐锐思信息咨询有限公司 Audio-video system based on big data technology
CN108873987A (en) * 2018-06-02 2018-11-23 熊冠 A kind of intelligence control system and method for stereo of stage
WO2018236006A1 (en) * 2017-06-19 2018-12-27 이재호 Aoip-based on-site sound center system enabling sound adjustment according to characteristics of site
CN210225734U (en) * 2019-09-23 2020-03-31 宁波中荣声学科技有限公司 Regulating and controlling system for stereo playing of sound box

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116405836A (en) * 2023-06-08 2023-07-07 安徽声讯信息技术有限公司 Microphone tuning method and system based on Internet
CN116405836B (en) * 2023-06-08 2023-09-08 安徽声讯信息技术有限公司 Microphone tuning method and system based on Internet

Also Published As

Publication number Publication date
CN115396784B (en) 2023-12-08

Similar Documents

Publication Publication Date Title
CN102447697B (en) Method and system of semi-private communication in open environments
US20180295463A1 (en) Distributed Audio Capture and Mixing
JP6055657B2 (en) GAME SYSTEM, GAME PROCESSING CONTROL METHOD, GAME DEVICE, AND GAME PROGRAM
CN112185406A (en) Sound processing method, sound processing device, electronic equipment and readable storage medium
CN103366756A (en) Sound signal reception method and device
CN109885162B (en) Vibration method and mobile terminal
KR102115222B1 (en) Electronic device for controlling sound and method for operating thereof
CN105229947A (en) Audio mixer system
CN101843116B (en) Audio module for the acoustic monitoring of a monitoring region, monitoring system for the monitoring region, method for generating a sound environment, and computer program
CN105159066B (en) A kind of intelligent music Room regulation and control method and regulation device
CN115396784B (en) Remote tuning method and system
Küçük et al. Real-time convolutional neural network-based speech source localization on smartphone
Chen et al. Audio-visual embodied navigation
CN107484069A (en) The determination method and device of loudspeaker present position, loudspeaker
GB2582991A (en) Audio generation system and method
WO2019153382A1 (en) Intelligent speaker and playing control method
CN116830605A (en) Apparatus, method and computer program for implementing audio rendering
CN109800724A (en) A kind of loudspeaker position determines method, apparatus, terminal and storage medium
CN117479076A (en) Sound effect adjusting method and device, electronic equipment and storage medium
Falcon Perez Machine-learning-based estimation of room acoustic parameters
WO2021151023A1 (en) System and method of active noise cancellation in open field
KR102065030B1 (en) Control method, apparatus and program of audio tuning system using artificial intelligence model
CN110930991B (en) Far-field speech recognition model training method and device
Dziwis et al. Machine Learning-Based Room Classification for Selecting Binaural Room Impulse Responses in Augmented Reality Applications
KR102650763B1 (en) Psychoacoustic enhancement based on audio source directivity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant