CN116610280A - Dynamic partitioning method and system for distributed audio equipment - Google Patents

Dynamic partitioning method and system for distributed audio equipment

Info

Publication number
CN116610280A
CN116610280A (application CN202310568695.2A)
Authority
CN
China
Prior art keywords
audio
target
partition scheme
data
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310568695.2A
Other languages
Chinese (zh)
Inventor
闫柏燊
夷洪庚
张敬宁
黄培阳
陈洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hansang Nanjing Technology Co ltd
Original Assignee
Hansang Nanjing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hansang Nanjing Technology Co ltd filed Critical Hansang Nanjing Technology Co ltd
Priority to CN202310568695.2A
Publication of CN116610280A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stereophonic System (AREA)

Abstract

The embodiments of this specification provide a dynamic partitioning method and system for distributed audio devices. The method comprises: acquiring spatial distribution data and people stream data of a target area, wherein the spatial distribution data comprises at least one of audio device distribution data and three-dimensional spatial data; determining a target scene effect based on the spatial distribution data and/or the people stream data, and determining a preliminary partition scheme in combination with the target scene effect; and determining audio information to be played, and determining a target partition scheme based on the preliminary partition scheme and the audio information to be played.

Description

Dynamic partitioning method and system for distributed audio equipment
Technical Field
The present disclosure relates to the field of audio devices, and in particular, to a method and system for dynamic partitioning of distributed audio devices.
Background
In scenes such as shopping malls and squares, audio devices are generally installed at fixed positions to play audio that attracts customers. To give customers a better experience, different playing effects can be obtained by playing through the audio devices in partitions. However, because the environmental structures of different malls, squares, and the like differ, and the positions and number of people vary, playing audio with fixed partitions cannot achieve an ideal listening effect.
It is therefore desirable to provide a dynamic partitioning method for distributed audio devices that can dynamically partition devices installed at fixed locations, so as to meet the needs of different scenes and improve the listening effect.
Disclosure of Invention
One or more embodiments of the present specification provide a distributed audio device dynamic partitioning method, the method comprising: acquiring spatial distribution data and people stream data of a target area, wherein the spatial distribution data comprises at least one of audio device distribution data and three-dimensional spatial data; determining a target scene effect based on the spatial distribution data and/or the people stream data, and determining a preliminary partition scheme in combination with the target scene effect; and determining audio information to be played, and determining a target partition scheme based on the preliminary partition scheme and the audio information to be played.
One or more embodiments of the present specification provide a distributed audio device dynamic partitioning system, the system comprising: a data acquisition module configured to acquire spatial distribution data and people stream data of a target area, wherein the spatial distribution data comprises at least one of audio device distribution data and three-dimensional spatial data; a first determining module configured to determine a target scene effect based on the spatial distribution data and/or the people stream data and to determine a preliminary partition scheme in combination with the target scene effect; and a second determining module configured to determine audio information to be played and to determine a target partition scheme based on the preliminary partition scheme and the audio information to be played.
One or more embodiments of the present specification provide a distributed audio device dynamic partitioning apparatus, the apparatus comprising at least one processor and at least one memory; the at least one memory is configured to store computer instructions; the at least one processor is configured to execute at least some of the computer instructions to implement the distributed audio device dynamic partitioning method of any of the above embodiments.
One or more embodiments of the present specification provide a computer-readable storage medium storing computer instructions that, when read by a computer, perform the method of dynamic partitioning of a distributed audio device as in any of the embodiments above.
Drawings
The present specification will be further elucidated by way of example embodiments, which will be described in detail by means of the accompanying drawings. The embodiments are not limiting, in which like numerals represent like structures, wherein:
FIG. 1 is a schematic illustration of an application scenario of a distributed audio device dynamic partitioning system according to some embodiments of the present description;
FIG. 2 is an exemplary block diagram of a distributed audio device dynamic partitioning system according to some embodiments of the present description;
FIG. 3 is an exemplary flow chart of a method of dynamic partitioning of a distributed audio device according to some embodiments of the present description;
FIG. 4A is an exemplary schematic diagram of a soothing mode shown in accordance with some embodiments of the present description;
FIG. 4B is an exemplary schematic diagram of a 5.1 mode shown in accordance with some embodiments of the present description;
FIG. 4C is an exemplary schematic diagram of a surround sound mode shown in accordance with some embodiments of the present description;
FIG. 4D is yet another exemplary schematic diagram of a surround sound mode shown in accordance with some embodiments of the present description;
FIG. 5 is an exemplary schematic diagram of a target scene effect prediction model shown in accordance with some embodiments of the present description;
FIG. 6 is an exemplary diagram illustrating the determination of a target partition scheme according to some embodiments of the present description;
FIG. 7 is an exemplary diagram illustrating yet another determination of a target partition scheme according to some embodiments of the present description;
FIG. 8 is an exemplary diagram of updating the current partition scheme according to some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present specification, the drawings that are required to be used in the description of the embodiments will be briefly described below. It is apparent that the drawings in the following description are only some examples or embodiments of the present specification, and it is possible for those of ordinary skill in the art to apply the present specification to other similar situations according to the drawings without inventive effort. Unless otherwise apparent from the context of the language or otherwise specified, like reference numerals in the figures refer to like structures or operations.
It will be appreciated that "system," "apparatus," "unit," and/or "module" as used herein is one way of distinguishing between different components, elements, parts, portions, or assemblies at different levels. However, other words may be substituted if they achieve the same purpose.
As used in this specification and the claims, the singular forms "a," "an," and "the" may include the plural unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
A flowchart is used in this specification to describe the operations performed by the system according to the embodiments of the present specification. It should be appreciated that these operations are not necessarily performed precisely in order; rather, the steps may be processed in reverse order or simultaneously. Also, other operations may be added to or removed from these processes.
Fig. 1 is a schematic illustration of an application scenario of a distributed audio device dynamic partitioning system according to some embodiments of the present description.
In some embodiments, the application scenario 100 of the distributed audio device dynamic partitioning system may include an audio device 110, a detection device 120, a processing device 130, a network 140, and a storage device 150.
The audio device 110 refers to a hardware device for playing audio. For example, the audio device 110 may include, but is not limited to, a sound box, a power amplifier, and the like. In some embodiments, the audio device 110 may be pre-installed in a fixed location within the target area.
The detection device 120 refers to a device that collects relevant information within a target area. For example, the detection device 120 may include a radar 120-1, an infrared sensor 120-2, a pickup device 120-3, or the like, or any combination thereof. The radar 120-1 can be used for collecting arrangement information, people stream data and the like of different audio devices in a target area; the infrared sensor 120-2 may be used to acquire a thermal imaging profile of a target area to acquire people stream data; the sound pickup apparatus 120-3 may be used to collect sound data of a target area.
The processing device 130 may be used to process information and/or data related to the application scenario 100 of the distributed audio device dynamic partitioning system. Such as location information of the audio device, people stream information, etc. In some embodiments, processing device 130 may process data, information, and/or processing results obtained from other devices or system components and execute program instructions based on such data, information, and/or processing results to perform one or more functions described herein. For example, processing device 130 may obtain information and/or data collected by detection device 120 and determine spatial distribution data and people stream data for the target area based on the information and/or data.
Network 140 may include any suitable network capable of facilitating the exchange of information and/or data of application scenario 100 of a distributed audio device dynamic partitioning system. In some embodiments, one or more components of the application scenario 100 of the distributed audio device dynamic partitioning system (e.g., the audio device 110, the detection device 120, the processing device 130, the storage device 150, etc.) may exchange information and/or data with one or more components of the application scenario 100 of the distributed audio device dynamic partitioning system over the network 140.
Storage device 150 may store data, instructions, and/or any other information. Storage device 150 may include one or more storage components, each of which may be a separate device or may be part of another device. In some embodiments, the storage device 150 may include random access memory (RAM), read-only memory (ROM), removable memory, and the like, or any combination thereof. In some embodiments, the storage device 150 may be connected to the network 140 to enable communication with one or more components in the application scenario 100 of the distributed audio device dynamic partitioning system.
It should be noted that the application scenario is provided for illustrative purposes only and is not intended to limit the scope of the present description. Many modifications and variations will be apparent to those of ordinary skill in the art in light of the present description. For example, the application scenario may also include a database. As another example, application scenarios may be implemented on other devices to implement similar or different functionality. However, variations and modifications do not depart from the scope of the present description.
Fig. 2 is an exemplary block diagram of a distributed audio device dynamic partitioning system according to some embodiments of the present description. As shown in fig. 2, the distributed audio device dynamic partitioning system 200 may include a data acquisition module 210, a first determination module 220, and a second determination module 230.
The data acquisition module 210 may be configured to acquire the spatial distribution data and people stream data of the target area, where the spatial distribution data includes at least one of audio device distribution data and three-dimensional spatial data. For a detailed description of the spatial distribution data and people stream data, see FIG. 3 and its related description.
The first determination module 220 may be configured to determine a target scene effect based on the spatial distribution data and/or the people stream data, and determine a preliminary partitioning scheme in conjunction with the target scene effect. A detailed description of the target scene effect and the preliminary partitioning scheme may be found in fig. 3 and its related description.
In some embodiments, the first determining module 220 is further configured to process the spatial distribution data and/or the people stream data through a target scene effect prediction model, and determine the target scene effect, where the target scene effect prediction model is a machine learning model. For a detailed description of the target scene effect prediction model, see fig. 5 and its associated description.
The second determining module 230 may be configured to determine the audio information to be played, and determine the target partition scheme based on the preliminary partition scheme and the audio information to be played. A detailed description of the audio information to be played and the target partition scheme may be found in fig. 3 and its related description.
In some embodiments, the second determining module 230 is further configured to obtain an audio feature of the target region; based on the audio characteristics, predicting audio information to be played; and determining a target partition scheme based on the preliminary partition scheme and the audio information to be played. For a detailed description of determining a target partition scheme based on the preliminary partition scheme and audio information to be played, see fig. 6 and its related description.
In some embodiments, the second determining module 230 may be further configured to determine candidate audio device distribution information of the preliminary partition scheme according to the constraint condition based on the target scene effect and the audio information to be played; determining at least one candidate partition scheme based on the candidate audio device distribution information; acquiring the playing scene effect of each candidate partition scheme, and judging whether the playing scene effect meets the preset condition; responding to the playing scene effect meeting the preset condition, and determining a candidate partition scheme corresponding to the playing scene effect as a target partition scheme; and in response to the playing scene effect not meeting the preset condition, increasing the number of the audio devices and updating the candidate partition schemes. For a detailed description of determining a target partition scheme, see FIG. 7 and its associated description.
In some embodiments, the distributed audio device dynamic partitioning system 200 may also include a demand determination module 240.
The demand judgment module 240 may be configured to collect current audio data of a current partition scheme of the target area based on the detection device; judging whether the playing scene effect of the current audio data meets the requirement of a user or not; in response to the non-compliance, the current partition scheme is updated. For a detailed description of updating the current partitioning scheme, see fig. 8 and its associated description.
In some embodiments, the requirement determining module 240 may be further configured to obtain a difference value between scene information corresponding to the current partition scheme and scene information when predicting the target scene effect, where the scene information includes a play content type and/or a user feature; and updating the current partition scheme in response to the difference value being greater than the preset difference threshold. For a detailed description of updating the current partitioning scheme based on the difference value and the preset difference threshold, see fig. 8 and its related description.
It should be noted that the above description of the system and its components is for descriptive convenience only and is not intended to limit the present disclosure to the scope of the illustrated embodiments. It will be understood by those skilled in the art that, given the principles of the system, it is possible to combine the individual components arbitrarily or to connect the constituent subsystems with other components without departing from such principles. For example, the data acquisition module 210, the first determination module 220, and the second determination module 230 may be integrated in one component. For another example, each component may share a single storage device, or each component may have a respective storage device. Such variations are within the scope of the present description.
Fig. 3 is an exemplary flow chart of a method of dynamic partitioning of a distributed audio device according to some embodiments of the present description. In some embodiments, the process 300 may be performed by the processing device 130. As shown in fig. 3, flow 300 may include steps 310, 320, and 330.
In step 310, spatial distribution data and people stream data of the target area are acquired.
The target area refers to an area in which the audio device is arranged. For example, the target area may include, but is not limited to, a mall, a square, etc. where the audio device is disposed. For more content on audio devices, see fig. 1 and its associated description.
Spatial distribution data refers to data or information related to the spatial arrangement of the target area. For example, the spatial distribution data may include, but is not limited to, the area and shape of the target area, the distribution locations of the audio devices disposed in the target area, and the like.
In some embodiments, the spatial distribution data may include at least one of audio device distribution data and three-dimensional spatial data.
Audio device distribution data refers to data or information related to the arrangement of audio devices. For example, the audio device distribution data may include, but is not limited to, a distribution location, a distribution density, etc. of the audio devices.
Three-dimensional space data refers to data or information related to the physical structure of the target area. The three-dimensional space data may include the shape, area, and height of the target area, the materials of its walls, and other data in the target area that may affect audio transmission.
In some embodiments, the processing device 130 may obtain the spatial distribution data of the target region in a variety of ways. For example, the processing device 130 may obtain the spatial distribution data of the target region by obtaining user input.
In some embodiments, the processing device 130 may acquire space-related data at intervals (e.g., once a week) through a sensor (e.g., an infrared sensor) disposed on the audio device, and construct the three-dimensional space data of the target area based on the space-related data.
In some embodiments of the present disclosure, infrared radiation data is collected by a sensor disposed on the audio device, and the three-dimensional space data of the target area is drawn based on the infrared radiation data. Thus, when the structural arrangement of the target area changes, the processing device can update the collected three-dimensional space data accordingly; and by reasonably controlling the data collection frequency, unnecessary data calculation and storage are reduced, which is beneficial to energy saving.
People stream data refers to data or information related to the distribution of people in a target area. For example, the people stream data may include, but is not limited to, distribution locations of people in the target area, distribution densities, total number of people, and the like. In some embodiments, the people stream data may also include user features. For more on user features see fig. 8 and its associated description.
In some embodiments, processing device 130 may obtain the people stream data for the target area in a variety of ways. For example, the processing device 130 may obtain the people stream data of the target area through a sensor (e.g., infrared sensor, etc.) disposed on the audio device. For another example, the processing device 130 may collect noise at different locations of the target area through a pickup device disposed on the audio device, and determine the people stream data through software based on the noise at the different locations.
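By way of illustration only, the following Python sketch shows how noise readings from the pickup devices might be mapped to a coarse people-density estimate. The data layout, quiet-floor level, and per-person noise increment are assumptions made for this example, not values from this specification.

```python
from dataclasses import dataclass

@dataclass
class NoiseReading:
    x: float         # pickup-device position in the target area (meters)
    y: float
    level_db: float  # noise level measured at that device

def estimate_people_density(readings: list[NoiseReading],
                            quiet_floor_db: float = 45.0,
                            db_per_person: float = 3.0):
    """Return (x, y, estimated headcount) triples for each reading.

    Assumes crowd noise above a quiet floor grows with headcount at an
    assumed per-person increment; both constants are illustrative only.
    """
    return [(r.x, r.y, max(0.0, r.level_db - quiet_floor_db) / db_per_person)
            for r in readings]
```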
Step 320, determining a target scene effect based on the spatial distribution data and/or the people stream data, and determining a preliminary partition scheme in combination with the target scene effect.
The target scene effect refers to a parameter related to the play effect of audio.
In some embodiments, the target scene effect may include at least one of a soothing mode, a 5.1 mode, a surround sound mode.
The soothing mode refers to a mode in which the playback volume of the activated audio devices is less than a volume threshold (e.g., the mode shown in FIG. 4A). The volume threshold may be an empirical value, a default value, a preset value, or the like.
The 5.1 mode refers to a mode in which the activated audio devices are located at the center and around it, respectively (e.g., the mode shown in FIG. 4B). The specific shape of the 5.1-mode region may vary with the distribution of the activated audio devices.
The surround sound mode refers to a mode in which the activated audio devices surround the people (e.g., the modes shown in FIG. 4C and FIG. 4D).
In FIG. 4A to FIG. 4D, circles represent audio devices disposed in the target area: a shaded circle is an audio device in the activated state, and an unshaded circle is an audio device in the off state.
In some embodiments, processing device 130 may determine the target scene effect based on the spatial distribution data and/or the people stream data in a variety of ways. For example, the processing device 130 may determine the target scene effect through a preset lookup table based on the spatial distribution data and/or the people stream data. The preset lookup table records the target scene effects corresponding to different spatial distribution data and/or people stream data, and may be preset based on prior knowledge or historical data.
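As a minimal sketch of such a preset lookup table, the Python snippet below maps coarse buckets of the input data to a target scene effect. The bucket keys, enum names, and fallback behavior are illustrative assumptions; a real table would be built from prior knowledge or historical data as described above.

```python
from enum import Enum, auto

class TargetSceneEffect(Enum):
    SOOTHING = auto()  # activated devices play below a volume threshold
    MODE_5_1 = auto()  # devices at the center and around it
    SURROUND = auto()  # activated devices encircle the crowd

# Illustrative preset lookup table keyed on (crowd-size band, area type).
PRESET_TABLE = {
    ("small_crowd", "square"): TargetSceneEffect.SOOTHING,
    ("medium_crowd", "mall"):  TargetSceneEffect.MODE_5_1,
    ("large_crowd", "square"): TargetSceneEffect.SURROUND,
}

def lookup_target_effect(crowd_band: str, area_type: str) -> TargetSceneEffect:
    # Fall back to the soothing mode when no entry matches (an assumption).
    return PRESET_TABLE.get((crowd_band, area_type), TargetSceneEffect.SOOTHING)
```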
In some embodiments, the user may preset the target scene effect in advance according to the actual requirement.
In some embodiments, processing device 130 may determine the target scene effect based on the spatial distribution data and/or the people stream data via vector database matching. For example, processing device 130 may construct a first target vector based on the spatial distribution data and/or the people stream data; determining, by the first vector database, a first association vector based on the first target vector; and determining the reference target scene effect corresponding to the first association vector as the target scene effect corresponding to the first target vector.
The first target vector refers to a vector constructed based on the spatial distribution data and/or people stream data. The first target vector may be constructed in a variety of ways. For example, the processing device 130 may input the spatial distribution data and/or people stream data into an embedding layer for processing to obtain the first target vector. In some embodiments, the embedding layer may be obtained by joint training with the target scene effect prediction model. For more on the target scene effect prediction model, see FIG. 5 and its related description.
The first vector database comprises a plurality of first reference vectors, and each first reference vector in the plurality of first reference vectors has a corresponding reference target scene effect.
The first reference vector refers to a vector constructed based on historical spatial distribution data and/or historical people stream data of the target area in a historical time period, and the reference target scene effect corresponding to the first reference vector can be the historical target scene effect of the target area in the historical time period. The construction method of the first reference vector can be referred to as the construction method of the first target vector.
In some embodiments, the processing device 130 may calculate the vector distances between the first target vector and the first reference vectors to determine the target scene effect for the first target vector. For example, a first reference vector whose vector distance from the first target vector satisfies a preset condition is used as the first association vector, and the reference target scene effect corresponding to the first association vector is used as the target scene effect corresponding to the first target vector. The preset condition may be set according to circumstances; for example, it may be that the vector distance is the minimum, or that the vector distance is less than a distance threshold. The vector distance may include, but is not limited to, a cosine distance, a Mahalanobis distance, a Euclidean distance, and the like.
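A minimal sketch of this matching step, assuming the first vector database is held in memory as a list of entries each carrying a `vector` and its reference `scene_effect` (names and the distance threshold are illustrative assumptions):

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """1 minus cosine similarity; smaller means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

def match_target_scene_effect(first_target_vector, first_vector_database,
                              distance_threshold: float = 0.2):
    """Return the reference target scene effect of the closest first
    reference vector, or None if even the best match exceeds the threshold."""
    best = min(first_vector_database,
               key=lambda ref: cosine_distance(first_target_vector, ref["vector"]))
    if cosine_distance(first_target_vector, best["vector"]) <= distance_threshold:
        return best["scene_effect"]
    return None
```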
In some embodiments of the present disclosure, a target scene effect is determined by vector database matching based on spatial distribution data and/or people stream data, which improves accuracy of the determined target scene effect, reduces computation amount, and shortens computation time.
In some embodiments, processing device 130 may process the spatial distribution data and/or the people stream data through a target scene effect prediction model that is a machine learning model to determine a target scene effect. For more on determining the target scene effect by the target scene effect prediction model, see fig. 5 and its related description.
The preliminary partition scheme refers to a partition scheme of the audio devices that are scheduled to be activated. For example, the preliminary partition scheme may include, but is not limited to, the distribution locations, distribution density, etc. of the audio devices to be activated. In the preliminary partition scheme, the partition area needs to contain most of the people in the target area, where the partition area refers to the area enclosed by the activated audio devices. By way of example only, taking the surround sound mode: when there are few people, the processing device 130 may choose to activate audio devices within a small range, as shown in FIG. 4C; when there are many people, the processing device 130 may choose to activate audio devices over a larger range to include most of the people in the partition area, as shown in FIG. 4D.
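The "contain most of the people" requirement can be checked with a simple coverage test. The sketch below approximates the partition area by the bounding box of the activated devices and assumes an 80% coverage requirement; both are simplifications introduced for illustration.

```python
def coverage_fraction(people_xy, active_device_xy):
    """Fraction of people inside the axis-aligned bounding box of the
    activated devices (a simplification of the enclosed partition area)."""
    xs = [x for x, _ in active_device_xy]
    ys = [y for _, y in active_device_xy]
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    inside = sum(1 for x, y in people_xy if x0 <= x <= x1 and y0 <= y <= y1)
    return inside / max(1, len(people_xy))

def covers_most_people(people_xy, active_device_xy, threshold=0.8):
    # "Most of the people" is taken here as an assumed 80% coverage.
    return coverage_fraction(people_xy, active_device_xy) >= threshold
```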
In some embodiments, the processing device 130 may determine the preliminary partitioning scheme based on the target scene effect in a variety of ways. For example, the storage device may preset a correspondence relationship storing different target scene effects and the preliminary partition scheme, and the processing device 130 may access the storage device based on the determined target scene effects, and determine the preliminary partition scheme through the correspondence relationship.
Step 330, determining audio information to be played, and determining a target partition scheme based on the preliminary partition scheme and the audio information to be played.
The audio information to be played refers to information or data related to the audio to be played. For example, the audio information to be played may include, but is not limited to, the type of content to be played (e.g., song, movie, poetry, drama, phase, etc.), channel type (e.g., mono, binaural, etc.), volume size, etc.
In some embodiments, the processing device 130 may determine the audio information to be played in a variety of ways. For example, the audio information to be played may be preset in advance.
In some embodiments, the processing device 130 may obtain audio features of the target region and predict the audio information to be played based on the audio features. For more on predicting the audio information to be played based on the audio features, see FIG. 6 and its related description.
The target partition scheme refers to a partition scheme that determines an active audio device.
In some embodiments, the processing device 130 may determine the target partition scheme based on the preliminary partition scheme and the audio information to be played in a variety of ways. For example, the processing device 130 may determine the target partition scheme through a preset lookup table based on the preliminary partition scheme and the audio information to be played. The preset lookup table records the target partition schemes corresponding to different preliminary partition schemes and audio information to be played.
In some embodiments of the present disclosure, a target scene effect is determined by spatial distribution data and/or people stream data, a preliminary partition scheme is generated, and the target partition scheme is determined in combination with audio information to be played, so that audio devices in different positions are activated for different scene arrangements and people distribution, so that an ideal listening effect is achieved, and meanwhile, waste of resources caused by excessive use of the audio devices is avoided.
It should be noted that the above description of the process 300 is for purposes of example and illustration only and is not intended to limit the scope of applicability of the present disclosure. Various modifications and changes to flow 300 will be apparent to those skilled in the art in light of the present description. However, such modifications and variations are still within the scope of the present description.
FIG. 5 is an exemplary schematic diagram of a target scene effect prediction model shown in accordance with some embodiments of the present description.
In some embodiments, the processing device 130 may process the spatial distribution data 510 and/or the people stream data 520 through the target scene effect prediction model 530 to determine the target scene effect 540.
The target scene effect prediction model may be a machine learning model for determining the target scene effect. The target scene effect prediction model may be a neural network (NN) model or another model, for example a recurrent neural network (RNN) model.
In some embodiments, the input of the target scene effect prediction model 530 may include spatial distribution data 510 and/or people stream data 520; the output may include a target scene effect 540. For more on spatial distribution data, people stream data and target scene effects see fig. 3 and its related description.
In some embodiments, the target scene effect prediction model 530 may be trained from a plurality of first training samples 560-1 with first labels 560-2. For example, the plurality of first training samples 560-1 with first labels 560-2 may be input into the initial target scene effect prediction model 550, a loss function may be constructed from the first labels 560-2 and the results of the initial target scene effect prediction model 550, and the parameters of the initial target scene effect prediction model 550 may be iteratively updated based on the loss function. When the loss function of the initial target scene effect prediction model 550 meets the preset iteration condition, model training is completed, resulting in the trained target scene effect prediction model 530. The preset iteration condition may be that the loss function converges, that the number of iterations reaches a threshold, or the like.
In some embodiments, the first training samples 560-1 may include sample spatial distribution data and/or sample people stream data of a sample target area. The first labels 560-2 may include the sample target scene effect of the sample target area corresponding to the set of first training samples. In some embodiments, the first training samples 560-1 may be obtained based on historical data (e.g., historical spatial distribution data and/or historical people stream data of the target area), and the first labels 560-2 may be obtained from measurements of the target area at the historical time.
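A minimal training sketch consistent with this description, written in PyTorch. The feature dimension, layer sizes, optimizer choice, and three-class output (soothing / 5.1 / surround) are illustrative assumptions, not parameters disclosed by this specification.

```python
import torch
from torch import nn

# Input dim (64) assumes flattened spatial-distribution + people-stream
# features; the 3 outputs correspond to the three target scene effects.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 3))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train(samples: torch.Tensor, labels: torch.Tensor,
          max_iters: int = 10_000, loss_eps: float = 1e-4) -> None:
    """Iteratively update parameters until the loss converges or the
    iteration count reaches its threshold (the preset iteration condition)."""
    prev_loss = float("inf")
    for _ in range(max_iters):
        loss = loss_fn(model(samples), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if abs(prev_loss - loss.item()) < loss_eps:  # convergence check
            break
        prev_loss = loss.item()
```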
In some embodiments of the present disclosure, the target scene effect prediction model is used to process the spatial distribution data and/or the people stream data, so as to determine the target scene effect, and may consider the influence of multiple factors at the same time, so that the determination of the target scene effect is efficient and accurate, and the error of manual determination is avoided.
FIG. 6 is an exemplary diagram illustrating a determination of a target partition scheme according to some embodiments of the present description.
In some embodiments, the processing device 130 may obtain the audio features 610 of the target region; based on the audio features 610, audio information 620 to be played is predicted; based on the preliminary partition scheme 630 and the audio information to be played 620, a target partition scheme 640 is determined. For more on the audio information to be played, the preliminary partition scheme and the target partition scheme, see fig. 3 and the related description thereof.
Audio characteristics refer to parameters or information related to audio.
In some embodiments, the processing device 130 may obtain the audio characteristics of the target region in a variety of ways. For example, the processing device 130 may obtain the audio characteristics of the target region by accessing a storage device. For another example, the processing device 130 may acquire audio features by picking up the sound of the target area through a microphone disposed on the audio device.
In some embodiments, the processing device 130 may predict the audio information to be played based on the audio characteristics in a variety of ways. For example, the processing device 130 may treat the audio features as audio information to be played.
In some embodiments, processing device 130 may determine the audio information to be played through vector database matching based on the audio features. For example, the processing device 130 may construct a second target vector based on the audio features; determine, through a second vector database, a second association vector based on the second target vector; and determine the reference audio information to be played corresponding to the second association vector as the audio information to be played corresponding to the second target vector.
The second target vector may be a vector representation of the audio feature.
The second vector database comprises a plurality of second reference vectors, and each of the plurality of second reference vectors has corresponding reference audio information to be played.
The second reference vector may be a vector representation of a historical audio feature of the target area in the historical time period, and the reference audio information to be played corresponding to the second reference vector may be the historical audio information to be played of the target area in the historical time period.
In some embodiments, the processing device 130 may calculate a vector distance between the second target vector and the second reference vector, respectively, and determine audio information to be played of the second target vector. Regarding the manner of determining the audio information to be played of the second target vector, reference may be made to the manner of determining the target scene effect of the first target vector in fig. 3.
In some embodiments of the present disclosure, audio information to be played is determined by matching a vector database based on audio features of a target area, so that accuracy of the determined audio information to be played is improved, calculation amount is reduced, and calculation time is shortened.
In some embodiments, the processing device 130 may process the audio features through an audio feature prediction model to predict audio information to be played.
The audio feature prediction model may be a machine learning model for predicting audio information to be played. The audio feature prediction model may be a neural network model or other model. Such as a recurrent neural network model, etc.
In some embodiments, the input of the audio feature prediction model may include the type of target region (e.g., mall, square, etc.), people stream data, and audio features; the output may include audio information to be played. For more content on the stream data, audio information to be played, see fig. 3 and its associated description.
In some embodiments, the audio feature prediction model may be trained from a plurality of second training samples with second labels. As for the training manner of the audio feature prediction model, reference may be made to the training manner of the target scene effect prediction model in fig. 5.
In some embodiments, the second training sample may include sample audio features of the sample target region. The second label may include sample to-be-played audio information of a sample target area corresponding to the set of second training samples. In some embodiments, the second training sample and the second tag may be obtained based on historical data (e.g., historical audio characteristics of the target region, historical audio information to be played).
In some embodiments of the present disclosure, based on the type of the target area, the audio feature and the people stream data, the audio information to be played in the target area is predicted by the audio feature prediction model, which can consider the influence of multiple factors, so that the determination of the audio information to be played is more efficient and accurate, and the error of manual determination is avoided.
In some embodiments, the processing device 130 may determine candidate audio device distribution information for the preliminary partition scheme according to constraints based on the target scene effect and the audio information to be played; determining at least one candidate partition scheme based on the candidate audio device distribution information; acquiring the playing scene effect of each candidate partition scheme, and judging whether the playing scene effect meets the preset condition; and determining the candidate partition scheme corresponding to the playing scene effect as a target partition scheme in response to the playing scene effect meeting the preset condition. For more details on determining the target partition scheme, see FIG. 7 and its associated description.
In some embodiments of the present disclosure, audio information to be played in a target area is predicted by using audio features of the target area, so that accuracy of the determined audio information to be played is improved, a target partition scheme determined later is more targeted, and a better listening effect can be achieved.
FIG. 7 is an exemplary diagram illustrating yet another determination of a target partition scheme according to some embodiments of the present description.
In some embodiments, the processing device 130 may determine candidate audio device distribution information 710 for the preliminary partition scheme according to the constraint conditions, based on the target scene effect 540 and the audio information to be played 620; determine at least one candidate partition scheme 720 based on the candidate audio device distribution information 710; and obtain the play scene effect 730 of each candidate partition scheme 720 and perform a judging step 740 based on the play scene effect 730: whether the play scene effect meets the preset condition. In response to the play scene effect 730 satisfying the preset condition, the candidate partition scheme 720 corresponding to the play scene effect 730 is determined as the target partition scheme 640; in response to the play scene effect 730 not satisfying the preset condition, step 740-1 is performed: the number of audio devices is increased to update the candidate partition schemes 720. For more on the target scene effect, the audio information to be played, the preliminary partition scheme, and the target partition scheme, see FIG. 3 and its related description.
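The loop of FIG. 7 can be summarized in the following sketch. Here `predict_effect`, `meets_condition`, and `add_devices` are hypothetical hooks standing in for the play scene effect prediction, the preset-condition check (step 740), and the device-count increase (step 740-1); the retry cap is an added safeguard the specification leaves open.

```python
def determine_target_partition(candidate_schemes, target_effect, predict_effect,
                               meets_condition, add_devices, max_rounds=10):
    """Evaluate candidate partition schemes; if none plays acceptably,
    activate more audio devices and re-evaluate."""
    for _ in range(max_rounds):
        for scheme in candidate_schemes:
            play_effect = predict_effect(scheme)              # play scene effect 730
            if meets_condition(play_effect, target_effect):   # judging step 740
                return scheme                                 # target partition scheme 640
        # Step 740-1: no candidate qualified, so grow each scheme's device set.
        candidate_schemes = [add_devices(s) for s in candidate_schemes]
    return None  # no acceptable scheme found within the retry cap
```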
Candidate audio devices refer to audio devices that satisfy constraints. The candidate audio device distribution information refers to data or information related to the arrangement of the candidate audio devices. Such as the location of the distribution, the density of the distribution, etc. of the candidate audio devices.
Constraint refers to a condition that candidate audio devices need to satisfy.
In some embodiments, the constraint conditions may include constraints of the target scene effect. For example, constraints of the target scene effect may include, but are not limited to, requirements on the shape of the partition area, requirements on the number of candidate audio devices, and the like. For example, if the constraint of the target scene effect is "the target scene effect is the surround sound mode", the candidate audio devices may be distributed as shown in FIG. 4C. For more on the target scene effect and the partition area, see FIG. 3 and its related description.
In some embodiments, the constraints may also include other constraints, such as constraints of the audio information to be played. The constraint condition of the audio information to be played may include a requirement for a sound channel, and the like. For example, if the constraint condition of the audio information to be played is "the channel type is a left-right channel mode", the candidate audio devices may be distributed in a bilateral symmetry manner. For more content on the audio information to be played, see fig. 3 and its associated description.
In some embodiments, the processing device 130 may determine a preset initial scene shape based on the target scene effect; generating a preliminary partition scheme based on the people stream data and a preset initial scene shape; candidate audio devices are determined based on the preliminary partition scheme. For more content on people stream data see fig. 3 and its associated description.
The initial scene shape refers to the shape of the area formed by the audio devices to be activated. The preset initial scene shape refers to the initial scene shape corresponding to a preset target scene effect. For example, if the preset target scene effect is the soothing mode, the corresponding initial scene shape may be the shape shown in FIG. 4A. For more on the preset target scene effect, see FIG. 3 and its related description.
In some embodiments, the processing device 130 may determine the preset initial scene shape based on the preset target scene effect in a variety of ways. For example, the processing device 130 may determine the preset initial scene shape through a preset lookup table based on the preset target scene effect. The preset lookup table records the initial scene shapes corresponding to different target scene effects and may be preset based on prior knowledge or historical data.
In some embodiments, the processing device 130 may generate the preliminary partitioning scheme based on the people stream data and the preset initial scene shape in a variety of ways. For example, the storage device 150 may preset a correspondence relationship storing different initial scene shapes, people stream data, and preliminary partitioning schemes, and the processing device 130 may access the storage device 150 based on the determined initial scene shapes and people stream data, and determine the preliminary partitioning schemes through the correspondence relationship.
In some embodiments, the processing device 130 may determine the candidate audio devices based on the preliminary partition scheme in a variety of ways. For example, the processing device 130 may determine the distribution locations and distribution densities of the candidate audio devices based on the preliminary partition scheme, and determine the candidate audio devices based on the distribution locations and the distribution densities.
In some embodiments of the present disclosure, a preset initial scene shape is determined through the target scene effect, a preliminary partition scheme is generated based on the people stream data and the preset initial scene shape, and the candidate audio devices are determined. This meets the effect requirements of different scenes and considers the influence of personnel distribution, activating audio devices at different positions; it improves the rationality of determining the candidate audio devices and helps achieve an ideal listening effect.
The candidate partition scheme refers to a partition scheme of the candidate audio devices that are scheduled to be activated. For example, the candidate partition scheme may include, but is not limited to, the number of candidate audio devices to be activated, their distribution locations and distribution density, the power of individual audio devices, the sound emission angle, the audio device type, and the like.
In some embodiments, the processing device 130 may determine the candidate partition schemes based on the candidate audio device distribution information in a variety of ways. For example only, taking the left-right channel mode as an example, the processing device 130 may generate a plurality of candidate partition schemes containing candidate audio devices with symmetric distribution locations.
The playback scene effect refers to parameters related to the playback effect when playing audio based on the candidate partition scheme.
In some embodiments, the processing device 130 may determine the play scene effect based on the candidate partition scheme in a variety of ways. For example, the processing device 130 may determine the play scene effect through a preset play scene lookup table based on the candidate partition scheme. The preset play scene lookup table records the play scene effects corresponding to different candidate partition schemes.
In some embodiments, the processing device 130 may process the candidate partition scheme through a play scene effect model to predict a play scene effect.
The playscene effect model may be a machine learning model for predicting playscene effects. The playback scenario effect model may be a neural network model or other model.
In some embodiments, the input of the play scene effect model may include a candidate partitioning scheme; the output may include a play scene effect.
In some embodiments, the playback scene effect model may be trained from a plurality of third training samples with third labels. As for the training manner of the playback scene effect model, reference may be made to the training manner of the target scene effect prediction model in fig. 5.
In some embodiments, the third training samples may include sample candidate partition schemes. The third labels may include the sample play scene effect of the sample candidate partition scheme corresponding to the set of third training samples. In some embodiments, the third training samples and the third labels may be obtained based on historical data.
In some embodiments, the input of the play scene effect model may also include the spatial distribution data, the people stream data, and the audio information to be played. For more on the spatial distribution data, people stream data, and audio information to be played, see FIG. 3 and its related description.
In some embodiments, the third training samples of the play scene effect model may further include sample spatial distribution data, sample people stream data, and sample audio information to be played corresponding to the third training samples.
In some embodiments of the present disclosure, based on the spatial distribution data, people stream data, audio information to be played, and candidate partition schemes, the play scene effect of each candidate partition scheme is predicted by the play scene effect model, which can fully consider the influence of multiple factors to determine a more reasonable play scene effect.
In some embodiments of the present disclosure, the play scene effect corresponding to the candidate partition scheme is determined based on the play scene effect model, so that the play scene effect can be determined efficiently and accurately, and errors of manual determination are avoided.
The preset condition refers to a condition that the effect of the playing scene needs to be met. For example, the preset condition may include whether the play scene effect is the same as or similar to the target scene effect.
In some embodiments, the processing device 130 may determine, as the target partition scheme, the candidate partition scheme corresponding to a play scene effect that satisfies the preset condition. When the play scene effects corresponding to a plurality of candidate partition schemes all meet the preset condition, the processing device 130 may use any one of these candidate partition schemes as the target partition scheme; the processing device 130 may also select, from among the plurality of candidate partition schemes, the one with the fewest activated audio devices as the target partition scheme.
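The fewest-devices tie-break might look like the sketch below; the `active_devices` attribute and the hook callables are assumed names carried over from the earlier sketch, not identifiers from this specification.

```python
def pick_target_scheme(candidate_schemes, target_effect,
                       predict_effect, meets_condition):
    """Among schemes whose play scene effect satisfies the preset
    condition, return the one activating the fewest audio devices."""
    qualified = [s for s in candidate_schemes
                 if meets_condition(predict_effect(s), target_effect)]
    if not qualified:
        return None
    return min(qualified, key=lambda s: len(s.active_devices))
```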
In some embodiments, the processing device 130 may increase the number of audio devices to update the candidate partition schemes in response to the play scene effect not meeting the preset condition, and then execute again the steps of predicting the play scene effect and judging whether the updated candidate partition schemes meet the preset condition.
In some embodiments, the processing device 130 may determine the number of audio devices to increase in a number of ways. For example, the number of audio devices added may be a default value or the like.
In some embodiments of the present disclosure, candidate audio devices are determined according to constraint conditions based on a target scene effect and audio information to be played, at least one candidate partition scheme is generated, and the target partition scheme is determined based on a play scene effect of each candidate partition scheme, so that it is ensured that the finally determined target partition scheme can not only achieve an expected listening effect, but also reduce energy consumption by reducing the number of activated audio devices.
FIG. 8 is an exemplary diagram of updating the current partition scheme according to some embodiments of the present description.
In some embodiments, the processing device 130 may collect the current audio data 810 of the current partition scheme of the target area based on the detection device, and perform a judging step 820 based on the current audio data 810: whether the play scene effect of the current audio data meets the user's requirement. In response to non-compliance, an updating step 830 is performed: the current partition scheme is updated.
The detection device refers to a device that can collect relevant information in a target area. For more information on the detection device see fig. 1 and the associated description.
In some embodiments, the detection device may include, but is not limited to, a sensor disposed on the audio device, a pickup device, and the like.
The current partition scheme refers to the partition scheme of the audio devices at the current time. For example, the current partition scheme may include, but is not limited to, the number of audio devices at the current time, their distribution locations and distribution density, the power of individual audio devices, the sound emission angle, and the like.
The current audio data refers to parameters or information related to audio at the current time. For example, the current audio data may include, but is not limited to, a content type, a channel type, a volume size, etc. of the current audio.
In some embodiments, the processing device 130 may obtain the current audio data of the current partition scheme of the target area based on the detection device in a variety of ways. For example, the processing device 130 may collect current audio data through a sound pickup device disposed on the audio device.
In some embodiments, the processing device 130 may determine whether the play scene effect of the current audio data meets the user's requirement in a variety of ways. For example, the processing device 130 may compare whether the play scene effect is of the same type as the target scene effect required by the user.
In some embodiments, if the playback scenario effect of the current audio data does not meet the user's requirement, the processing device 130 may implement updating the current partition scheme in a variety of ways. For example, processing device 130 may implement updating the current partition scheme by executing flow 300.
In some embodiments, the processing device 130 may obtain a difference value between the scene information corresponding to the current partition scheme and the scene information at the time the audio information to be played was predicted, and update the current partition scheme in response to the difference value being greater than a preset difference threshold. The preset difference threshold may be a default value, an empirical value, a preset value, etc., and may be determined according to actual requirements.
The scene information refers to information related to a target area. For example, the scene information may include the type of target area (e.g., mall, square, etc.), people stream data, audio features, and so forth.
In some embodiments, the scene information may include a play content type and/or user characteristics.
The playback content type refers to the type of audio played by the audio device. For example, the type of content played may include, but is not limited to, poetry, drama, light music, and the like.
User characteristics refer to information related to personnel within a target area. For example, the user characteristics may include, but are not limited to, the number of people in the target area, distribution density, and the like.
In some embodiments, the processing device 130 may obtain the scene information in a variety of ways. For example, the processing device 130 may collect audio data and infrared radiation data by sensors (e.g., pickup devices, infrared sensors, etc.) disposed on the audio device, and determine scene information based on the audio data and the infrared radiation data.
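The sketch below illustrates one way such sensor readings might be assembled into scene information; the dictionary layout and the classify_content callback are illustrative assumptions, not details given in the disclosure.

```python
def derive_scene_info(audio_samples, infrared_counts, classify_content):
    """Hypothetical assembly of scene information: pickup devices supply the
    audio samples, infrared sensors supply per-sensor presence counts."""
    total_people = sum(infrared_counts)
    return {
        # Play content type, e.g., via an assumed audio classifier.
        "play_content_type": classify_content(audio_samples),
        # User characteristics: head count and a crude distribution density
        # (people per reporting sensor).
        "people_count": total_people,
        "distribution_density": total_people / max(len(infrared_counts), 1),
    }
```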
In some embodiments, the processing device 130 may determine, in various manners, whether the difference value between the scene information corresponding to the current partition scheme and the scene information at the time the audio information to be played was predicted is greater than the preset difference threshold. For example, the processing device 130 may check whether the audio type of the current partition scheme and the audio type at the time the audio information to be played was predicted are of the same type; if not, the difference value between the two sets of scene information is greater than the preset difference threshold. For another example, the processing device 130 may calculate the similarity between the scene information corresponding to the current partition scheme and the scene information at the time the audio information to be played was predicted, and determine whether the similarity is below a similarity threshold; if so, the difference value between the two sets of scene information is deemed greater than the preset difference threshold. The similarity threshold may be a default value, a preset value, or the like.
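The two example judgments above can be combined as in the following sketch, operating on dictionaries shaped as in the previous sketch; the field names, the similarity ratio, and the 0.8 threshold are hypothetical choices for illustration, not values from the disclosure.

```python
def difference_exceeds_threshold(current_info: dict, predicted_info: dict,
                                 similarity_threshold: float = 0.8) -> bool:
    """Return True when the scene-information difference value is deemed
    greater than the preset difference threshold."""
    # Judgment 1: different play content types imply the difference value
    # exceeds the preset difference threshold outright.
    if current_info["play_content_type"] != predicted_info["play_content_type"]:
        return True
    # Judgment 2: compare user characteristics via a simple similarity ratio;
    # a similarity below the threshold means the difference is too large.
    cur, pred = current_info["people_count"], predicted_info["people_count"]
    similarity = min(cur, pred) / max(cur, pred) if max(cur, pred) else 1.0
    return similarity < similarity_threshold

# Example: same content type, but the crowd grew from 40 to 120 people,
# so the current partition scheme should be updated.
print(difference_exceeds_threshold(
    {"play_content_type": "light music", "people_count": 120},
    {"play_content_type": "light music", "people_count": 40}))  # True
```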
In some embodiments of the present disclosure, the current partition scheme is updated based on the difference between the scene information corresponding to the current partition scheme and the scene information at the time the audio information to be played was predicted. This helps update the current partition scheme in real time according to different play content types and degrees of user dispersion, so as to determine a partition scheme more in line with the actual situation.
In some embodiments, the processing device 130 may update the current partition scheme in response to the time interval since the last update being greater than a preset time threshold. The preset time threshold may be a default value, a preset value, etc., and may also be determined in other ways.
In some embodiments, the preset time threshold may be set based on different time periods of a particular scene. For example, for a cinema scenario, considering that the flow of people differs greatly across time periods, the operating day from 10:00 to 24:00 may be divided into a plurality of time periods, and a different preset time threshold may be preset for each time period.
In some embodiments, the preset time threshold may be determined based on the people stream data of the target area. For example, when the flow of people in the target area is large, the preset time threshold may be set smaller. As another example, the preset time threshold may be determined based on the type of the target area; for example only, the preset time threshold of a mall may be set smaller than that of a park.
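The following sketch illustrates the time-based trigger using the cinema example above; the period boundaries and threshold values are illustrative assumptions only, not values given in the disclosure.

```python
from datetime import datetime, timedelta

# Hypothetical per-period preset time thresholds for a cinema operating
# from 10:00 to 24:00; shorter thresholds where crowds are expected.
PERIOD_THRESHOLDS = [
    (10, 14, timedelta(minutes=30)),  # late morning: sparse crowd
    (14, 18, timedelta(minutes=20)),
    (18, 22, timedelta(minutes=10)),  # evening peak: dense crowd
    (22, 24, timedelta(minutes=20)),
]

def preset_time_threshold(now: datetime) -> timedelta:
    for start_hour, end_hour, threshold in PERIOD_THRESHOLDS:
        if start_hour <= now.hour < end_hour:
            return threshold
    return timedelta(minutes=30)  # outside operating hours: fall back

def should_update(last_update: datetime, now: datetime) -> bool:
    """Update the current partition scheme when the interval since the last
    update exceeds the preset time threshold for the current period."""
    return now - last_update > preset_time_threshold(now)
```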
In some embodiments of the present disclosure, the preset time threshold is determined according to the different time periods of a specific scene, and the current partition scheme is updated when the interval since the last update exceeds this threshold. Choosing an appropriate preset time threshold facilitates timely updates of the partition scheme across different scenes.
In some embodiments of the present disclosure, the current partition scheme is updated based on a determination of whether the playing scene effect of the current audio data meets the user's requirement, which helps determine a partition scheme better suited to the actual situation and thereby achieve a better listening effect.
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing detailed disclosure is by way of example only and is not intended to be limiting. Although not explicitly stated herein, various modifications, improvements, and adaptations to this specification may occur to those skilled in the art. Such modifications, improvements, and adaptations are intended to be suggested by this specification, and therefore fall within the spirit and scope of the exemplary embodiments of this specification.
Meanwhile, this specification uses specific words to describe its embodiments. Terms such as "one embodiment," "an embodiment," and/or "some embodiments" mean that a particular feature, structure, or characteristic is described in connection with at least one embodiment of this specification. Thus, it should be emphasized and appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various places in this specification do not necessarily refer to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of this specification may be combined as appropriate.
Furthermore, the order of processing elements and sequences, the use of alphanumeric labels, or the use of other designations in this specification is not intended to limit the order of the processes and methods of this specification unless explicitly recited in the claims. While the foregoing disclosure discusses, through various examples, certain embodiments presently considered useful, it is to be understood that such details are for illustration only, and that the appended claims are not limited to the disclosed embodiments; on the contrary, the claims are intended to cover all modifications and equivalent arrangements that fall within the spirit and scope of the embodiments of this specification. For example, although the system components described above may be implemented by hardware devices, they may also be implemented solely by software solutions, such as installing the described system on an existing server or mobile device.
Likewise, it should be noted that, in order to simplify the presentation of this disclosure and thereby aid in the understanding of one or more inventive embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are recited in the claims. Indeed, claimed subject matter may lie in less than all features of a single foregoing disclosed embodiment.
In some embodiments, numbers describing quantities of components and attributes are used. It should be understood that such numbers used in the description of the embodiments are, in some examples, modified by the terms "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that a variation of ±20% in the stated number is allowed. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending upon the desired properties sought by the particular embodiment. In some embodiments, numerical parameters should take into account the specified significant digits and employ ordinary rounding. Although the numerical ranges and parameters used to confirm the breadth of ranges in some embodiments of this specification are approximations, in particular embodiments such numerical values are set as precisely as practicable.
Each patent, patent application, patent application publication, and other material, such as articles, books, specifications, publications, documents, and the like, cited in this specification is hereby incorporated by reference in its entirety, except for application history documents that are inconsistent with or conflict with the content of this specification, and documents (currently or later appended to this specification) that limit the broadest scope of the claims of this specification. It should be noted that if the description, definition, and/or use of a term in material appended to this specification is inconsistent with or conflicts with what is stated in this specification, the description, definition, and/or use of the term in this specification shall prevail.
Finally, it should be understood that the embodiments described in this specification are merely illustrative of the principles of the embodiments of this specification. Other variations may also fall within the scope of this specification. Thus, by way of example and not limitation, alternative configurations of the embodiments of this specification may be regarded as consistent with the teachings of this specification. Accordingly, the embodiments of this specification are not limited to those explicitly described and depicted herein.

Claims (10)

1. A method for dynamic partitioning of a distributed audio device, the method comprising:
acquiring spatial distribution data and people stream data of a target area, wherein the spatial distribution data comprises at least one of audio equipment distribution data and three-dimensional space data;
determining a target scene effect based on the spatial distribution data and/or the people stream data, and determining a preliminary partition scheme by combining the target scene effect;
and determining audio information to be played, and determining a target partition scheme based on the preliminary partition scheme and the audio information to be played.
2. The method of claim 1, wherein the target scene effect comprises at least one of a comfort mode, a 5.1 mode, or a surround sound mode.
3. The method of claim 1, wherein the determining a target scene effect based on the spatial distribution data and/or people stream data comprises:
and processing the spatial distribution data and/or the people stream data through a target scene effect prediction model to determine the target scene effect, wherein the target scene effect prediction model is a machine learning model.
4. The method of claim 1, wherein the determining audio information to be played, and determining a target partition scheme based on the preliminary partition scheme and the audio information to be played, comprises:
acquiring the audio characteristics of the target area;
predicting the audio information to be played based on the audio characteristics;
and determining the target partition scheme based on the preliminary partition scheme and the audio information to be played.
5. The method of claim 4, wherein the determining the target partition scheme based on the preliminary partition scheme and the audio information to be played comprises:
based on the target scene effect and the audio information to be played, determining candidate audio equipment distribution information of the preliminary partition scheme according to constraint conditions;
determining at least one candidate partition scheme based on the candidate audio device distribution information;
obtaining a play scene effect of each candidate partition scheme, and judging whether the play scene effect meets a preset condition or not;
in response to the play scene effect meeting the preset condition, determining a candidate partition scheme corresponding to the play scene effect as the target partition scheme;
and in response to the playing scene effect not meeting the preset condition, increasing the number of audio devices and updating the candidate partition scheme.
6. The method according to claim 1, wherein the method further comprises:
collecting current audio data of a current partition scheme of the target area based on detection equipment;
judging whether the playing scene effect of the current audio data meets the requirement of a user or not;
and in response to non-compliance, updating the current partition scheme.
7. The method of claim 6, wherein the trigger condition for updating the current partition scheme further comprises:
acquiring a difference value of scene information corresponding to the current partition scheme and scene information when audio information to be played is predicted, wherein the scene information comprises a play content type and/or user characteristics;
and updating the current partition scheme in response to the difference value being greater than a preset difference threshold.
8. A distributed audio device dynamic partitioning system, the system comprising:
a data acquisition module, used for acquiring spatial distribution data and people stream data of a target area, wherein the spatial distribution data comprises at least one of audio equipment distribution data and three-dimensional space data;
a first determining module, used for determining a target scene effect based on the spatial distribution data and/or the people stream data, and determining a preliminary partition scheme by combining the target scene effect; and
a second determining module, used for determining audio information to be played, and determining a target partition scheme based on the preliminary partition scheme and the audio information to be played.
9. A distributed audio device dynamic partitioning apparatus, said apparatus comprising at least one processor and at least one memory;
the at least one memory is configured to store computer instructions;
the at least one processor is configured to execute at least some of the computer instructions to implement the method of any one of claims 1 to 7.
10. A computer-readable storage medium storing computer instructions, wherein when a computer reads the computer instructions in the storage medium, the computer performs the method of any one of claims 1 to 7.
CN202310568695.2A 2023-05-18 2023-05-18 Dynamic partitioning method and system for distributed audio equipment Pending CN116610280A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310568695.2A CN116610280A (en) 2023-05-18 2023-05-18 Dynamic partitioning method and system for distributed audio equipment

Publications (1)

Publication Number Publication Date
CN116610280A (en) 2023-08-18

Family

ID=87674133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310568695.2A Pending CN116610280A (en) 2023-05-18 2023-05-18 Dynamic partitioning method and system for distributed audio equipment

Country Status (1)

Country Link
CN (1) CN116610280A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117812504A (en) * 2023-12-29 2024-04-02 恩平市金马士音频设备有限公司 Audio equipment volume data management system and method based on Internet of things


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination