WO2018068654A1 - Scene model dynamic estimation method, data analysis method and apparatus, and electronic device - Google Patents

Scene model dynamic estimation method, data analysis method and apparatus, and electronic device

Info

Publication number
WO2018068654A1
WO2018068654A1 (PCT/CN2017/103988)
Authority
WO
WIPO (PCT)
Prior art keywords
model
moment
sample
distribution
next moment
Prior art date
Application number
PCT/CN2017/103988
Other languages
English (en)
French (fr)
Inventor
王冬陆
田第鸿
Original Assignee
深圳云天励飞技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201610884792.2A (published as CN106502965A)
Priority claimed from CN201610884791.8A (published as CN106503631A)
Application filed by 深圳云天励飞技术有限公司
Publication of WO2018068654A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/17Function evaluation by approximation methods, e.g. inter- or extrapolation, smoothing, least mean square method
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00Details relating to CAD techniques
    • G06F2111/10Numerical modelling

Definitions

  • the invention relates to the field of data processing technologies, and in particular, to a scene model dynamic estimation method, a data analysis method and apparatus, and an electronic device.
  • Gaussian Mixture Models are widely used in different fields such as pattern recognition, computer vision, machine learning, data mining, and bioinformatics. In these fields they are used to perform tasks such as image segmentation, clustering, and the construction of probability density functions.
  • the Gaussian mixture model consists of a number of different Gaussian components.
  • the Expectation Maximization (EM) algorithm is typically used to solve for the parameters of the Gaussian mixture model.
  • in some dynamically changing application scenarios (for example, analyzing a piece of speech or analyzing moving objects), the mixing coefficients of the Gaussian mixture model change with time; a method for dynamically estimating the Gaussian mixture model parameters in such scenarios is therefore urgently needed.
  • methods for dynamically estimating Gaussian mixture model parameters include methods based on sliding windows and moving averages.
  • the main shortcoming of the sliding-window method is that its computation is large and redundant.
  • the calculation of the mixing coefficient for each moment requires the use of data within a certain period of time.
  • the time complexity of the expectation maximization algorithm processing these data is O(n²).
  • a large part of the sliding windows corresponding to time t and time t+1 overlap, so the overlapping data are computed multiple times.
  • the sliding window method does not process the data outside the window. If the window size is small, the sample size will be insufficient. If the window size is large, the assumption that the change of the mixing coefficient is negligible will be violated.
  • the moving average based method needs to know the correspondence between the Gaussian components of the models at different times, which is difficult for the traditional expectation maximization method.
  • the present invention can estimate a dynamically changing scene model and accurately analyze data of a dynamically changing application scenario.
  • a scene model dynamic estimation method comprising:
  • a scene model dynamic estimating device comprising:
  • An acquisition module configured to acquire sample feature data
  • a calculation module configured to perform an initial estimation of the model parameters at the initial moment in the scene model according to the sample feature data, and to calculate the model parameters at the initial moment
  • a determining module configured to determine a model parameter of the initial moment as a model parameter of a current moment
  • the acquiring module is further configured to acquire the observed feature data of the next moment of the current moment;
  • the calculating module is further configured to calculate a model parameter of the next moment according to the model parameter of the current moment and the observed feature data of the next moment;
  • the determining module is further configured to determine the next moment as a current moment
  • an iteration module configured to iteratively continue executing the acquiring module to acquire the observed feature data of the next moment of the current moment, the calculating module to calculate the model parameters of the next moment according to the model parameters of the current moment and the observed feature data of the next moment, and the determining module to determine the next moment as the current moment, until the model parameters of every moment in the scene model have been calculated.
  • An electronic device comprising a memory for storing at least one instruction and a processor for executing the at least one instruction to implement the scene model dynamic estimation method of any of the embodiments.
  • a data analysis method comprising:
  • An electronic device comprising a memory for storing at least one instruction and a processor for executing the at least one instruction to implement the data analysis method in an embodiment.
  • a crowd analysis method comprising:
  • the crowd analysis model is estimated by using the scene model dynamic estimation method of any one of the foregoing embodiments;
  • the user is analyzed according to the frequency with which the user identified by the face feature data appears in each of a plurality of time periods, and an analysis result of the user is obtained.
  • An electronic device comprising a memory for storing at least one instruction and a processor for executing the at least one instruction to implement the data analysis method in an embodiment.
  • the present invention (a) establishes a scene model for describing a dynamically changing scene; (b) acquires sample feature data; (c) performs an initial estimation of the model parameters at the initial moment in the scene model according to the sample feature data and calculates the model parameters at the initial moment; (d) determines the model parameters of the initial moment as the model parameters of the current moment; (e) acquires the observed feature data of the next moment of the current moment; (f) calculates the model parameters of the next moment according to the model parameters of the current moment and the observed feature data of the next moment; (g) determines the next moment as the current moment; and (h) performs (e), (f), (g) iteratively until the model parameters of every moment in the scene model have been calculated.
  • the amount of calculation of the present invention is reduced by an order of magnitude, which increases the speed of operation.
  • the mixing coefficient at each moment is based on the correction of the mixing coefficient at the previous moment, thus making the estimation result of the mixing coefficient in the scene model more stable.
  • the relaxation operation is used to gradually reduce the proportion of the preceding estimates, focusing on recent data, realizing dynamic estimation, and making the results more accurate.
  • the smoothing operation is used to estimate the mixing coefficient, which makes the estimation result of the mixing coefficient in the scene model smoother. Therefore, the present invention accurately analyzes data of dynamically changing application scenarios.
  • FIG. 1 is a flow chart of a preferred embodiment of a method for dynamically estimating a scene model of the present invention.
  • FIG. 2 is a flow chart of a preferred embodiment of the data analysis method of the present invention.
  • Figure 3 is a flow chart of a preferred embodiment of the crowd analysis method of the present invention.
  • FIG. 4 is a functional block diagram of a preferred embodiment of the scene model dynamic estimating apparatus of the present invention.
  • Figure 5 is a functional block diagram of a preferred embodiment of the data analysis device of the present invention.
  • Figure 6 is a functional block diagram of a preferred embodiment of the crowd analysis device of the present invention.
  • Figure 7 is a block diagram showing a preferred embodiment of an electronic device in at least one example of the present invention.
  • FIG. 1 is a flow chart of a preferred embodiment of a method for dynamically estimating a scene model of the present invention.
  • the order of the steps in the flowchart may be changed according to different requirements, and some steps may be omitted.
  • the electronic device establishes a scene model for describing a dynamically changing scene.
  • the characteristics of the samples in the scene may change over time. For example, for a piece of speech data, the likelihood of a phoneme corresponding to each moment is changing. For portrait data observed over a period of time, the frequency of each person's appearance changes over time.
  • the dynamically changing scenario also includes other application scenarios, and is not limited to the above examples.
  • the scene model is composed of Gaussian mixture models at a plurality of times, and the Gaussian mixture model at any one of the plurality of times is expressed as: p(x) = Σ_{k=1}^{K} π_k N(x | μ_k, Σ_k)
  • x represents the characteristic of any sample at any one of the times
  • the sample mean μ_k represents the mean value of the sample features at any one of the moments
  • the sample variance Σ_k represents the degree of change of the sample features at any one of the moments
  • the mixing coefficient π_k represents the weight of the kth Gaussian component in the Gaussian mixture model at any one of the times; in other words, it is the probability that a sample at that moment comes from the kth Gaussian component.
  • the sample represents a phoneme
  • the sample feature represents a pronunciation of a phoneme
  • the sample mean represents a mean of the pronunciation of the phoneme
  • the sample variance represents a degree of change in the pronunciation of the same phoneme
  • the mixing coefficient π_k represents the probability that the phoneme is from the kth Gaussian component at any one time.
  • the sample represents a portrait
  • the sample feature represents a person's appearance feature
  • the sample mean represents a mean value of a person's appearance feature
  • the sample variance represents the degree of difference in the same person's appearance features.
  • the mixing coefficient π_k represents the frequency corresponding to each person at any one time.
  • the model parameters of the scene model include model parameters at a plurality of times.
  • the model parameters at any one of the plurality of times include a sample mean μ_k, a sample variance Σ_k, and a mixing coefficient distribution estimate.
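  • As a minimal illustration of the model just defined, the sketch below evaluates p(x) = Σ_k π_k N(x | μ_k, Σ_k) for a toy mixture; the variable names (weights, means, covs) are illustrative and not from the patent.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_density(x, weights, means, covs):
    """Evaluate p(x) = sum_k pi_k * N(x | mu_k, Sigma_k)."""
    return sum(w * multivariate_normal.pdf(x, mean=mu, cov=cov)
               for w, mu, cov in zip(weights, means, covs))

# Toy two-component mixture; weights are the mixing coefficients pi_k.
weights = [0.3, 0.7]
means = [np.zeros(2), np.ones(2)]
covs = [np.eye(2), 0.5 * np.eye(2)]
print(gmm_density(np.array([0.5, 0.5]), weights, means, covs))
```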
  • the electronic device acquires sample feature data.
  • the sample feature data is extracted from pre-acquired samples and stored in advance in a memory of the electronic device.
  • the larger the number of samples, the greater the confidence in the subsequent estimation of the model parameters of the scene model, and the more accurate the model parameters of the scene model will be.
  • the electronic device performs an initial estimation of the model parameters at the initial moment in the scene model according to the sample feature data, and calculates the model parameters at the initial moment.
  • the mixing coefficients π_k at any one time satisfy Σ_{k=1}^{K} π_k = 1 and 0 ≤ π_k ≤ 1, where K represents the total number of Gaussian components at any one time. Therefore, the electronic device uses a Dirichlet distribution to model the mixing coefficients in the Gaussian mixture model at any one of the times, and obtains the mixing coefficient distribution model at that time, that is, the Dirichlet distribution Dir(π | α_s) at any time s, where α_s is the parameter vector of the Dirichlet distribution at time s and π is the mixing coefficient vector at time s.
  • an initial estimation of the model parameters at the initial moment in the scene model is performed according to the sample feature data, and calculating the model parameters at the initial moment includes:
  • maximum likelihood estimation is used to estimate the samples in the Gaussian mixture model at the initial time, obtaining the mean estimate at the initial time and the variance estimate at the initial time;
  • an initial estimation of the mixing coefficient distribution model is performed using the expectation maximization method, obtaining the mixing coefficient distribution estimate at the initial time, that is, the Dirichlet distribution Dir(π | α_0), where α_0 is the parameter vector of the Dirichlet distribution at the initial time and π is the mixing coefficient vector at the initial time.
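  • A plausible sketch of this initial estimation step, using scikit-learn's EM-based GaussianMixture; the pseudo-count scale c0 that turns the fitted weights into the Dirichlet parameter vector α_0 is an assumption, not part of the patent.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

K = 3
X0 = np.random.randn(1000, 2)                  # stand-in for the sample feature data

gmm = GaussianMixture(n_components=K).fit(X0)  # EM fit: maximum likelihood estimates
mu0, Sigma0 = gmm.means_, gmm.covariances_     # mean and variance at the initial moment

# Initial Dirichlet parameters alpha_0 from the fitted mixing coefficients;
# c0 is a hypothetical confidence (pseudo-count) scale.
c0 = 100.0
alpha0 = c0 * gmm.weights_
```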
  • the electronic device determines the model parameter of the initial moment as a model parameter of a current moment.
  • the electronic device acquires observation feature data of a next moment of the current moment.
  • the current time is represented by t-1, and the next moment of the current time is represented by t.
  • the current time t-1 corresponds to the first second
  • the next moment of the current time corresponds to the second second.
  • the observed feature data is extracted from observation data collected in real time by the acquisition device in the scene.
  • the collecting device may be a camera device; the observation data is the collected face sample data, and the observed feature data is the feature data of the collected face samples.
  • the electronic device calculates a model parameter of the next moment according to the model parameter of the current moment and the observed feature data of the next moment.
  • the calculating the model parameters of the next moment according to the model parameters of the current moment and the observed feature data of the next moment include:
  • the estimation of the mixing coefficient distribution at the initial moment is determined as the prior distribution of the mixing coefficients at the first moment. Subsequently, the prior distribution of the mixing coefficients at the first moment is corrected according to the observed feature data at the first moment.
  • the estimation of the mixing coefficient distribution at each moment is based on the correction of the mixing coefficient distribution estimation at the previous moment, so that the distribution estimation of the mixing coefficient in the scene model is more stable and the calculation result is more accurate.
  • the Bayesian theorem and the conjugate relationship between the multinomial distribution and the Dirichlet distribution are used to calculate the posterior distribution of the mixing coefficients at the next moment, according to the prior distribution of the mixing coefficients at the next moment and the multinomial distribution of the mixing coefficients at the next moment. The calculation of the multinomial distribution of the mixing coefficients at the next moment is detailed later.
  • specifically, when the multinomial distribution Multi(m | π) of the mixing coefficients at the next moment is taken as the likelihood function, the posterior distribution of the mixing coefficients at the next moment is p_t(π | x) = p_t(x | π) p_{t-1}(π) / p_t(x) = Multi(m | π) × Dir(π | α_{t-1}) / p_t(x) = Dir(π | α_{t-1} + m), where m is the parameter vector of the multinomial distribution of the mixing coefficients at the next moment and α_{t-1} is the parameter vector of the Dirichlet distribution at the current moment.
  • the relaxation operation is adopted to gradually reduce the proportion of the preceding estimates, so that the scene model tends to ignore early data and focus on recent data, realizing dynamic estimation and making the results more accurate.
  • the change trend of the mixing coefficient is not assumed.
  • when the observed data are unknown, the values of the components of the mixing coefficients should tend to be equal, so the smoothing operation can be applied while reducing the proportion of the preceding estimates.
  • the estimation result of the mixing coefficient in the scene model can be made smoother.
  • the Bayesian theorem can be used, according to the prior distribution of the mixing coefficients at the next moment and the likelihood function of the mixing coefficients at the next moment, together with the relaxation operation, to calculate the posterior distribution of the mixing coefficients at the next moment.
  • p_t(π | x) = Dir(δ(α_{t-1} + m) + b), where 0 ≤ δ ≤ 1 represents the weight of historical data and b represents the uncertainty of the change of the mixing coefficients π at the next moment. Since the Bayesian theorem is founded on probability theory, using it to estimate the distribution of the mixing coefficients in the scene model is applicable to different application scenarios and generalizes well.
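  • A minimal sketch of this relaxed conjugate update, assuming the multinomial pseudo-counts m have already been computed; the values of δ and b are illustrative, with b taken here as a small positive smoothing constant (an assumption).

```python
import numpy as np

def update_alpha(alpha_prev, m, delta=0.9, b=0.1):
    """Relaxed conjugate update: alpha_t = delta * (alpha_{t-1} + m) + b.

    delta < 1 discounts older evidence; a small positive b (an assumption
    here) pulls the estimate toward equal mixing coefficients.
    """
    return delta * (alpha_prev + m) + b

alpha_prev = np.array([10.0, 5.0, 2.0])  # Dirichlet parameters at moment t-1
m = np.array([3.0, 6.0, 1.0])            # expected component counts at moment t
alpha_t = update_alpha(alpha_prev, m)
print(alpha_t / alpha_t.sum())           # posterior mean of the mixing coefficients
```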
  • the likelihood function of calculating the mixing coefficient of the next moment according to the observed feature data of the next moment includes:
  • the sample mean at any one time is equal to the sample mean at the initial time
  • the sample variance at any one time is equal to the sample variance at the initial time. Of course, other estimation methods (such as the expectation maximization method) may also be used to estimate the sample mean at any one time and the sample variance at any one time.
  • estimating the multinomial distribution Multi(m | π) of the mixing coefficients at the next moment according to the sample mean of the next moment and the sample variance of the next moment includes:
  • m_k represents the kth component of the vector m
  • z_nk represents the latent variable of the nth sample in the observed feature data corresponding to the kth Gaussian component at the next moment.
  • α represents the parameter of the Dirichlet distribution at the next moment
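  • The vector m can be computed from the standard E-step responsibilities; the sketch below follows the patent's simplifying assumption that the means and variances stay fixed at their initial estimates (function and variable names are illustrative).

```python
import numpy as np
from scipy.stats import multivariate_normal

def expected_counts(X, alpha, means, covs):
    """m_k = sum_n E[z_nk | x, alpha]: expected sample count per component."""
    pi = alpha / alpha.sum()                 # prior mean of the mixing coefficients
    resp = np.column_stack([
        pi[k] * multivariate_normal.pdf(X, mean=means[k], cov=covs[k])
        for k in range(len(pi))
    ])
    resp /= resp.sum(axis=1, keepdims=True)  # responsibilities E[z_nk]
    return resp.sum(axis=0)                  # the parameter vector m
```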
  • the electronic device determines the next moment as a current moment.
  • the model parameter at any one of the moments further includes a mixing coefficient at any one time, the method further comprising:
  • the mixing coefficient at any one of the times is determined according to the mixing coefficient distribution estimate at any one of the times.
  • the present invention (a) establishes a scene model for describing a dynamically changing scene; (b) acquires sample feature data; (c) performs an initial estimation of the model parameters at the initial moment in the scene model according to the sample feature data and calculates the model parameters at the initial moment; (d) determines the model parameters of the initial moment as the model parameters of the current moment; (e) acquires the observed feature data of the next moment of the current moment; (f) calculates the model parameters of the next moment according to the model parameters of the current moment and the observed feature data of the next moment; (g) determines the next moment as the current moment; and (h) performs (e), (f), (g) iteratively until the model parameters of every moment in the scene model have been calculated.
  • the amount of calculation of the present invention is reduced by an order of magnitude, which increases the speed of operation.
  • the mixing coefficient at each moment is based on the correction of the mixing coefficient at the previous moment, thus making the estimation result of the mixing coefficient in the scene model more stable.
  • the relaxation operation is used to gradually reduce the proportion of the preceding estimates, focusing on recent data, realizing dynamic estimation, and making the results more accurate.
  • the smoothing operation is used to estimate the mixing coefficient, which makes the estimation result of the mixing coefficient in the scene model smoother.
  • FIG. 2 is a flow chart of a preferred embodiment of the data analysis method of the present invention. The order of the steps in the flowchart may be changed according to different requirements, and some steps may be omitted.
  • the electronic device acquires the collected sample to be tested.
  • the Gaussian mixture model is widely used in different fields such as pattern recognition, computer vision, machine learning, data mining, and bioinformatics.
  • Gaussian mixture models can be used to accomplish different application scenarios such as image segmentation, clustering, and construction of probability density functions.
  • the sample to be tested may be different depending on the application scenario.
  • the sample to be tested may be face data, human voice data, or the like, and the sample to be tested is not limited to the above examples.
  • the electronic device extracts sample feature data to be tested from the collected samples to be tested.
  • the feature data to be tested is extracted from the collected samples to be tested by using feature extraction technology.
  • the feature extraction technique is prior art and will not be described in detail in the present invention.
  • the electronic device calculates, by using the scene model corresponding to the sample feature data to be tested, the probability of the sample feature data to be tested under the corresponding scene model.
  • the scene model corresponding to the sample feature data to be tested is pre-established, and the pre-established scene model is dynamically estimated by using the embodiment shown in FIG. 1 above. This can accurately represent dynamically changing application scenarios, improve the accuracy of tasks in the application scenario, and improve computational efficiency.
  • the electronic device analyzes the sample to be tested according to the probability of the sample feature data to be tested under the corresponding scene model, and obtains an analysis result.
  • an analysis result is obtained by analyzing the sample to be tested in combination with an application scenario.
  • the application scenario is a segmentation of a background model in a motion scene
  • the scenario model represents a background estimation model in a motion scenario
  • the sample feature data to be tested is each pixel point XT at time t
  • the probability of the sample feature data to be tested under the corresponding scene model is the probability that each pixel point XT belongs to the background estimation model, and whether each pixel point matches the background estimation model is determined according to the probability that the pixel point XT belongs to the background estimation model.
  • when a certain pixel point matches the background estimation model, the analysis result is that the pixel point belongs to the background in the motion scene.
  • when a certain pixel point does not match the background estimation model, it may be determined that the analysis result is that the pixel point does not belong to the background in the motion scene, and so on.
  • the present invention acquires the collected sample to be tested, extracts the sample feature data to be tested from it, calculates the probability of the sample feature data to be tested under the corresponding scene model, and analyzes the sample to be tested according to that probability to obtain an analysis result. Therefore, the present invention accurately analyzes data of dynamically changing application scenarios.
  • the face data in the monitoring area changes dynamically with time.
  • the face data in the face recognition system is constantly growing.
  • the actual “resident population” and “loitering personnel” also change with time.
  • the method of clustering the data of a selected time range has high computational complexity and cannot effectively perform analysis of a resident population and the like when the face data change dynamically. Therefore, in order to solve the above problem, the crowd analysis can be performed by the method shown in FIG. 3.
  • FIG. 3 is a flow chart of a preferred embodiment of the crowd analysis method of the present invention.
  • the order of the steps in the flowchart may be changed according to different requirements, and some steps may be omitted.
  • the electronic device acquires a face image in the collected monitoring area.
  • the monitoring area is a human activity area
  • the face image may be one or more, and one face image corresponds to one user.
  • a specific implementation of collecting the face images in the monitoring area may be acquiring large-scale face images by deploying multiple monitoring cameras at different positions in the human activity area. It can be understood that the collected face images in the activity area keep growing, and the people appearing in the activity area also change with time.
  • the electronic device extracts facial feature data from the facial image.
  • the electronic device analyzes the facial feature data based on a crowd analysis model, and calculates an appearance frequency of each of the plurality of time periods of the user of the facial feature data.
  • the crowd analysis model is pre-established.
  • the pre-established crowd analysis model is dynamically estimated using the embodiment shown in FIG. 1 above.
  • the electronic device analyzes the user according to the frequency with which the user of the facial feature data appears in each of the plurality of time periods, and obtains an analysis result of the user.
  • the monitoring area is an office area.
  • if the frequency of occurrence of a user during working hours is less than the preset number of times, the user is determined to be a suspicious person.
  • the manager of the monitoring area is alerted to the user's whereabouts and the like.
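  • A toy sketch of this frequency-based rule; the threshold, the period labels, and the per-period frequency table are illustrative assumptions.

```python
# freq[user][period]: estimated appearance frequency of a user per time period,
# e.g. a mixing coefficient read off the crowd analysis model.
freq = {"user_42": {"morning": 0.01, "afternoon": 0.02}}

WORK_PERIODS = ("morning", "afternoon")
THRESHOLD = 0.05                       # hypothetical preset frequency

def is_suspicious(user: str) -> bool:
    """Flag a user who appears less often than the threshold in work periods."""
    return all(freq[user][p] < THRESHOLD for p in WORK_PERIODS)

print(is_suspicious("user_42"))        # True -> alert the monitoring area's manager
```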
  • the present invention acquires the face images collected in the monitoring area; extracts face feature data from the face images; analyzes the face feature data based on the crowd analysis model to identify the frequency with which the user of the face feature data appears in each of a plurality of time periods; and analyzes the user according to those frequencies, obtaining an analysis result of the user.
  • the scenario model dynamic estimation apparatus 10 includes an establishment module 100, an acquisition module 101, a calculation module 102, a determination module 103, and an iteration module 104.
  • the unit referred to in the present invention refers to a series of computer program segments that can be executed by the processor of the scene model dynamic estimation device 10 and that can perform fixed functions, which are stored in the memory. In the present embodiment, the functions of the respective units will be described in detail in the subsequent embodiments.
  • the establishing module 100 establishes a scene model for describing a dynamically changing scene.
  • the characteristics of the samples in the scene may change over time.
  • the scene model is composed of Gaussian mixture models at a plurality of times, and the Gaussian mixture model at any one of the plurality of times is expressed as: p(x) = Σ_{k=1}^{K} π_k N(x | μ_k, Σ_k)
  • x represents the characteristic of any sample at any one of the times
  • the sample mean μ_k represents the mean value of the sample features at any one of the moments
  • the sample variance Σ_k represents the degree of change of the sample features at any one of the moments
  • the mixing coefficient π_k represents the weight of the kth Gaussian component in the Gaussian mixture model at any one of the times; in other words, it is the probability that a sample at that moment comes from the kth Gaussian component.
  • the model parameters of the scene model include model parameters at a plurality of times.
  • the model parameters at any one of the plurality of times include a sample mean μ_k, a sample variance Σ_k, and a mixing coefficient distribution estimate.
  • the obtaining module 101 acquires sample feature data.
  • the sample feature data is extracted from pre-acquired samples and stored in advance in a memory of the electronic device.
  • the larger the number of samples, the greater the confidence in the subsequent estimation of the model parameters of the scene model, and the more accurate the model parameters of the scene model will be.
  • the calculating module 102 performs an initial estimation of the model parameters at the initial moment in the scene model according to the sample feature data, and calculates the model parameters at the initial moment.
  • the mixing coefficients π_k at any one time satisfy Σ_{k=1}^{K} π_k = 1 and 0 ≤ π_k ≤ 1, where K represents the total number of Gaussian components at any one time. Therefore, the calculation module 102 uses the Dirichlet distribution to model the mixing coefficients in the Gaussian mixture model at any one of the times, and obtains the mixing coefficient distribution model at that time, that is, the Dirichlet distribution Dir(π | α_s), where α_s is the parameter vector of the Dirichlet distribution at time s and π is the mixing coefficient vector at time s.
  • the calculating module 102 performs an initial estimation of the model parameters of the initial moment in the scene model based on the mixing coefficient distribution model at any one of the moments, and calculating the model parameters of the initial moment in the scene model includes:
  • maximum likelihood estimation is used to estimate the samples in the Gaussian mixture model at the initial time, obtaining the mean estimate at the initial time and the variance estimate at the initial time;
  • an initial estimation of the mixing coefficient distribution model is performed using the expectation maximization method, obtaining the mixing coefficient distribution estimate at the initial time, that is, the Dirichlet distribution Dir(π | α_0), where α_0 is the parameter vector of the Dirichlet distribution at the initial time and π is the mixing coefficient vector at the initial time.
  • the determining module 103 determines the model parameter of the initial time as the model parameter of the current time.
  • the obtaining module 101 acquires observation feature data of the next moment of the current moment.
  • the current time is represented by t-1, and the next moment of the current time is represented by t.
  • for example, if the data in the scene are collected once every second and the current time t-1 corresponds to the first second, then the next moment of the current time corresponds to the second second.
  • the observed feature data is extracted from samples collected in real time by the acquisition device in the scene.
  • the calculating module 102 calculates the model parameters of the next moment according to the model parameters of the current moment and the observed feature data of the next moment.
  • the calculation module 102 calculates the model parameters of the next moment according to the model parameters of the current moment and the observed feature data of the next moment, including:
  • the estimation of the mixing coefficient distribution at the initial moment is determined as the prior distribution of the mixing coefficients at the first moment. Subsequently, the prior distribution of the mixing coefficients at the first moment is corrected according to the observed feature data at the first moment.
  • the estimation of the mixing coefficient distribution at each moment is based on the correction of the mixing coefficient distribution estimate at the previous moment, so that the distribution estimate of the mixing coefficients in the scene model is more stable and the calculation result is more accurate.
  • the Bayesian theorem and the conjugate relationship between the multinomial distribution and the Dirichlet distribution are used to calculate the posterior distribution of the mixing coefficients at the next moment, according to the prior distribution of the mixing coefficients at the next moment and the multinomial distribution of the mixing coefficients at the next moment. The calculation of the multinomial distribution of the mixing coefficients at the next moment is detailed later.
  • specifically, when the multinomial distribution Multi(m | π) of the mixing coefficients at the next moment is taken as the likelihood function, the posterior distribution of the mixing coefficients at the next moment is p_t(π | x) = p_t(x | π) p_{t-1}(π) / p_t(x) = Multi(m | π) × Dir(π | α_{t-1}) / p_t(x) = Dir(π | α_{t-1} + m), where m is the parameter vector of the multinomial distribution of the mixing coefficients at the next moment and α_{t-1} represents the parameter vector of the Dirichlet distribution at the current moment.
  • the relaxation operation is adopted to gradually reduce the proportion of the preceding estimates, so that the scene model tends to ignore early data and focus on recent data, realizing dynamic estimation and making the results more accurate.
  • the change trend of the mixing coefficient is not assumed.
  • when the observed data are unknown, the values of the components of the mixing coefficients should tend to be equal, so the smoothing operation can be applied while reducing the proportion of the preceding estimates. This makes the estimation of the mixing coefficients in the scene model smoother.
  • the Bayesian theorem can be used, according to the prior distribution of the mixing coefficients at the next moment and the likelihood function of the mixing coefficients at the next moment, together with the relaxation operation, to calculate the posterior distribution of the mixing coefficients at the next moment.
  • p_t(π | x) = Dir(δ(α_{t-1} + m) + b), where 0 ≤ δ ≤ 1 represents the weight of historical data and b represents the uncertainty of the change of the mixing coefficients π at the next moment. Since the Bayesian theorem is founded on probability theory, using it to estimate the distribution of the mixing coefficients in the scene model is applicable to different application scenarios and generalizes well.
  • the calculating module 102 calculates the likelihood function of the mixing coefficients of the next moment according to the observed feature data of the next moment, which includes:
  • the sample mean at any one time is equal to the sample mean at the initial time
  • the sample variance at any one time is equal to the sample variance at the initial time. Of course, other estimation methods (such as the expectation maximization method) may also be used to estimate the sample mean at any one time and the sample variance at any one time.
  • estimating the multinomial distribution Multi(m | π) of the mixing coefficients at the next moment according to the sample mean of the next moment and the sample variance of the next moment includes:
  • m_k represents the kth component of the vector m
  • z_nk represents the latent variable of the nth sample in the observed feature data corresponding to the kth Gaussian component at the next moment.
  • α represents the parameter of the Dirichlet distribution at the next moment
  • the determining module 103 is further configured to determine the next moment as the current moment.
  • the iteration module 104 iteratively continues executing the acquiring module to acquire the observed feature data of the next moment of the current moment, the calculating module to calculate the model parameters of the next moment according to the model parameters of the current moment and the observed feature data of the next moment, and the determining module to determine the next moment as the current moment, until the model parameters of every moment in the scene model have been calculated.
  • the model parameter at any one of the moments further includes a mixing coefficient at any time
  • the determining module 103 is further configured to:
  • the mixing coefficient at any one of the times is determined according to the mixing coefficient distribution estimate at any one of the times.
  • the determining module 103 determines the mixing coefficients at any one of the moments according to the mixing coefficient distribution estimate at that moment by one or more of the following combinations: sampling the mixing coefficient distribution estimate at that moment and determining the sampled data as the mixing coefficients at that moment; or calculating the set of values that maximizes the mixing coefficient distribution estimate at that moment and determining that set of values as the mixing coefficients at that moment.
  • the present invention (a) establishes a scene model for describing a dynamically changing scene; (b) acquires sample feature data; (c) performs an initial estimation of the model parameters at the initial moment in the scene model according to the sample feature data and calculates the model parameters at the initial moment; (d) determines the model parameters of the initial moment as the model parameters of the current moment; (e) acquires the observed feature data of the next moment of the current moment; (f) calculates the model parameters of the next moment according to the model parameters of the current moment and the observed feature data of the next moment; (g) determines the next moment as the current moment; and (h) performs (e), (f), (g) iteratively until the model parameters of every moment in the scene model have been calculated.
  • the amount of calculation of the present invention is reduced by an order of magnitude, which increases the speed of operation.
  • the mixing coefficient at each moment is based on the correction of the mixing coefficient at the previous moment, thus making the estimation result of the mixing coefficient in the scene model more stable.
  • the relaxation operation is used to gradually reduce the proportion of the preceding estimates, focusing on recent data, realizing dynamic estimation, and making the results more accurate.
  • the smoothing operation is used to estimate the mixing coefficient, which makes the estimation result of the mixing coefficient in the scene model smoother.
  • FIG. 5 is a functional block diagram of a preferred embodiment of the data analysis apparatus of the present invention.
  • the data analysis apparatus 50 includes a data acquiring module 500, a feature extracting module 501, a data calculating module 502, and a result analyzing module 503.
  • the unit referred to in the present invention refers to a series of computer program segments that can be executed by the processor of the data analysis device 50 and that can perform fixed functions, which are stored in the memory. In the present embodiment, the functions of the respective units will be described in detail in the subsequent embodiments.
  • the data acquisition module 500 acquires the collected samples to be tested.
  • the Gaussian mixture model is widely used in different fields such as pattern recognition, computer vision, machine learning, data mining, and bioinformatics.
  • Gaussian mixture models can be used to accomplish different application scenarios such as image segmentation, clustering, and construction of probability density functions.
  • the sample to be tested may be different depending on the application scenario.
  • the sample to be tested may be face data, human voice data, or the like, and the sample to be tested is not limited to the above examples.
  • the feature extraction module 501 extracts sample feature data to be tested from the collected samples to be tested.
  • the feature data to be tested is extracted from the collected samples to be tested by using feature extraction technology.
  • the feature extraction technique is prior art and will not be described in detail in the present invention.
  • the data calculation module 502 calculates, by using the scene model corresponding to the sample feature data to be tested, the probability of the sample feature data to be tested under the corresponding scene model.
  • the scene model corresponding to the sample feature data to be tested is pre-established, and the pre-established scene model is dynamically estimated by using the embodiment shown in FIG. 1 above. This can accurately represent dynamically changing application scenarios, improve the accuracy of tasks in the application scenario, and improve computational efficiency.
  • the result analysis module 503 analyzes the sample to be tested according to the probability of the sample feature data to be tested under the corresponding scene model, and obtains an analysis result.
  • the result analysis module 503 combines the application scenario, analyzes the sample to be tested, and obtains an analysis result.
  • the application scenario is a segmentation of a background model in a motion scene
  • the scenario model represents a background estimation model in a motion scenario
  • the sample feature data to be tested is each pixel point XT at time t
  • the probability of the sample feature data to be tested under the corresponding scene model is the probability that each pixel point XT belongs to the background estimation model, and whether each pixel point matches the background estimation model is determined according to the probability that the pixel point XT belongs to the background estimation model.
  • the present invention acquires the collected sample to be tested, extracts the sample feature data to be tested from it, calculates the probability of the sample feature data to be tested under the corresponding scene model, and analyzes the sample to be tested according to that probability to obtain an analysis result. Therefore, the present invention accurately analyzes data of dynamically changing application scenarios.
  • the face data in the monitoring area changes dynamically with time.
  • the face data in the face recognition system is constantly growing.
  • the actual “resident population” and “loitering personnel” also change with time.
  • the method of clustering data for a selected time range has high computational complexity, and it is not possible to effectively perform analysis of a resident population and the like in the case of dynamic changes in face data.
  • the crowd analysis device 60 includes an image acquisition module 601, a data extraction module 602, a frequency calculation module 603, and a data analysis module 604.
  • the unit referred to in the present invention refers to a series of computer program segments that can be executed by the processor of the crowd analysis device 60 and that can perform fixed functions, which are stored in the memory. In the present embodiment, the functions of the respective units will be described in detail in the subsequent embodiments.
  • the image acquisition module 601 acquires a face image in the collected monitoring area.
  • the monitoring area is a human activity area
  • the face image may be one or more, and one face image corresponds to one user.
  • a specific implementation of collecting the face images in the monitoring area may be acquiring large-scale face images by deploying multiple monitoring cameras at different positions in the human activity area. It can be understood that the collected face images in the activity area keep growing, and the people appearing in the activity area also change with time.
  • the data extraction module 602 extracts facial feature data from the face image.
  • the frequency calculation module 603 analyzes the facial feature data based on the crowd analysis model, and calculates the frequency with which the user of the facial feature data appears in each of the plurality of time periods.
  • the crowd analysis model is pre-established.
  • the pre-established crowd analysis model is dynamically estimated using the embodiment shown in FIG. 1 above.
  • the data analysis module 604 analyzes the user according to the frequency with which the user of the facial feature data appears in each of the plurality of time periods, and obtains an analysis result of the user.
  • for example, the monitoring area is an office area, and if the frequency of occurrence of a user during working hours is less than the preset number of times, the user is determined to be a suspicious individual.
  • the manager of the monitoring area is alerted to the user's whereabouts and the like.
  • the present invention acquires the face images collected in the monitoring area; extracts face feature data from the face images; analyzes the face feature data based on the crowd analysis model to identify the frequency with which the user of the face feature data appears in each of a plurality of time periods; and analyzes the user according to those frequencies, obtaining an analysis result of the user.
  • the above-described integrated unit implemented in the form of a software function module can be stored in a computer readable storage medium.
  • the above software functional modules are stored in a storage medium and include instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to perform part of the steps of the method of each embodiment of the present invention.
  • the electronic device 1 includes at least one transmitting device 31, at least one memory 32, at least one processor 33, at least one receiving device 34, at least one display (not shown), and at least one communication bus.
  • the communication bus is used to implement connection communication between these components.
  • the electronic device 1 is a device capable of automatically performing numerical calculation and/or information processing according to instructions set or stored in advance, and its hardware includes but is not limited to a microprocessor, an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), embedded devices, etc.
  • the electronic device 1 may also comprise a network device and/or a user device.
  • the network device includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of hosts or network servers based on cloud computing, where cloud computing is a form of distributed computing: a super virtual computer consisting of a group of loosely coupled computers.
  • the electronic device 1 may be, but is not limited to, any electronic product that can interact with a user through a keyboard, a touch pad, or a voice control device, such as a tablet, a smart phone, a personal digital assistant (PDA), a smart wearable device, camera equipment, monitoring equipment, and other terminals.
  • the network in which the electronic device 1 is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (VPN), and the like.
  • the receiving device 34 and the transmitting device 31 may be wired transmission ports, or may be wireless devices, for example, including antenna devices, for performing data communication with other devices.
  • the memory 32 is used to store program code.
  • the memory 32 may be a circuit having a storage function, such as a RAM (Random-Access Memory), a FIFO (First In First Out), or the like, which has no physical form in the integrated circuit.
  • the memory 32 may also be a memory having a physical form, such as a memory stick, a TF card (Trans-flash Card), a smart media card, a secure digital card, a flash card, and other storage devices.
  • the processor 33 can include one or more microprocessors and digital processors.
  • the processor 33 can call program code stored in the memory 32 to perform related functions. For example, each unit described in FIG. 3 is program code stored in the memory 32 and executed by the processor 33 to implement a scene model dynamic estimation method, a data analysis method, and a Crowd analysis method.
  • the processor 33, also known as a central processing unit (CPU), is an ultra-large-scale integrated circuit that serves as the computing core (Core) and control unit (Control Unit).
  • the embodiment of the present invention further provides a computer readable storage medium having computer instructions stored thereon which, when executed by an electronic device including one or more processors, cause the electronic device to perform the scene model dynamic estimation method, the data analysis method, and the crowd analysis method of the method embodiments described above.
  • modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional module in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software function modules.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computational Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Evolutionary Computation (AREA)
  • Algebra (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Complex Calculations (AREA)

Abstract

A scene model dynamic estimation method and apparatus, the method comprising: (a) establishing a scene model for describing a dynamically changing scene (S10); (b) acquiring sample feature data (S11); (c) performing an initial estimation of the model parameters at the initial moment in the scene model according to the sample feature data, and calculating the model parameters at the initial moment (S12); (d) determining the model parameters of the initial moment as the model parameters of the current moment (S13); (e) acquiring the observed feature data of the next moment of the current moment (S14); (f) calculating the model parameters of the next moment according to the model parameters of the current moment and the observed feature data of the next moment (S15); (g) determining the next moment as the current moment (S16); (h) performing (e), (f), (g) iteratively until the model parameters of every moment in the scene model have been calculated (S17). The method increases the operation speed and makes the results more stable and smoother.

Description

Scene model dynamic estimation method, data analysis method and apparatus, and electronic device
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on October 10, 2016, with application number 201610884792.2 and the invention title "Method and computer device for dynamically estimating the mixing coefficients of a Gaussian mixture model", the entire contents of which are incorporated herein by reference; claims priority to the Chinese patent application filed with the Chinese Patent Office on October 10, 2016, with application number 201610884791.8 and the invention title "Crowd analysis method and computer device", the entire contents of which are incorporated herein by reference; and claims priority to the Chinese patent application filed with the Chinese Patent Office on August 23, 2017, with application number 201710727993.6 and the invention title "Scene model dynamic estimation method, data analysis method and apparatus, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a scene model dynamic estimation method, a data analysis method and apparatus, and an electronic device.
Background
Gaussian Mixture Models (GMM) are widely used in different fields such as pattern recognition, computer vision, machine learning, data mining, and bioinformatics. In these fields they are used to accomplish tasks such as image segmentation, clustering, and the construction of probability density functions. A Gaussian mixture model is composed of a number of different Gaussian components.
The Expectation Maximization (EM) algorithm is usually used to solve for the parameters of a Gaussian mixture model. In some dynamically changing application scenarios (a speech scenario, a scenario in which moving objects are analyzed, and so on), the mixture coefficients of the Gaussian mixture model change over time; therefore, a method for dynamically estimating Gaussian mixture model parameters in dynamically changing application scenarios is urgently needed.
At present, methods for dynamically estimating Gaussian mixture model parameters include methods based on sliding windows and on moving averages. The main shortcoming of the sliding-window method is that its computation is large and redundant: computing the mixing coefficients at every moment requires the data within a time window of a certain length, and the time complexity of processing these data with the expectation maximization algorithm is O(n²). Furthermore, the sliding windows corresponding to moment t and moment t+1 largely overlap, so the overlapping data are computed multiple times. Meanwhile, the sliding-window method does not process data outside the window: if the window is small, the sample size is insufficient; if the window is large, the assumption that changes in the mixing coefficients are negligible is violated. In addition, the moving-average method needs to know the correspondence between the Gaussian components of the models at different moments, which is difficult for the traditional expectation maximization method.
Summary of the Invention
In view of the above, it is necessary to provide a scene model dynamic estimation method, a data analysis method and apparatus, and an electronic device, with which the present invention can estimate a dynamically changing scene model and accurately analyze the data of dynamically changing application scenarios.
A scene model dynamic estimation method, the method comprising:
(a) establishing a scene model for describing a dynamically changing scene;
(b) acquiring sample feature data;
(c) performing an initial estimation of the model parameters at the initial moment in the scene model according to the sample feature data, and calculating the model parameters at the initial moment;
(d) determining the model parameters of the initial moment as the model parameters of the current moment;
(e) acquiring the observed feature data of the next moment of the current moment;
(f) calculating the model parameters of the next moment according to the model parameters of the current moment and the observed feature data of the next moment;
(g) determining the next moment as the current moment;
(h) performing (e), (f), (g) by an iterative method until the model parameters of every moment in the scene model have been calculated.
A scene model dynamic estimation apparatus, the apparatus comprising:
an establishing module configured to establish a scene model for describing a dynamically changing scene;
an acquiring module configured to acquire sample feature data;
a calculating module configured to perform an initial estimation of the model parameters at the initial moment in the scene model according to the sample feature data, and to calculate the model parameters at the initial moment;
a determining module configured to determine the model parameters of the initial moment as the model parameters of the current moment;
the acquiring module being further configured to acquire the observed feature data of the next moment of the current moment;
the calculating module being further configured to calculate the model parameters of the next moment according to the model parameters of the current moment and the observed feature data of the next moment;
the determining module being further configured to determine the next moment as the current moment;
an iteration module configured to iteratively continue executing the acquiring module to acquire the observed feature data of the next moment of the current moment, the calculating module to calculate the model parameters of the next moment according to the model parameters of the current moment and the observed feature data of the next moment, and the determining module to determine the next moment as the current moment, until the model parameters of every moment in the scene model have been calculated.
An electronic device comprising a memory for storing at least one instruction and a processor for executing the at least one instruction to implement the scene model dynamic estimation method of any of the embodiments.
A data analysis method, the method comprising:
acquiring a collected sample to be tested;
extracting sample feature data to be tested from the collected sample to be tested;
calculating the probability corresponding to the sample feature data to be tested by using the scene model corresponding to the sample feature data to be tested, the scene model corresponding to the sample feature data to be tested being estimated by the scene model dynamic estimation method of any one of the above embodiments;
analyzing the sample to be tested according to the probability corresponding to the sample feature data to be tested, and obtaining an analysis result.
An electronic device comprising a memory for storing at least one instruction and a processor for executing the at least one instruction to implement the data analysis method in an embodiment.
A crowd analysis method, the method comprising:
acquiring face images collected in a monitoring area;
extracting face feature data from the face images;
analyzing the face feature data based on a crowd analysis model, and identifying the frequency with which the user of the face feature data appears in each of a plurality of time periods, the crowd analysis model being estimated by the scene model dynamic estimation method of any one of the above embodiments;
analyzing the user according to the frequency with which the user of the face feature data appears in each of the plurality of time periods, and obtaining an analysis result of the user.
An electronic device comprising a memory for storing at least one instruction and a processor for executing the at least one instruction to implement the data analysis method in an embodiment.
It can be seen from the above technical solutions that the present invention (a) establishes a scene model for describing a dynamically changing scene; (b) acquires sample feature data; (c) performs an initial estimation of the model parameters at the initial moment in the scene model according to the sample feature data and calculates the model parameters at the initial moment; (d) determines the model parameters of the initial moment as the model parameters of the current moment; (e) acquires the observed feature data of the next moment of the current moment; (f) calculates the model parameters of the next moment according to the model parameters of the current moment and the observed feature data of the next moment; (g) determines the next moment as the current moment; and (h) performs (e), (f), (g) iteratively until the model parameters of every moment in the scene model have been calculated. The amount of calculation of the present invention is reduced by an order of magnitude, which increases the operation speed. Moreover, the mixing coefficients at each moment are corrections based on the mixing coefficients at the previous moment, which makes the estimation results of the mixing coefficients in the scene model more stable. In addition, a relaxation operation is used to gradually reduce the proportion of the preceding estimates, focusing on recent data, realizing dynamic estimation, and making the results more accurate. A smoothing operation is used in estimating the mixing coefficients, which makes the estimation results of the mixing coefficients in the scene model smoother. Therefore, the present invention accurately analyzes the data of dynamically changing application scenarios.
Brief Description of the Drawings
In order to describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely embodiments of the present invention, and a person of ordinary skill in the art may obtain other drawings from the provided drawings without creative effort.
FIG. 1 is a flow chart of a preferred embodiment of the scene model dynamic estimation method of the present invention.
FIG. 2 is a flow chart of a preferred embodiment of the data analysis method of the present invention.
FIG. 3 is a flow chart of a preferred embodiment of the crowd analysis method of the present invention.
FIG. 4 is a functional block diagram of a preferred embodiment of the scene model dynamic estimation apparatus of the present invention.
FIG. 5 is a functional block diagram of a preferred embodiment of the data analysis apparatus of the present invention.
FIG. 6 is a functional block diagram of a preferred embodiment of the crowd analysis apparatus of the present invention.
FIG. 7 is a schematic structural diagram of a preferred embodiment of an electronic device in at least one example of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
To make the above objects, features, and advantages of the present invention clearer and easier to understand, the present invention is described in further detail below with reference to the drawings and specific embodiments.
As shown in FIG. 1, which is a flow chart of a preferred embodiment of the scene model dynamic estimation method of the present invention, the order of the steps in the flowchart may be changed according to different requirements, and some steps may be omitted.
S10: the electronic device establishes a scene model for describing a dynamically changing scene.
In the embodiments of the present invention, in the dynamically changing scene, the features of the samples in the scene change over time. For example, for a piece of speech data, the likelihood of the phoneme corresponding to each moment changes. For portrait data observed within a time period, the frequency with which each person appears changes over time. Of course, the dynamically changing scene also includes other application scenarios and is not limited to the above examples.
The scene model is composed of Gaussian mixture models at a plurality of moments, and the Gaussian mixture model at any one of the moments is expressed as:
p(x) = Σ_{k=1}^{K} π_k N(x | μ_k, Σ_k)
where x represents the feature of any sample at the moment, the sample mean μ_k represents the mean of the sample features at the moment, the sample variance Σ_k represents the degree of variation of the sample features at the moment, and the mixing coefficient π_k represents the weight of the kth Gaussian component in the Gaussian mixture model at the moment; in other words, it represents the probability that a sample at the moment comes from the kth Gaussian component.
For example, for a piece of speech data, the sample represents a phoneme, the sample feature represents the pronunciation of the phoneme, the sample mean represents the mean of the pronunciations of the phoneme, the sample variance represents the degree of variation in the pronunciation of the same phoneme, and the mixing coefficient π_k represents the probability that a phoneme comes from the kth Gaussian component at any one moment.
As another example, for a piece of captured portrait data, the sample represents a portrait, the sample feature represents a person's appearance features, the sample mean represents the mean of a person's appearance features, the sample variance represents the degree of difference in the same person's appearance features, and the mixing coefficient π_k represents the frequency corresponding to each person at any one moment.
In the embodiments of the present invention, the model parameters of the scene model include the model parameters at a plurality of moments. The model parameters at any one of the moments include the sample mean μ_k, the sample variance Σ_k, and the mixing coefficient distribution estimate.
S11: the electronic device acquires sample feature data.
In a preferred embodiment, the sample feature data are extracted from pre-collected samples and stored in advance in the memory of the electronic device. The larger the number of samples, the greater the confidence of the subsequent estimation of the model parameters of the scene model, and the more accurate the model parameters of the scene model will be.
S12: the electronic device performs an initial estimation of the model parameters at the initial moment in the scene model according to the sample feature data, and calculates the model parameters at the initial moment.
In a preferred embodiment, the mixing coefficients π_k at any one moment satisfy Σ_{k=1}^{K} π_k = 1 and 0 ≤ π_k ≤ 1, where K represents the total number of Gaussian components at that moment. Therefore, the electronic device uses a Dirichlet distribution to model the mixing coefficients in the Gaussian mixture model at any one moment, obtaining the mixing coefficient distribution model at that moment, that is, the Dirichlet distribution Dir(π | α_s) at any moment s, where α_s is the parameter vector of the Dirichlet distribution at moment s, and π represents the mixing coefficient vector at moment s.
Based on the mixing coefficient distribution model at any one moment, performing an initial estimation of the model parameters at the initial moment in the scene model according to the sample feature data, and calculating the model parameters at the initial moment, includes:
based on the sample feature data, estimating the samples in the Gaussian mixture model at the initial moment by maximum likelihood estimation, obtaining the mean estimate at the initial moment and the variance estimate at the initial moment;
based on the sample feature data, performing an initial estimation of the mixing coefficient distribution model by the expectation maximization method, obtaining the mixing coefficient distribution estimate at the initial moment, that is, the Dirichlet distribution Dir(π | α_0), where α_0 is the parameter vector of the Dirichlet distribution at the initial moment, and π represents the mixing coefficient vector at the initial moment.
S13: the electronic device determines the model parameters of the initial moment as the model parameters of the current moment.
S14: the electronic device acquires the observed feature data of the next moment of the current moment.
In the embodiments of the present invention, the current moment is denoted t-1, and the next moment of the current moment is denoted t. For example, if the data in the scene are collected once per second and the current moment t-1 corresponds to the first second, the next moment of the current moment corresponds to the second second.
The observed feature data are extracted from the observation data collected in real time by the acquisition device in the scene. For example, when the scene model is used to describe the crowd in an area for crowd analysis, the acquisition device may be a camera, the observation data are the collected face sample data, and the observed feature data are the feature data of the collected face samples.
S15: the electronic device calculates the model parameters at the next moment according to the model parameters at the current moment and the observed feature data at the next moment.
In a preferred embodiment, calculating the model parameters at the next moment according to the model parameters at the current moment and the observed feature data at the next moment includes:
(a1) determining the mixing-coefficient distribution estimate at the current moment as the prior distribution of the mixing coefficients at the next moment, denoted p_{t-1}(π).
For example, the mixing-coefficient distribution estimate at the initial moment is determined as the prior distribution of the mixing coefficients at the first moment; the prior is subsequently corrected according to the observed feature data at the first moment.
(a2) calculating, according to the observed feature data at the next moment, the likelihood function of the mixing coefficients at the next moment, denoted p_t(x|π).
(a3) calculating, according to the prior distribution and the likelihood function of the mixing coefficients at the next moment, the posterior distribution of the mixing coefficients at the next moment by Bayes' theorem: p_t(π|x) = p_t(x|π) p_{t-1}(π) / p_t(x).
As the above preferred embodiment shows, the mixing-coefficient distribution estimate at each moment is a correction of that at the previous moment, which makes the distribution estimate of the mixing coefficients in the scene model more stable and the calculation results more accurate.
In a preferred embodiment, the posterior distribution of the mixing coefficients at the next moment is calculated from the prior distribution and the multinomial distribution of the mixing coefficients at the next moment, using Bayes' theorem and the conjugacy between the multinomial distribution and the Dirichlet distribution. The calculation of the multinomial distribution of the mixing coefficients at the next moment is detailed later.
Specifically, when the multinomial distribution Multi(m|π) of the mixing coefficients at the next moment is determined as the likelihood function of the mixing coefficients at the next moment, by Bayes' theorem and the conjugacy between the multinomial and Dirichlet distributions, the posterior distribution of the mixing coefficients at the next moment is:
p_t(π|x) = p_t(x|π) p_{t-1}(π) / p_t(x) = Multi(m|π) × Dir(π|α_{t-1}) / p_t(x) = Dir(π|α_{t-1}+m),
where Dir(π|α_{t-1}+m) denotes a Dirichlet distribution, m is the parameter vector of the multinomial distribution of the mixing coefficients at the next moment, and α_{t-1} denotes the parameter vector of the Dirichlet distribution at the current moment.
As the number of observed samples increases over time, the confidence of the parameter estimates also keeps increasing. To accommodate mixing coefficients that change over time, a relaxation operation is adopted to gradually reduce the weight of earlier estimates, so that the scene model tends to discount early data and emphasize recent data, achieving dynamic estimation and more accurate results. In addition, no assumption is made about the trend of the mixing coefficients; in the absence of observed data, the components of the mixing coefficients should tend towards equality, so a smoothing operation may also be adopted while reducing the weight of earlier estimates, which makes the estimated mixing coefficients in the scene model smoother.
Therefore, the posterior distribution of the mixing coefficients at the next moment may be calculated from the prior distribution and the likelihood function of the mixing coefficients at the next moment, by Bayes' theorem with a relaxation operation: p_t(π|x) = Dir(δ(α_{t-1}+m)+b). Since Bayes' theorem rests on the theory of probability, estimating the mixing-coefficient distribution in the scene model by Bayes' theorem is applicable to different application scenes and generalizes well.
Here 0 ≤ δ ≤ 1, where δ denotes the weight given to historical data, and b ≥ 0 denotes the uncertainty in the change of the mixing coefficients π at the next moment.
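Under these definitions the per-moment update of the Dirichlet parameters collapses to one line, sketched below; it assumes m has already been computed from the next moment's observations (see the multinomial-parameter sketch later), and the default values of delta and b are illustrative only:

    import numpy as np

    def update_alpha(alpha_prev, m, delta=0.9, b=0.01):
        # Conjugate Bayes update Dir(pi | alpha_{t-1} + m), followed by
        # relaxation (delta down-weights history) and smoothing (b pulls the
        # components towards equality): alpha_t = delta*(alpha_{t-1} + m) + b.
        return delta * (np.asarray(alpha_prev) + np.asarray(m)) + b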
(a4) determining the posterior distribution of the mixing coefficients at the next moment as the mixing-coefficient distribution estimate at the next moment.
In a preferred embodiment, calculating the likelihood function of the mixing coefficients at the next moment according to the observed feature data at the next moment includes:
(a21) determining the sample mean and the sample variance at the next moment.
Preferably, to simplify the calculation, the sample mean at any moment equals the sample mean at the initial moment, and the sample variance at any moment equals the sample variance at the initial moment. Other estimation methods (such as expectation maximization) may of course be used to estimate the sample mean and the sample variance at any moment.
(a22) estimating the multinomial distribution Multi(m|π) of the mixing coefficients at the next moment according to the sample mean and the sample variance at the next moment.
(a23) determining the multinomial distribution of the mixing coefficients at the next moment as the likelihood function of the mixing coefficients at the next moment.
In a preferred embodiment, estimating the multinomial distribution Multi(m|π) of the mixing coefficients at the next moment according to the sample mean and the sample variance at the next moment includes:
(a221) calculating, according to the sample mean at the next moment, the sample variance at the next moment and the prior distribution of the mixing coefficients at the next moment, the expected value E[z|x, α] of the latent variable z of the observed feature data at the next moment, where α denotes the parameters of the Dirichlet distribution at the next moment, and the latent variable denotes the degree to which each sample at the next moment belongs to each Gaussian distribution in the Gaussian mixture model at the next moment.
(a222) calculating the parameters of the multinomial distribution of the mixing coefficients at the next moment according to the expected value of the latent variable of the observed feature data at the next moment.
Specifically, the parameter m_k of the multinomial distribution Multi(m|π) of the mixing coefficients at the next moment is calculated as (see also the sketch following these steps):
m_k = Σ_{n=1}^{N} E[z_{nk} | x, α],
where m_k denotes the k-th component of the vector m, and z_{nk} denotes the latent variable of the n-th sample of the observed feature data for the k-th Gaussian component at the next moment.
(a223) estimating the multinomial distribution of the mixing coefficients at the next moment based on the parameters of that multinomial distribution.
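One plausible reading of steps (a221) and (a222), sketched below: the latent-variable expectations are the usual GMM responsibilities, computed with the prior mean E[π_k] = α_k / Σ_j α_j in place of a point estimate of π, and m is their sum over the samples. The use of the Dirichlet mean here is an assumption of the sketch:

    import numpy as np
    from scipy.stats import multivariate_normal

    def multinomial_parameters(X, mu, sigma, alpha):
        # E[z_nk | x, alpha]: responsibility of component k for sample n,
        # evaluated with the prior mean of pi under Dir(pi | alpha).
        alpha = np.asarray(alpha, dtype=float)
        pi_bar = alpha / alpha.sum()
        resp = np.column_stack([
            pi_bar[k] * multivariate_normal.pdf(X, mean=mu[k], cov=sigma[k])
            for k in range(len(alpha))])
        resp /= resp.sum(axis=1, keepdims=True)  # normalise over components
        return resp.sum(axis=0)                  # m_k = sum_n E[z_nk | x, alpha]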
S16: the electronic device determines the next moment as the current moment.
In an embodiment of the present invention, determining the next moment as the current moment is equivalent to an assignment operation. For example, if the next moment is denoted tt1 and the current moment is denoted tt, determining the next moment as the current moment is expressed as tt = tt1.
S17: iteratively execute (S14), (S15) and (S16) until the model parameters at every moment in the scene model have been calculated.
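Putting S12 through S17 together, a sketch of the full iteration, reusing the init_model, multinomial_parameters and update_alpha sketches above; the batches argument stands in for the per-moment observed feature data and is an assumption of the example:

    def estimate_scene_model(X0, batches, K, delta=0.9, b=0.01):
        # S12: initial estimate; then for each next moment (S14) update the
        # Dirichlet parameters (S15) and advance the current moment (S16/S17).
        mu, sigma, alpha = init_model(X0, K)
        alphas = [alpha]
        for X_t in batches:
            m = multinomial_parameters(X_t, mu, sigma, alpha)
            alpha = update_alpha(alpha, m, delta, b)
            alphas.append(alpha)
        return mu, sigma, alphas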
In a preferred embodiment, the model parameters at any moment further include the mixing coefficients at that moment, and the method further includes:
determining the mixing coefficients at any moment according to the mixing-coefficient distribution estimate at that moment.
Determining the mixing coefficients at any moment according to the mixing-coefficient distribution estimate at that moment includes one or a combination of the following (see the sketch following these options):
sampling from the mixing-coefficient distribution estimate at that moment, and determining the sampled data as the mixing coefficients at that moment; or
calculating the set of values that maximizes the mixing-coefficient distribution estimate at that moment, and determining that set of values as the mixing coefficients at that moment.
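Both options can be read directly off the Dirichlet estimate, as in this sketch; the closed-form maximising values use the Dirichlet mode, which assumes all α_k > 1:

    import numpy as np

    def mixing_coefficients(alpha, method="mode"):
        alpha = np.asarray(alpha, dtype=float)
        if method == "sample":
            # Option 1: sample pi from Dir(pi | alpha).
            return np.random.dirichlet(alpha)
        # Option 2: the set of values maximising Dir(pi | alpha), i.e. its mode.
        return (alpha - 1.0) / (alpha.sum() - len(alpha))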
The present invention: (a) establishes a scene model for describing a dynamically changing scene; (b) acquires sample feature data; (c) performs, according to the sample feature data, an initial estimation of the model parameters at the initial moment in the scene model, calculating the model parameters at the initial moment; (d) determines the model parameters at the initial moment as the model parameters at the current moment; (e) acquires observed feature data at the moment next to the current moment; (f) calculates the model parameters at the next moment according to the model parameters at the current moment and the observed feature data at the next moment; (g) determines the next moment as the current moment; and (h) iteratively executes (e), (f) and (g) until the model parameters at every moment in the scene model have been calculated. The amount of computation of the present invention is reduced by an order of magnitude, which increases the computing speed. Moreover, the mixing coefficients at each moment are a correction of the mixing coefficients at the previous moment, so that the estimation of the mixing coefficients in the scene model is more stable. In addition, a relaxation operation is adopted to gradually reduce the weight of earlier estimates and emphasize recent data, achieving dynamic estimation and more accurate results; and a smoothing operation is adopted in estimating the mixing coefficients, so that the estimated mixing coefficients in the scene model are smoother.
FIG. 2 is a flowchart of a preferred embodiment of the data analysis method of the present invention. Depending on different requirements, the order of the steps in the flowchart may be changed, and some steps may be omitted.
S20: the electronic device acquires a collected sample to be tested.
In an embodiment of the present invention, Gaussian mixture models are widely used in different fields such as pattern recognition, computer vision, machine learning, data mining and bioinformatics, where they serve application scenes such as image segmentation, clustering and the construction of probability density functions.
The sample to be tested therefore differs with the application scene. For example, it may be face data, human speech data and so on, and is not limited to these examples.
S21: the electronic device extracts feature data of the sample to be tested from the collected sample.
In an embodiment of the present invention, the feature data is extracted from the collected sample by feature extraction techniques, which are prior art and are not detailed here.
S22: the electronic device calculates, by using the scene model corresponding to the feature data of the sample to be tested, the probability of the feature data under the corresponding scene model.
In an embodiment of the present invention, the scene model corresponding to the feature data of the sample to be tested is established in advance and dynamically estimated by the embodiment shown in FIG. 1 above. This accurately represents a dynamically changing application scene, improving both the accuracy of tasks in the application scene and the computing efficiency.
S23: the electronic device analyzes the sample to be tested according to the probability of its feature data under the corresponding scene model, to obtain an analysis result.
In an embodiment of the present invention, the sample to be tested is analyzed in the context of the application scene to obtain the analysis result. For example, if the application scene is background segmentation in a motion scene, the scene model represents the background estimation model of the motion scene, and the feature data of the sample to be tested is each pixel X_t at moment t. The probability of the feature data under the corresponding scene model is then the probability that each pixel X_t belongs to the background estimation model, from which it is judged whether each pixel matches the background estimation model. When a pixel matches the background estimation model, the analysis result is that the pixel belongs to the background of the motion scene; when a pixel does not match the background estimation model, the analysis result is that the pixel does not belong to the background of the motion scene, and so on.
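For the background-segmentation example, a sketch of the per-pixel decision, reusing the gmm_density sketch above; the threshold is an assumed tuning parameter:

    def is_background(x_t, pi, mu, sigma, threshold=1e-3):
        # A pixel feature X_t matches the background estimation model when
        # its probability under the scene model exceeds the threshold.
        return gmm_density(x_t, pi, mu, sigma) > threshold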
The present invention acquires a collected sample to be tested; extracts feature data of the sample from it; calculates, by using the scene model corresponding to the feature data, the probability corresponding to the feature data; and analyzes the sample according to that probability to obtain an analysis result. The present invention therefore enables accurate analysis of data from dynamically changing application scenes.
Embodiment based on the crowd analysis application scene:
In a real scene, the face data in a monitored region changes dynamically over time, the face data in a face recognition system keeps growing, and the actual "resident population", "loitering persons" and the like also change over time. Methods that cluster data over a selected time range are computationally complex and cannot effectively perform resident-population and similar crowd analysis when face data changes dynamically. To solve these problems, the method shown in FIG. 3 may be adopted for crowd analysis.
FIG. 3 is a flowchart of a preferred embodiment of the crowd analysis method of the present invention. Depending on different requirements, the order of the steps in the flowchart may be changed, and some steps may be omitted.
S30: the electronic device acquires face images collected within a monitored region.
In an embodiment of the present invention, the monitored region is a region of personnel activity; there may be one or more face images, one face image corresponding to one user. A specific way to collect face images within the monitored region is to deploy multiple surveillance cameras at different positions in the region of personnel activity, so as to obtain large-scale face images. Understandably, the collected face images in the region keep growing, and the appearance of persons in the region also changes over time.
S31: the electronic device extracts face feature data from the face images.
S32: the electronic device analyzes the face feature data based on a crowd analysis model, calculating the appearance frequency, in each of a plurality of periods, of the user to whom the face feature data belongs.
In an embodiment of the present invention, the crowd analysis model is established in advance and dynamically estimated by the embodiment shown in FIG. 1 above, specifically as follows:
(a) establishing the crowd analysis model;
(b) acquiring face sample feature data;
(c) performing, according to the face sample feature data, an initial estimation of the model parameters at the initial moment in the crowd analysis model, calculating the model parameters at the initial moment;
(d) determining the model parameters at the initial moment as the model parameters at the current moment;
(e) acquiring observed face sample feature data at the moment next to the current moment;
(f) calculating the model parameters at the next moment according to the model parameters at the current moment and the observed face sample feature data at the next moment;
(g) determining the next moment as the current moment;
(h) iteratively executing (e), (f) and (g) until the model parameters at every moment in the crowd analysis model have been calculated.
S33: the electronic device analyzes the user according to the appearance frequency of the user in each of the plurality of periods, to obtain an analysis result for the user.
Specifically, it is judged from the appearance frequency of the user in each of the plurality of periods whether the user is a suspicious person. For example, when the monitored region is an office area and a user's appearance frequency during working hours is below a preset count, the user is determined to be a suspicious person; when the user is determined to be suspicious, the administrator of the monitored region is alerted to the user's whereabouts, and so on.
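A sketch of that decision rule; the period labels and the preset count are assumptions of the example:

    def is_suspicious(freq_by_period, working_periods, preset_count=5):
        # Flag the user when the appearance frequency in any working period
        # falls below the preset count.
        return any(freq_by_period.get(p, 0) < preset_count
                   for p in working_periods)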
The present invention acquires face images collected within a monitored region; extracts face feature data from the face images; analyzes the face feature data based on a crowd analysis model, identifying the appearance frequency of the corresponding user in each of a plurality of periods; and analyzes the user according to that appearance frequency to obtain an analysis result for the user. Implementing the embodiments of the present invention makes it possible to identify suspicious persons in the face collection region and give timely warnings.
FIG. 4 is a functional module diagram of a preferred embodiment of the scene model dynamic estimation device of the present invention. The scene model dynamic estimation device 10 includes an establishment module 100, an acquisition module 101, a calculation module 102, a determination module 103 and an iteration module 104. A unit referred to in the present invention is a series of computer program segments, stored in the memory, that can be executed by the processor of the scene model dynamic estimation device 10 and can perform a fixed function. In this embodiment, the functions of the units are detailed in the following embodiments.
The establishment module 100 establishes a scene model for describing a dynamically changing scene.
In an embodiment of the present invention, in the dynamically changing scene, the features of the samples in the scene change over time.
The scene model consists of Gaussian mixture models at a plurality of moments. The Gaussian mixture model at any one of the plurality of moments is expressed as:
p(x) = Σ_{k=1}^{K} π_k N(x | μ_k, Σ_k),
where x denotes the feature of any sample at that moment, the sample mean μ_k denotes the mean of the sample features at that moment, the sample variance Σ_k denotes the degree of variation of the sample features at that moment, and the mixing coefficient π_k denotes the weight of the k-th Gaussian component in the Gaussian mixture model at that moment, that is, the probability that a sample at that moment comes from the k-th Gaussian component.
In an embodiment of the present invention, the model parameters of the scene model include model parameters at a plurality of moments. The model parameters at any one of the plurality of moments include the sample mean μ_k, the sample variance Σ_k, and a mixing-coefficient distribution estimate.
The acquisition module 101 acquires sample feature data.
In a preferred embodiment, the sample feature data is extracted from pre-collected samples and stored in advance in the memory of the electronic device. The larger the number of samples, the greater the confidence of the subsequent estimation of the model parameters of the scene model, and the more accurate those parameters will be.
The calculation module 102 performs, according to the sample feature data, an initial estimation of the model parameters at the initial moment in the scene model, calculating the model parameters at the initial moment.
In a preferred embodiment, the mixing coefficients π_k at any moment satisfy
Σ_{k=1}^{K} π_k = 1, 0 ≤ π_k ≤ 1,
where K denotes the total number of Gaussian components at that moment. Accordingly, the calculation module 102 models the mixing coefficients in the Gaussian mixture model at any moment with a Dirichlet distribution, obtaining the mixing-coefficient distribution model at that moment, i.e., the Dirichlet distribution Dir(π|α_s) at any moment s, where α_s is the parameter vector of the Dirichlet distribution at moment s and π denotes the mixing-coefficient vector at moment s.
The calculation module 102, based on the mixing-coefficient distribution model at any moment, performs the initial estimation of the model parameters at the initial moment in the scene model according to the sample feature data; calculating the model parameters at the initial moment includes:
estimating, based on the sample feature data and using maximum likelihood estimation, the samples in the Gaussian mixture model at the initial moment, to obtain the mean estimate and the variance estimate at the initial moment; and
performing, based on the sample feature data and using the expectation maximization method, an initial estimation of the mixing-coefficient distribution model, to obtain the mixing-coefficient distribution estimate at the initial moment, i.e., the Dirichlet distribution Dir(π|α_0), where α_0 is the parameter vector of the Dirichlet distribution at the initial moment and π denotes the mixing-coefficient vector at the initial moment.
The determination module 103 determines the model parameters at the initial moment as the model parameters at the current moment.
The acquisition module 101 acquires observed feature data at the moment next to the current moment.
In an embodiment of the present invention, the current moment is denoted t-1 and the next moment is denoted t. For example, if data in the scene is collected once per second and the current moment t-1 corresponds to the first second, the next moment corresponds to the second second.
The observed feature data is extracted from samples collected in real time by a collection device in the scene.
The calculation module 102 calculates the model parameters at the next moment according to the model parameters at the current moment and the observed feature data at the next moment.
In a preferred embodiment, the calculation module 102 calculating the model parameters at the next moment according to the model parameters at the current moment and the observed feature data at the next moment includes:
(a1) determining the mixing-coefficient distribution estimate at the current moment as the prior distribution of the mixing coefficients at the next moment, denoted p_{t-1}(π).
For example, the mixing-coefficient distribution estimate at the initial moment is determined as the prior distribution of the mixing coefficients at the first moment; the prior is subsequently corrected according to the observed feature data at the first moment.
(a2) calculating, according to the observed feature data at the next moment, the likelihood function of the mixing coefficients at the next moment, denoted p_t(x|π).
(a3) calculating, according to the prior distribution and the likelihood function of the mixing coefficients at the next moment, the posterior distribution of the mixing coefficients at the next moment by Bayes' theorem: p_t(π|x) = p_t(x|π) p_{t-1}(π) / p_t(x).
As the above preferred embodiment shows, the mixing-coefficient distribution estimate at each moment is a correction of that at the previous moment, which makes the distribution estimate of the mixing coefficients in the scene model more stable and the calculation results more accurate.
In a preferred embodiment, the posterior distribution of the mixing coefficients at the next moment is calculated from the prior distribution and the multinomial distribution of the mixing coefficients at the next moment, using Bayes' theorem and the conjugacy between the multinomial distribution and the Dirichlet distribution. The calculation of the multinomial distribution of the mixing coefficients at the next moment is detailed later.
Specifically, when the multinomial distribution Multi(m|π) of the mixing coefficients at the next moment is determined as the likelihood function of the mixing coefficients at the next moment, by Bayes' theorem and the conjugacy between the multinomial and Dirichlet distributions, the posterior distribution of the mixing coefficients at the next moment is:
p_t(π|x) = p_t(x|π) p_{t-1}(π) / p_t(x) = Multi(m|π) × Dir(π|α_{t-1}) / p_t(x) = Dir(π|α_{t-1}+m),
where Dir(π|α_{t-1}+m) denotes a Dirichlet distribution, m is the parameter vector of the multinomial distribution of the mixing coefficients at the next moment, and α_{t-1} denotes the parameter vector of the Dirichlet distribution at the current moment.
As the number of observed samples increases over time, the confidence of the parameter estimates also keeps increasing. To accommodate mixing coefficients that change over time, a relaxation operation is adopted to gradually reduce the weight of earlier estimates, so that the scene model tends to discount early data and emphasize recent data, achieving dynamic estimation and more accurate results. In addition, no assumption is made about the trend of the mixing coefficients; in the absence of observed data, the components of the mixing coefficients should tend towards equality, so a smoothing operation may also be adopted while reducing the weight of earlier estimates, which makes the estimated mixing coefficients in the scene model smoother.
Therefore, the posterior distribution of the mixing coefficients at the next moment may be calculated from the prior distribution and the likelihood function of the mixing coefficients at the next moment, by Bayes' theorem with a relaxation operation: p_t(π|x) = Dir(δ(α_{t-1}+m)+b). Since Bayes' theorem rests on the theory of probability, estimating the mixing-coefficient distribution in the scene model by Bayes' theorem is applicable to different application scenes and generalizes well.
Here 0 ≤ δ ≤ 1, where δ denotes the weight given to historical data, and b ≥ 0 denotes the uncertainty in the change of the mixing coefficients π at the next moment.
(a4) determining the posterior distribution of the mixing coefficients at the next moment as the mixing-coefficient distribution estimate at the next moment.
In a preferred embodiment, the calculation module 102 calculating the likelihood function of the mixing coefficients at the next moment according to the observed feature data at the next moment includes:
(a21) determining the sample mean and the sample variance at the next moment.
Preferably, to simplify the calculation, the sample mean at any moment equals the sample mean at the initial moment, and the sample variance at any moment equals the sample variance at the initial moment. Other estimation methods (such as expectation maximization) may of course be used to estimate the sample mean and the sample variance at any moment.
(a22) estimating the multinomial distribution Multi(m|π) of the mixing coefficients at the next moment according to the sample mean and the sample variance at the next moment.
(a23) determining the multinomial distribution of the mixing coefficients at the next moment as the likelihood function of the mixing coefficients at the next moment.
In a preferred embodiment, estimating the multinomial distribution Multi(m|π) of the mixing coefficients at the next moment according to the sample mean and the sample variance at the next moment includes:
(a221) calculating, according to the sample mean at the next moment, the sample variance at the next moment and the prior distribution of the mixing coefficients at the next moment, the expected value E[z|x, α] of the latent variable z of the observed feature data at the next moment, where α denotes the parameters of the Dirichlet distribution at the next moment, and the latent variable denotes the degree to which each sample at the next moment belongs to each Gaussian distribution in the Gaussian mixture model at the next moment.
(a222) calculating the parameters of the multinomial distribution of the mixing coefficients at the next moment according to the expected value of the latent variable of the observed feature data at the next moment.
Specifically, the parameter m_k of the multinomial distribution Multi(m|π) of the mixing coefficients at the next moment is calculated as:
m_k = Σ_{n=1}^{N} E[z_{nk} | x, α],
where m_k denotes the k-th component of the vector m, and z_{nk} denotes the latent variable of the n-th sample of the observed feature data for the k-th Gaussian component at the next moment.
(a223) estimating the multinomial distribution of the mixing coefficients at the next moment based on the parameters of that multinomial distribution.
The determination module 103 is further configured to determine the next moment as the current moment.
In an embodiment of the present invention, determining the next moment as the current moment is equivalent to an assignment operation. For example, if the next moment is denoted tt1 and the current moment is denoted tt, determining the next moment as the current moment is expressed as tt = tt1.
The iteration module 104 iteratively continues executing: the acquisition module acquiring the observed feature data at the moment next to the current moment; the calculation module calculating the model parameters at the next moment according to the model parameters at the current moment and the observed feature data at the next moment; and the determination module determining the next moment as the current moment, until the model parameters at every moment in the scene model have been calculated.
In a preferred embodiment, the model parameters at any moment further include the mixing coefficients at that moment, and the determination module 103 is further configured to:
determine the mixing coefficients at any moment according to the mixing-coefficient distribution estimate at that moment.
The determination module 103 determining the mixing coefficients at any moment according to the mixing-coefficient distribution estimate at that moment includes one or a combination of the following:
sampling from the mixing-coefficient distribution estimate at that moment, and determining the sampled data as the mixing coefficients at that moment; or
calculating the set of values that maximizes the mixing-coefficient distribution estimate at that moment, and determining that set of values as the mixing coefficients at that moment.
The present invention: (a) establishes a scene model for describing a dynamically changing scene; (b) acquires sample feature data; (c) performs, according to the sample feature data, an initial estimation of the model parameters at the initial moment in the scene model, calculating the model parameters at the initial moment; (d) determines the model parameters at the initial moment as the model parameters at the current moment; (e) acquires observed feature data at the moment next to the current moment; (f) calculates the model parameters at the next moment according to the model parameters at the current moment and the observed feature data at the next moment; (g) determines the next moment as the current moment; and (h) iteratively executes (e), (f) and (g) until the model parameters at every moment in the scene model have been calculated. The amount of computation of the present invention is reduced by an order of magnitude, which increases the computing speed. Moreover, the mixing coefficients at each moment are a correction of the mixing coefficients at the previous moment, so that the estimation of the mixing coefficients in the scene model is more stable. In addition, a relaxation operation is adopted to gradually reduce the weight of earlier estimates and emphasize recent data, achieving dynamic estimation and more accurate results; and a smoothing operation is adopted in estimating the mixing coefficients, so that the estimated mixing coefficients in the scene model are smoother.
FIG. 5 is a functional module diagram of a preferred embodiment of the data analysis device of the present invention. The data analysis device 50 includes a data acquisition module 500, a feature extraction module 501, a data calculation module 502 and a result analysis module 503. A unit referred to in the present invention is a series of computer program segments, stored in the memory, that can be executed by the processor of the data analysis device 50 and can perform a fixed function. In this embodiment, the functions of the units are detailed in the following embodiments.
The data acquisition module 500 acquires a collected sample to be tested.
In an embodiment of the present invention, Gaussian mixture models are widely used in different fields such as pattern recognition, computer vision, machine learning, data mining and bioinformatics, where they serve application scenes such as image segmentation, clustering and the construction of probability density functions.
The sample to be tested therefore differs with the application scene. For example, it may be face data, human speech data and so on, and is not limited to these examples.
The feature extraction module 501 extracts feature data of the sample to be tested from the collected sample.
In an embodiment of the present invention, the feature data is extracted from the collected sample by feature extraction techniques, which are prior art and are not detailed here.
The data calculation module 502 calculates, by using the scene model corresponding to the feature data of the sample to be tested, the probability of the feature data under the corresponding scene model.
In an embodiment of the present invention, the scene model corresponding to the feature data of the sample to be tested is established in advance and dynamically estimated by the embodiment shown in FIG. 1 above. This accurately represents a dynamically changing application scene, improving both the accuracy of tasks in the application scene and the computing efficiency.
The result analysis module 503 analyzes the sample to be tested according to the probability of its feature data under the corresponding scene model, to obtain an analysis result.
In an embodiment of the present invention, the result analysis module 503 analyzes the sample to be tested in the context of the application scene to obtain the analysis result. For example, if the application scene is background segmentation in a motion scene, the scene model represents the background estimation model of the motion scene, and the feature data of the sample to be tested is each pixel X_t at moment t. The probability of the feature data under the corresponding scene model is then the probability that each pixel X_t belongs to the background estimation model, from which it is judged whether each pixel matches the background estimation model. When a pixel matches the background estimation model, the analysis result is that the pixel belongs to the background of the motion scene; when a pixel does not match the background estimation model, the analysis result is that the pixel does not belong to the background of the motion scene, and so on.
The present invention acquires a collected sample to be tested; extracts feature data of the sample from it; calculates, by using the scene model corresponding to the feature data, the probability corresponding to the feature data; and analyzes the sample according to that probability to obtain an analysis result. The present invention therefore enables accurate analysis of data from dynamically changing application scenes.
In a real scene, the face data in a monitored region changes dynamically over time, the face data in a face recognition system keeps growing, and the actual "resident population", "loitering persons" and the like also change over time. Methods that cluster data over a selected time range are computationally complex and cannot effectively perform resident-population and similar crowd analysis when face data changes dynamically.
FIG. 6 is a functional module diagram of a preferred embodiment of the crowd analysis device of the present invention. The crowd analysis device 60 includes an image acquisition module 601, a data extraction module 602, a frequency calculation module 603 and a data analysis module 604. A unit referred to in the present invention is a series of computer program segments, stored in the memory, that can be executed by the processor of the crowd analysis device 60 and can perform a fixed function. In this embodiment, the functions of the units are detailed in the following embodiments.
The image acquisition module 601 acquires face images collected within a monitored region.
In an embodiment of the present invention, the monitored region is a region of personnel activity; there may be one or more face images, one face image corresponding to one user. A specific way to collect face images within the monitored region is to deploy multiple surveillance cameras at different positions in the region of personnel activity, so as to obtain large-scale face images. Understandably, the collected face images in the region keep growing, and the appearance of persons in the region also changes over time.
The data extraction module 602 extracts face feature data from the face images.
The frequency calculation module 603 analyzes the face feature data based on a crowd analysis model, calculating the appearance frequency, in each of a plurality of periods, of the user to whom the face feature data belongs.
In an embodiment of the present invention, the crowd analysis model is established in advance and dynamically estimated by the embodiment shown in FIG. 1 above, specifically as follows:
(a) establishing the crowd analysis model;
(b) acquiring face sample feature data;
(c) performing, according to the face sample feature data, an initial estimation of the model parameters at the initial moment in the crowd analysis model, calculating the model parameters at the initial moment;
(d) determining the model parameters at the initial moment as the model parameters at the current moment;
(e) acquiring observed face sample feature data at the moment next to the current moment;
(f) calculating the model parameters at the next moment according to the model parameters at the current moment and the observed face sample feature data at the next moment;
(g) determining the next moment as the current moment;
(h) iteratively executing (e), (f) and (g) until the model parameters at every moment in the crowd analysis model have been calculated.
The data analysis module 604 analyzes the user according to the appearance frequency of the user in each of the plurality of periods, to obtain an analysis result for the user.
Specifically, it is judged from the appearance frequency of the user in each of the plurality of periods whether the user is a suspicious person. For example, when the monitored region is an office area and a user's appearance frequency during working hours is below a preset count, the user is determined to be a suspicious person; when the user is determined to be suspicious, the administrator of the monitored region is alerted to the user's whereabouts, and so on.
The present invention acquires face images collected within a monitored region; extracts face feature data from the face images; analyzes the face feature data based on a crowd analysis model, identifying the appearance frequency of the corresponding user in each of a plurality of periods; and analyzes the user according to that appearance frequency to obtain an analysis result for the user. Implementing the embodiments of the present invention makes it possible to identify suspicious persons in the face collection region and give timely warnings.
The integrated units implemented in the form of software functional modules described above may be stored in a computer-readable storage medium. The software functional modules are stored in a storage medium and include several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) or a processor to execute part of the steps of the method described in each embodiment of the present invention.
As shown in FIG. 7, the electronic device 1 includes at least one sending device 31, at least one memory 32, at least one processor 33, at least one receiving device 34, at least one display (not shown) and at least one communication bus, where the communication bus implements connection and communication among these components.
The electronic device 1 is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions. Its hardware includes but is not limited to a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device and the like. The electronic device 1 may also include a network device and/or a user device, where the network device includes but is not limited to a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of hosts or network servers based on cloud computing; cloud computing is a form of distributed computing, a super virtual computer composed of a group of loosely coupled computers.
The electronic device 1 may be, but is not limited to, any electronic product capable of human-machine interaction with a user by means of a keyboard, a touchpad, a voice control device or the like, for example a terminal such as a tablet computer, a smartphone, a personal digital assistant (PDA), a smart wearable device, a camera device or a monitoring device.
The network in which the electronic device 1 is located includes but is not limited to the Internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (VPN) and the like.
The receiving device 34 and the sending device 31 may be wired sending ports, or may be wireless devices, for example including antenna devices, for data communication with other devices.
The memory 32 is configured to store program codes. The memory 32 may be a circuit with a storage function and without a physical form in an integrated circuit, such as a RAM (Random-Access Memory) or a FIFO (First In First Out) memory; or the memory 32 may be a memory with a physical form, such as a memory module, a TF card (Trans-flash Card), a smart media card, a secure digital card, a flash card or other storage devices.
The processor 33 may include one or more microprocessors or digital processors. The processor 33 may call the program codes stored in the memory 32 to execute the related functions. For example, the units described in FIG. 3 are program codes stored in the memory 32 and executed by the processor 33 to implement a scene model dynamic estimation method, a data analysis method and a crowd analysis method. The processor 33, also called a central processing unit (CPU), is a very-large-scale integrated circuit serving as the computing core and control unit.
An embodiment of the present invention further provides a computer-readable storage medium storing computer instructions which, when executed by an electronic device including one or more processors, cause the electronic device to execute the scene model dynamic estimation method, the data analysis method and the crowd analysis method described in the method embodiments above.
In the several embodiments provided by the present invention, it should be understood that the disclosed system, device and method may be implemented in other ways. For example, the device embodiments described above are merely schematic; the division of the modules is only a division by logical function, and there may be other divisions in actual implementation.
The modules described as separate components may or may not be physically separated, and the components shown as modules may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in each embodiment of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It is apparent to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments and can be implemented in other specific forms without departing from the spirit or essential characteristics of the present invention. The embodiments should therefore be regarded in all respects as exemplary and non-restrictive, and the scope of the present invention is defined by the appended claims rather than by the above description; all changes falling within the meaning and scope of equivalents of the claims are therefore intended to be embraced in the present invention. No reference sign in the claims should be construed as limiting the claim concerned. Moreover, the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or devices stated in a system claim may also be implemented by one unit or device through software or hardware. Words such as "second" are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only intended to illustrate rather than limit the technical solutions of the present invention. Although the present invention has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (16)

  1. A scene model dynamic estimation method, characterized in that the method comprises:
    (a) establishing a scene model for describing a dynamically changing scene;
    (b) acquiring sample feature data;
    (c) performing, according to the sample feature data, an initial estimation of the model parameters at the initial moment in the scene model, calculating the model parameters at the initial moment;
    (d) determining the model parameters at the initial moment as the model parameters at the current moment;
    (e) acquiring observed feature data at the moment next to the current moment;
    (f) calculating the model parameters at the next moment according to the model parameters at the current moment and the observed feature data at the next moment;
    (g) determining the next moment as the current moment;
    (h) iteratively executing (e), (f) and (g) until the model parameters at every moment in the scene model have been calculated.
  2. The scene model dynamic estimation method according to claim 1, characterized in that the scene model consists of Gaussian mixture models at a plurality of moments, the Gaussian mixture model at any one of the plurality of moments being expressed as Σ_{k=1}^{K} π_k N(x|μ_k, Σ_k), where x denotes the feature of any sample at that moment, the sample mean μ_k denotes the mean of the sample features at that moment, the sample variance Σ_k denotes the degree of variation of the sample features at that moment, and the mixing coefficient π_k denotes the weight of the k-th Gaussian component in the Gaussian mixture model at that moment;
    the model parameters of the scene model comprise model parameters at a plurality of moments, and the model parameters at any one of the plurality of moments comprise the sample mean μ_k, the sample variance Σ_k and a mixing-coefficient distribution estimate.
  3. The scene model dynamic estimation method according to claim 2, characterized in that the model parameters at any moment further comprise the mixing coefficients at that moment, and the method further comprises:
    determining the mixing coefficients at any moment according to the mixing-coefficient distribution estimate at that moment.
  4. The scene model dynamic estimation method according to claim 3, characterized in that determining the mixing coefficients at any moment according to the mixing-coefficient distribution estimate at that moment comprises one or a combination of the following:
    sampling from the mixing-coefficient distribution estimate at that moment, and determining the sampled data as the mixing coefficients at that moment; or
    calculating the set of values that maximizes the mixing-coefficient distribution estimate at that moment, and determining that set of values as the mixing coefficients at that moment.
  5. The scene model dynamic estimation method according to claim 2, characterized in that the method further comprises:
    modeling the mixing coefficients in the Gaussian mixture model at any moment with a Dirichlet distribution, to obtain the mixing-coefficient distribution model at that moment.
  6. The scene model dynamic estimation method according to claim 5, characterized in that performing, according to the sample feature data, an initial estimation of the model parameters at the initial moment in the scene model, calculating the model parameters at the initial moment, comprises:
    estimating, based on the sample feature data and using maximum likelihood estimation, the samples in the Gaussian mixture model at the initial moment, to obtain the sample mean at the initial moment and the sample variance at the initial moment; and
    performing, based on the sample feature data and using the expectation maximization method, an initial estimation of the mixing-coefficient distribution model at the initial moment, to obtain the mixing-coefficient distribution estimate at the initial moment.
  7. The scene model dynamic estimation method according to any one of claims 2 to 6, characterized in that calculating the model parameters at the next moment according to the model parameters at the current moment and the observed feature data at the next moment comprises:
    determining the mixing-coefficient distribution estimate at the current moment as the prior distribution of the mixing coefficients at the next moment;
    calculating, according to the observed feature data at the next moment, the likelihood function of the mixing coefficients at the next moment;
    calculating, according to the prior distribution and the likelihood function of the mixing coefficients at the next moment, the posterior distribution of the mixing coefficients at the next moment by Bayes' theorem; and
    determining the posterior distribution of the mixing coefficients at the next moment as the mixing-coefficient distribution estimate at the next moment.
  8. The scene model dynamic estimation method according to claim 7, characterized in that calculating the likelihood function of the mixing coefficients at the next moment according to the observed feature data at the next moment comprises:
    determining the sample mean and the sample variance at the next moment;
    estimating, according to the sample mean and the sample variance at the next moment, the multinomial distribution of the mixing coefficients at the next moment; and
    determining the multinomial distribution of the mixing coefficients at the next moment as the likelihood function of the mixing coefficients at the next moment;
    and that calculating the posterior distribution of the mixing coefficients at the next moment by Bayes' theorem according to the prior distribution and the likelihood function of the mixing coefficients at the next moment comprises:
    calculating, according to the prior distribution and the multinomial distribution of the mixing coefficients at the next moment, the posterior distribution of the mixing coefficients at the next moment using Bayes' theorem and the conjugacy between the multinomial distribution and the Dirichlet distribution.
  9. The scene model dynamic estimation method according to claim 7, characterized in that calculating the posterior distribution of the mixing coefficients at the next moment by Bayes' theorem according to the prior distribution and the likelihood function of the mixing coefficients at the next moment comprises:
    calculating, according to the prior distribution and the likelihood function of the mixing coefficients at the next moment, the posterior distribution of the mixing coefficients at the next moment by Bayes' theorem with a relaxation operation and a smoothing operation.
  10. The scene model dynamic estimation method according to claim 8, characterized in that estimating the multinomial distribution of the mixing coefficients at the next moment according to the sample mean and the sample variance at the next moment comprises:
    calculating, according to the sample mean at the next moment, the sample variance at the next moment and the prior distribution of the mixing coefficients at the next moment, the expected value of the latent variable of the observed feature data at the next moment, the latent variable denoting the degree to which each sample at the next moment belongs to each Gaussian distribution in the Gaussian mixture model at the next moment;
    calculating the parameters of the multinomial distribution of the mixing coefficients at the next moment according to the expected value of the latent variable of the observed feature data at the next moment; and
    estimating the multinomial distribution of the mixing coefficients at the next moment based on the parameters of the multinomial distribution of the mixing coefficients at the next moment.
  11. A scene model dynamic estimation device, characterized in that the device comprises:
    an establishment module, configured to establish a scene model for describing a dynamically changing scene;
    an acquisition module, configured to acquire sample feature data;
    a calculation module, configured to perform, according to the sample feature data, an initial estimation of the model parameters at the initial moment in the scene model, calculating the model parameters at the initial moment;
    a determination module, configured to determine the model parameters at the initial moment as the model parameters at the current moment;
    the acquisition module being further configured to acquire observed feature data at the moment next to the current moment;
    the calculation module being further configured to calculate the model parameters at the next moment according to the model parameters at the current moment and the observed feature data at the next moment;
    the determination module being further configured to determine the next moment as the current moment; and
    an iteration module, configured to iteratively continue executing: the acquisition module acquiring the observed feature data at the moment next to the current moment; the calculation module calculating the model parameters at the next moment according to the model parameters at the current moment and the observed feature data at the next moment; and the determination module determining the next moment as the current moment, until the model parameters at every moment in the scene model have been calculated.
  12. An electronic device, characterized in that the electronic device comprises a memory and a processor, the memory being configured to store at least one instruction and the processor being configured to execute the at least one instruction to implement the scene model dynamic estimation method of any one of claims 1 to 8.
  13. A data analysis method, characterized in that the method comprises:
    acquiring a collected sample to be tested;
    extracting feature data of the sample to be tested from the collected sample;
    calculating, by using the scene model corresponding to the feature data of the sample to be tested, the probability corresponding to the feature data, the scene model corresponding to the feature data being estimated by the scene model dynamic estimation method of any one of claims 1 to 8; and
    analyzing the sample to be tested according to the probability corresponding to its feature data, to obtain an analysis result.
  14. An electronic device, characterized in that the electronic device comprises a memory and a processor, the memory being configured to store at least one instruction and the processor being configured to execute the at least one instruction to implement the data analysis method of claim 13.
  15. A crowd analysis method, characterized in that the method comprises:
    acquiring face images collected within a monitored region;
    extracting face feature data from the face images;
    analyzing the face feature data based on a crowd analysis model, to identify the appearance frequency, in each of a plurality of periods, of the user to whom the face feature data belongs, the crowd analysis model being estimated by the scene model dynamic estimation method of any one of claims 1 to 8; and
    analyzing the user according to the appearance frequency of the user in each of the plurality of periods, to obtain an analysis result for the user.
  16. An electronic device, characterized in that the electronic device comprises a memory and a processor, the memory being configured to store at least one instruction and the processor being configured to execute the at least one instruction to implement the crowd analysis method of claim 15.