CN114139033B - Time sequence data clustering method and system based on dynamic kernel development

Time sequence data clustering method and system based on dynamic kernel development

Info

Publication number
CN114139033B
CN114139033B (application CN202111423566.1A)
Authority
CN
China
Prior art keywords
dynamic
core
dynamic core
winning
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111423566.1A
Other languages
Chinese (zh)
Other versions
CN114139033A (en)
Inventor
谢海斌
李鹏
庄东晔
丁智勇
彭耀仟
江川
闫家鼎
蒋天瑞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN202111423566.1A priority Critical patent/CN114139033B/en
Publication of CN114139033A publication Critical patent/CN114139033A/en
Application granted granted Critical
Publication of CN114139033B publication Critical patent/CN114139033B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/906Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/907Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/909Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/12Computing arrangements based on biological models using genetic models
    • G06N3/126Evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Biophysics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Library & Information Science (AREA)
  • Physiology (AREA)
  • Genetics & Genomics (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a time-series data clustering method and system based on dynamic kernel development. The method comprises the following steps: S01, configuring an initial kernel and taking it as the starting point of dynamic kernel splitting and development; S02, acquiring the current newly added time-series data, stimulating each dynamic kernel with the newly added data, obtaining the corresponding output after each dynamic kernel responds, selecting the dynamic kernel with the largest output as the winning dynamic kernel, and copying the category of the winning dynamic kernel to the current newly added time-series data; S03, regulating the splitting time of each dynamic kernel by means of the memory saturation; S04, gathering the dynamic kernels in the updated dynamic kernel set into different categories according to their centers and coverage domains to obtain the current clustering result, and returning to step S02 until clustering is exited. The method realizes data clustering with dynamic kernel development and has the advantages of a simple implementation, good robustness and stability, high precision and strong flexibility.

Description

Time sequence data clustering method and system based on dynamic kernel development
Technical Field
The invention relates to the technical field of time sequence data clustering, in particular to a time sequence data clustering method and system based on dynamic kernel development.
Background
With the rapid development of network technology, sources of data such as video and images are becoming ever wider, and processing the acquired data information is becoming more and more important. In security applications, especially in areas with dense flows of people such as airports and ports, the face information appearing in surveillance video is complex and varied: most faces do not exist in the original database, are acquired for the first time and accumulate gradually, i.e., the time-series data keeps growing. For time-series data such as real-time video streams, the targets in the data usually need to be clustered, for example clustering face images in surveillance video or clustering molds in a pipeline task; in pipeline tasks such as logistics, objects on a conveyor belt must be classified in real time.
At present, clustering is generally realized with methods such as deep learning: the samples to be clustered are subjected to dimension reduction and feature transformation so that the original samples are mapped into a new feature space in which the data are more easily separated by an existing classifier. A conventional deep clustering method is usually implemented as follows: first a large number of task samples, such as face images, are obtained; then a neural network model is designed and trained on these samples to obtain a suitable classifier model. When a new sample is input, its category is output directly by the trained classifier. However, such a clustering method cannot be used when the sample size is extremely small and no prior knowledge exists: the samples to be clustered are usually unknown and do not appear in the training set, and the sample size available to the algorithm in the clustering task is small; when only one sample is available, the network generally cannot be trained. Moreover, in tasks such as surveillance video, the face images or objects appearing in the video must be clustered in real time, i.e., a category must be given immediately whenever new target information appears. A network trained on a large number of samples is therefore not suitable for a dynamic clustering environment in which samples arrive gradually.
When conventional clustering methods are applied to time-series data, the following problems exist:
1) Sample information for all categories must be obtained at once before the clustering task is performed, so the method depends on acquiring a large amount of sample information.
Traditional clustering algorithms work properly, and with high precision, only when batch data are known in advance and the category of a newly added sample has already appeared in the training samples; precision is poor for samples that have not appeared, so the algorithm must acquire all sample information at once before execution. For example, when a deep clustering algorithm is used, a large amount of sample information covering all categories involved in the task must first be obtained and divided into a training set and a test set to train a neural network, thereby obtaining an effective classifier. In practical applications, however, time-series data form a continuous data stream and it is not known in advance which samples will have to be clustered, i.e., the category of the newly added data is unknown.
2) Prior parameters such as the number of clusters need to be set manually.
Most current clustering algorithms require the number of clusters or other algorithm parameters to be determined from human experience or prior knowledge before clustering. For example, the k-means algorithm needs to be assigned a value of k, and if the cluster number k is set wrongly the accuracy of the clustering result is directly affected; density-based algorithms such as DBSCAN need two prior parameters to be set in order to connect the density core points correctly.
3) The "learning" and "application" phases are independent of each other and cannot be associated.
At present, a clustering algorithm usually learns a good model from a large number of existing samples in the learning stage, and this model is then used unchanged in the application stage without further adjustment. However, a large amount of data may be acquired after the clustering model has been trained, and its distribution may differ significantly from that of the data in the learning stage. With limited learning samples it is therefore difficult to obtain, through prior knowledge, a reasonable value of k that can adapt to dynamically growing data, and a k value set a priori cannot be adjusted as the data change.
4) The algorithm may fail when only a small number of samples are available.
Current clustering algorithms cluster samples under the condition that a large number of samples are already known, but in some clustering tasks the algorithm can only acquire samples gradually, sometimes starting from a single sample, so current clustering algorithms cannot work properly.
5) The model structure of the algorithm is fixed, so the algorithm cannot evolve by itself as samples increase, and its generalization across applications is limited.
The model structure of current clustering algorithms has to be set manually before the clustering task is executed, which is effective for clustering batch data. In an actual task, however, the information about the categories to be clustered is unknown and may belong to categories that do not exist in the database, so a clustering algorithm with a fixed structure generally cannot give a reasonable clustering result when a new sample appears. At the same time, because the algorithm structure is fixed, such algorithms can only solve specific clustering problems; for example, when a face clustering algorithm for surveillance video has to be switched to a clustering task for pipeline molds, the algorithm model usually has to be modified.
In summary, in current clustering methods the sample information grows incrementally over time but is unrelated to the time variable, because the sample that will appear at the next moment cannot be predicted; before the algorithm is executed only a small amount of local information is available, so information about all classes of samples cannot be obtained. Current clustering methods are therefore unsuitable for processing a small number of samples without prior knowledge and cannot give reasonable clustering results in real time as the number of samples changes. Meanwhile, for the incremental clustering problem, traditional methods re-cluster all the data obtained so far whenever new sample information arrives in order to obtain the clustering result at that moment; although a clustering result can be given at every moment, the previously learned knowledge is not well utilized, which causes a large loss of efficiency.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: aiming at the technical problems existing in the prior art, the invention provides a time-series data clustering method and system based on dynamic kernel development that have a simple implementation, realize data clustering with dynamic kernel development, and offer good robustness and stability, high precision and strong flexibility.
In order to solve the technical problems, the technical scheme provided by the invention is as follows:
A time-series data clustering method based on dynamic kernel development comprises the following steps:
S01, configuring an initial kernel and taking it as the starting point of dynamic kernel splitting and development;
S02, acquiring the current newly added time-series data, stimulating each dynamic kernel with the newly added data, obtaining the corresponding output after each dynamic kernel responds, selecting the dynamic kernel with the largest output as the winning dynamic kernel, and copying the category of the winning dynamic kernel to the current newly added time-series data;
S03, regulating the splitting time of each dynamic kernel by means of the memory saturation that describes how often a dynamic kernel is activated, wherein when the memory saturation of a dynamic kernel exceeds a preset threshold, the corresponding dynamic kernel is controlled to split and generate a new kernel, obtaining an updated dynamic kernel set;
S04, gathering the dynamic kernels in the updated dynamic kernel set into different categories according to their centers and coverage domains to obtain the current clustering result of each dynamic kernel, and returning to step S02 until clustering is exited.
Further, in step S01, the position of the first time-series data sample in the space is taken as the center of the initial kernel n_1, the initial kernel n_1 is taken as the starting point of dynamic kernel splitting and development, the memory saturation of the initial kernel n_1 is set to the preset saturation threshold s_t so that when the initial kernel n_1 wins the competition it splits directly to generate a new dynamic kernel, and the coverage domain of the initial kernel is set to global coverage.
Further, when the current newly added time-series data stimulate the dynamic kernels in step S02, a Gaussian function is used as the activation function of each dynamic kernel with respect to the newly added data, all dynamic kernels respond according to this Gaussian basis function, and the output O_n produced after the newly added data stimulate the dynamic kernel n_j is a Gaussian function of the distance between the data and the kernel center, wherein x_{i+1} denotes the current newly added time-series data, Dis() denotes the similarity function used to compare the newly added data x_{i+1} with the center of a dynamic kernel, n_{j,μ} denotes the mean of the Gaussian basis function and serves as the center of the j-th dynamic kernel, and n_{j,σ} denotes the covariance of the Gaussian basis function and serves as the coverage radius of the j-th dynamic kernel, representing the range attributed to the cluster that the kernel represents.
Further, in step S02, selecting and outputting the winning dynamic kernel further includes updating its memory saturation: if the distance of the current newly added time-series data from the center of the winning dynamic kernel is smaller than a preset threshold, the memory-saturation update of the winning dynamic kernel is 0; if the distance is larger than the preset threshold, the expression o(1 - o) is adopted as the increment Δs of the memory saturation, wherein a radial basis function is used to calculate the variable o in the increment Δs.
Further, in step S02, determining and updating the winning dynamic kernel according to the competitive learning mode includes:
When the initial kernel n_1 wins, the initial kernel n_1 splits to generate a new kernel n_{i+1} whose center is the position of the current newly added time-series data x_{i+1}, the memory saturation of the current winning dynamic kernel n_1 is assigned to 0, and the coverage radius of the newly generated kernel is set to Dis(n_{1,μ}, x_{i+1})/3, wherein n_{1,μ} is the center of the current winning dynamic kernel n_1 and Dis() denotes the similarity function;
When the dynamic kernel n_g wins, wherein g ∈ [1, i], the memory saturation of the current winning dynamic kernel n_g is updated to n_{g,m}′, the relation between the memory saturation n_{g,m}′ of the current winning dynamic kernel n_g and the preset saturation threshold s_t is judged, and according to the result the center of the current winning dynamic kernel n_g is adjusted or a splitting operation is executed on it;
When the dynamic kernel n_g does not win, g ∈ [1, i), whether the dynamic kernel n_g needs to be attenuated is judged according to its memory saturation.
Further, when the dynamic kernel n_g wins, the details are as follows:
When n_{g,m}′ < s_t, the center of the current winning dynamic kernel n_g is adjusted according to n_{g,μ}′ = n_{g,m}′ · n_{g,μ} + (1 - n_{g,m}′) · x_{i+1}, wherein n_{g,μ}′ denotes the adjusted center position; the coverage domain n_{g,σ} of the current winning dynamic kernel n_g is kept unchanged, the current newly added time-series data x_{i+1} are attributed to the current winning dynamic kernel n_g, and the category label of the current winning dynamic kernel n_g is copied to the current newly added time-series data x_{i+1};
When n_{g,m}′ ≥ s_t, a splitting operation is performed on the current winning dynamic kernel n_g to generate a new dynamic kernel n_{i+1}; the center and coverage domain of the current winning dynamic kernel n_g are kept unchanged and its memory saturation is reset to r_t, the center vector of the generated new dynamic kernel n_{i+1} is the position of the current newly added time-series data x_{i+1}, the corresponding coverage radius is set to Dis(n_{g,μ}, x_{i+1})/3 and the memory saturation is set to 0, and the current newly added time-series data x_{i+1} are attributed to the same category as the generated new dynamic kernel n_{i+1}.
Further, when the dynamic kernel n_g does not win, if r_t < n_{g,m} < s_t, where r_t is a preset reset threshold, the memory saturation is attenuated over a preset number of beats down to the preset reset threshold r_t, and the center and coverage domain of the dynamic kernel n_g are kept unchanged; if n_{g,m} < r_t, the dynamic kernel n_g is unchanged, i.e., the current newly added time-series data have no effect on the dynamic kernel n_g.
Further, in step S03, in the process of regulating the splitting time of each dynamic kernel, when a dynamic kernel reaches the preset saturation threshold and splits, the memory saturation of the winning dynamic kernel is set to the reset threshold; when the memory saturation of a dynamic kernel exceeds the reset threshold but has not reached the preset splitting threshold, the memory saturation is attenuated beat by beat down to the reset threshold.
Further, step S04 includes: marking each dynamic kernel in the updated dynamic kernel set N_{i+1} with a Link() function; if, during the dynamic adjustment of the kernel centers and coverage domains, the center of one of two target dynamic kernels enters the coverage domain of the other, the two target dynamic kernels are marked as the same class; if neither center enters the coverage domain of another dynamic kernel, the two target dynamic kernels are marked as two classes; finally, the dynamic kernels are gathered into different categories.
A time-series data clustering system based on dynamic kernel development, comprising a processor and a memory, the memory being configured to store a computer program and the processor being configured to execute the computer program to perform the method described above.
Compared with the prior art, the invention has the advantages that:
1. Through the dynamic-kernel developmental clustering method, development can start from 0, i.e., the model develops by itself as samples increase without any prior knowledge; knowledge can be learned from the newly added data and the model corrected, while the newly added data are clustered with the model, effectively realizing learning and use during development. The learning and application stages are thus organically combined: newly added samples are clustered according to the existing model, and the existing model is updated in real time with the new information introduced by the newly added samples, so that the algorithm model develops in a spiral along with the growth of the samples and gradually matures, enabling efficient real-time clustering of time-series data.
2. The invention realizes clustering of incremental data, giving reasonable clustering results in real time as the time-series data grow; it retains the memory of what was originally learned and adjusts the learned result in time with the information introduced by the newly added data, thereby improving the efficiency and accuracy of clustering newly added time-series data.
3. The invention works normally with a small number of samples, even a single sample or zero samples, and develops as samples increase later on; it does not need information about all the data in advance, does not rely on acquiring a large amount of prior information in advance, and works without any prior parameters. It can therefore easily be migrated to scenarios that require developmental clustering or scenarios in which data change incrementally, and can also serve as a basis for developmental unsupervised learning.
4. The method clusters time-series data without manually set prior parameters, so the clustering result depends only on the data and is not disturbed by empirical knowledge, which effectively improves the robustness and stability of the algorithm.
Drawings
Fig. 1 is a schematic flow chart of the embodiment for implementing time series data clustering based on dynamic kernel development.
FIG. 2 is a schematic diagram of a forgetting mechanism of dynamic nuclear memory saturation employed by the present invention.
FIG. 3 is a schematic diagram of the linking principle of the dynamic core employed by the present invention.
FIG. 4 is a schematic diagram of clustering test results obtained in a specific application example of the present invention.
Detailed Description
The invention is further described below in connection with the drawings and the specific preferred embodiments, but the scope of protection of the invention is not limited thereby.
The invention adopts a saturated-memory, incremental-data-oriented dynamic kernel developmental clustering (DCC) algorithm to dynamically develop and cluster time-series data. Dynamic kernels are used as representatives of the sample clusters, the winning dynamic kernel is selected by competitive learning, and, imitating the human memory mechanism, a memory saturation is set to control how often a dynamic kernel is activated, thereby deciding whether to adjust its parameters or to perform a splitting operation. By adjusting the kernel center positions and coverage radii in real time to fit the distribution of samples in the space, the algorithm matches the characteristic that time-series data keep growing and realizes efficient, self-developing clustering.
In the DCC algorithm, the invention uses a set of dynamic kernels to represent the algorithm model, so that, as samples are added, the model develops by itself through parameter adjustment or splitting of the kernels. The DCC algorithm follows the sequential nature of data sources in the physical world and can process incremental data (time-series data) unrelated to time factors, giving a reasonable clustering result in real time as data arrive; the DCC model develops by itself as samples are added, so it works normally even when only one sample has been added and gradually develops from a primary stage to maturity as samples increase. Meanwhile, when the DCC algorithm clusters a newly added sample, it clusters the sample with the existing algorithm model (the model developed during previous sample additions) and thus gives the class label of the newly added sample, which is the use process of the model; at the same time, the existing algorithm model is adjusted in time with the information introduced by the newly added sample to correct the model, which is the learning process of the model, so the learning and use processes of the model are organically combined.
The present embodiment first gives the symbols and definitions to be used:
The clustering model of the DCC algorithm is described by the dynamic kernels representing the clusters, i.e., the set N = {n_1, n_2, ..., n_k}, where k denotes the number of dynamic kernels and n_j denotes the j-th dynamic kernel. Before any sample has been input for learning and training, the set is empty, i.e., N = ∅ and k = 0; as samples increase, the algorithm follows the changing sample distribution, i.e., k develops dynamically from 0.
The j-th dynamic kernel of the algorithm model is represented by a quadruple (center, coverage radius, memory saturation, class label), where the parameter n_{j,μ} denotes the center position of the dynamic kernel n_j, the parameter n_{j,σ} denotes its coverage radius, the parameter n_{j,m} denotes its memory saturation, and the class label identifies the sample cluster of the j-th class represented by n_j. A dynamic kernel updates the sample cluster it represents through changes of its center position and coverage domain, and its splitting time is regulated by the memory saturation, i.e., a splitting operation is executed when the memory saturation of the dynamic kernel reaches the saturation threshold s_t. In the splitting process the relationship between dynamic kernels is expressed with the notions of parent kernel and child kernel: if the dynamic kernel n_j splits and generates a new dynamic kernel n_{j+1}, then n_j is called the parent kernel of n_{j+1} and n_{j+1} is called the child kernel of n_j.
x_i = (x_{i1}, x_{i2}, ..., x_{ip}) denotes the i-th newly added data item (sample) learned by the DCC algorithm of this embodiment in the sample space χ; it has p-dimensional features. After the algorithm has learned it, the data x_i carries a category label, and the clustering model at that moment is expressed as N_i = {n_1, n_2, ..., n_{k_i}}, i.e., after the data x_i have been processed by the algorithm the model contains k_i dynamic kernels. When the time-series data x_{i+1} are added, the existing clustering model N_i is updated, mainly the quadruples of the dynamic kernels in the model N_i, and at the same time N_i is updated to N_{i+1} and k_i to k_{i+1}.
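For illustration, a minimal Python sketch of this kernel representation follows. The class and field names (DynamicKernel, mu, sigma, m, label) are illustrative choices, not names taken from the patent, and NumPy is assumed for the vector arithmetic.

import math
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class DynamicKernel:
    mu: np.ndarray   # n_{j,mu}: center position of the kernel (cluster center)
    sigma: float     # n_{j,sigma}: coverage radius
    m: float         # n_{j,m}: memory saturation
    label: int       # class label of the cluster represented by this kernel

Model = List[DynamicKernel]   # the clustering model N is just the current kernel set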
As shown in Fig. 1, the detailed steps of time-series data clustering based on dynamic kernel development with the DCC algorithm in this embodiment are as follows:
S01, initial dynamic kernel configuration: the initial kernel is configured and serves as the starting point for dynamic kernel splitting and development.
The clustering model of the DCC algorithm requires that the set of dynamic kernels be able to develop from 0. In this embodiment the initial kernel is set up in the initial stage, which realizes the development of the set of dynamic kernels from 0 to 1, i.e., the set N generates its first dynamic kernel from the empty set.
Specifically, the position in the space of the first time-series data sample encountered is taken as the center of the initial kernel n_1, and n_1 is taken as the starting point of dynamic kernel splitting and development. The memory saturation of the initial kernel n_1 is set to the preset saturation threshold s_t, i.e., n_{1,m} = s_t, so that when n_1 wins the competition it splits directly into a new dynamic kernel, and the coverage domain of the initial kernel is set to global coverage so that cluster structures outside the concentrated region of the sample distribution are found more easily. That is, the center of the new kernel n_{i+1} generated by splitting the initial kernel is the position of the new sample x_{i+1}, and its coverage radius is set to Dis(n_{1,μ}, x_{i+1})/3, i.e., the parent kernel lies on the 3σ boundary of the newly generated kernel, so that the newly generated child kernel better represents the newly appearing sample cluster.
Since the memory saturation of the initial kernel n_1 equals the saturation threshold s_t, n_1 is always ready to split and immediately splits to generate a new dynamic kernel n_{i+1} whenever it wins the competition. In an actual incremental environment, however, all sample information in the space cannot be acquired in advance to determine the center of the space, so the position of the first sample encountered is taken as the center of the initial kernel and is not adjusted afterwards. Meanwhile, the coverage domain of the initial kernel is set to global coverage, i.e., n_{1,σ} = inf, so that a discrete point that cannot be attributed to any existing dynamic kernel triggers the generation of a dynamic kernel that represents it.
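A minimal sketch of this initialization, reusing the DynamicKernel structure above; the function name init_model and the label value 0 for the initial kernel are assumptions.

def init_model(x1, s_t: float) -> Model:
    # n_1: centered on the first observed sample, memory saturation already at
    # s_t so it splits as soon as it wins, coverage global (infinite radius)
    # so that outliers always fall inside it.
    return [DynamicKernel(mu=np.asarray(x1, dtype=float), sigma=math.inf,
                          m=s_t, label=0)]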
S02, competitive learning: the current newly added time-series data are acquired and stimulate each dynamic kernel; each dynamic kernel responds with a corresponding output, the dynamic kernel with the largest output is selected as the winning dynamic kernel, and the category of the winning dynamic kernel is copied to the current newly added time-series data.
In this embodiment, when the current newly added time-series data stimulate the dynamic kernels, a Gaussian basis function is chosen as the activation function of each dynamic kernel, and all dynamic kernels respond according to this Gaussian basis function: the kernel center is the center of the Gaussian basis function and the kernel coverage radius is its covariance. When the new time-series data x_{i+1} are added, the output produced after the data stimulate the dynamic kernel n_j is a Gaussian function of the distance between x_{i+1} and the kernel center, wherein n_{j,μ} denotes the mean of the Gaussian basis function and serves as the center of the j-th dynamic kernel, which is also the center of the cluster represented by that kernel, and n_{j,σ} denotes the covariance of the Gaussian basis function and serves as the coverage radius of the j-th dynamic kernel, i.e., the coverage radius represents the range attributed to the cluster represented by the kernel. Dis() denotes the similarity function used to compare the newly added sample with the center of a dynamic kernel; since the input sample has the same dimension as the center vector of the dynamic kernel, their similarity can be measured by the Euclidean distance.
Considering that, according to the Gaussian distribution, samples within 3σ of the dynamic kernel center account for 99.73% of the kernel's coverage, in this embodiment newly added time-series data x_{i+1} whose distance from a kernel center exceeds 3σ are regarded as having an extremely low probability of belonging to the cluster represented by that kernel and are therefore discarded for that kernel. If the output of the dynamic kernel n_j is the largest when the time-series data x_{i+1} are added, the dynamic kernel n_j wins the competition and its class label is copied to the newly added sample x_{i+1}.
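The following sketch illustrates the stimulation and winner-selection step, assuming a standard Gaussian (RBF) response of the form exp(-Dis²/(2σ²)); the 3σ filter and the fall-back treatment of the globally covering initial kernel are interpretations rather than rules stated verbatim in the patent.

def dis(x, mu) -> float:
    # Dis(): Euclidean distance between a sample and a kernel center
    return float(np.linalg.norm(np.asarray(x, dtype=float) - mu))

def activate(kernel: DynamicKernel, x) -> float:
    # Assumed Gaussian (RBF) response with width n_{j,sigma}
    d = dis(x, kernel.mu)
    return math.exp(-d * d / (2.0 * kernel.sigma ** 2))

def select_winner(model: Model, x) -> DynamicKernel:
    # Kernels whose center is farther than 3*sigma from x are discarded; among
    # the remaining kernels the largest output wins.  The global initial kernel
    # (sigma = inf) only wins when no finite kernel covers x (interpretation).
    covering = [k for k in model if dis(x, k.mu) <= 3.0 * k.sigma]
    finite = [k for k in covering if math.isfinite(k.sigma)]
    pool = finite if finite else covering
    return max(pool, key=lambda k: activate(k, x))

# usage: the winner's label is copied to the newly added sample
# label_of_new_sample = select_winner(model, x_new).label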
In this embodiment, the memory saturation is used to describe how often a dynamic kernel is activated and to regulate its splitting time. When a sample is added, all dynamic kernels respond according to the Gaussian basis function and the kernel with the largest output is selected as the winning dynamic kernel. Assuming that the dynamic kernel n_i wins due to the stimulus of the newly added sample x_{i+1}, the DCC algorithm of this embodiment updates the memory saturation n_{i,m} of the winning dynamic kernel n_i according to the following formulas, where n_{i,m}′ denotes the updated memory saturation of the dynamic kernel n_i:
n_{i,m}′ = n_{i,m} + Δs
Δs = o(1 - o)
When the memory saturation of the winning dynamic kernel is updated, a radial basis function is used to calculate the variable o in the memory-saturation increment. A newly added sample that is either very close to or very far from the center of the dynamic kernel causes only a small disturbance to the memory that the kernel has already stabilized. If the distance of the current newly added time-series data from the center of the winning dynamic kernel is smaller than a preset threshold, i.e., the sample is very close to the center, the memory-saturation update of the winning dynamic kernel is 0, because the difference between the newly added data and the dynamic kernel is too small to introduce new information. If the distance is larger than the preset threshold, i.e., the sample is far from the center of the winning dynamic kernel, it has little influence on the cluster represented by the winning kernel, and therefore the expression o(1 - o) is adopted as the increment Δs of the memory saturation.
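A sketch of this memory-saturation update; the near-distance threshold near_eps and the width of the radial basis function (taken here as the kernel's own coverage radius) are assumed values.

def saturation_increment(winner: DynamicKernel, x, near_eps: float = 1e-3) -> float:
    # Delta_s = o * (1 - o); o is a radial-basis value of the distance to the
    # winner's center.
    d = dis(x, winner.mu)
    if d < near_eps:          # sample almost identical to the center:
        return 0.0            # too little new information, no increase
    o = math.exp(-d * d / (2.0 * winner.sigma ** 2))   # radial basis value in (0, 1]
    return o * (1.0 - o)

# usage: winner.m += saturation_increment(winner, x_new)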
S03, dynamic kernel updating: the splitting time of each dynamic kernel is regulated by the memory saturation that describes how often the kernel is activated; when the memory saturation of a dynamic kernel exceeds the preset saturation threshold s_t, the corresponding dynamic kernel is controlled to split and generate a new kernel, yielding the updated dynamic kernel set.
The threshold at which a dynamic kernel splits is the saturation threshold s_t. The saturation threshold is designed so that more dynamic kernels are used in densely populated regions of the sample distribution, which makes it easier to discover local characteristics of the clusters while the kernel positions are being adjusted dynamically and avoids a single kernel representing a large group of samples; it reflects the algorithm's ability to subdivide the sample distribution.
The memory saturation of a dynamic kernel indicates how much information the kernel carries. If the dynamic kernel n_i wins in multiple competitions (meaning that new samples keep appearing within its coverage domain), i.e., n_i is stimulated continuously, its memory saturation keeps increasing. When the memory saturation n_{i,m} of the dynamic kernel n_i exceeds the upper limit of its information-carrying capacity, the sample density within its coverage domain is high and subdivision is necessary, so a new kernel is generated by splitting and several dynamic kernels are used to subdivide the original cluster.
In the process of regulating the splitting time of each dynamic kernel, when a dynamic kernel reaches the preset saturation threshold s_t and splits, the memory saturation of the winning dynamic kernel is set to the reset threshold r_t; when the memory saturation of a dynamic kernel exceeds the reset threshold but has not reached the splitting threshold, it is attenuated beat by beat down to the reset threshold.
Because newly added samples arrive in no particular order, a dynamic kernel is not continuously in a winning state during the competition, i.e., its memory saturation does not increase continuously. This embodiment therefore introduces a forgetting mechanism that decays the memory saturation to the reset threshold r_t beat by beat when the memory saturation of a dynamic kernel exceeds r_t but has not reached the splitting threshold; the decay toward the reset threshold follows a set number of beats. When a winning dynamic kernel reaches the saturation threshold and splits, the DCC algorithm of this embodiment sets its memory saturation to the reset threshold r_t. A kernel that wins repeatedly and eventually splits is in an active state, i.e., the probability that new samples reappear in the sample-distribution region it represents is high. If its saturation were not reset it would keep splitting; if it were reset to 0, the result the kernel has learned would effectively be discarded. Resetting it to r_t therefore retains the memory of the original samples while avoiding continual splitting.
In a specific application embodiment, the effect of the forgetting mechanism on the memory saturation is shown in Fig. 2: the ordinate is the memory-saturation information of the dynamic kernel and the abscissa represents the successively added new samples. The dotted line in the figure indicates that the increase of the memory saturation is not linear but depends on the new samples; an arrow marks a sample after which the dynamic kernel wins, while a plain line marks a sample after which the dynamic kernel does not win. As can be seen from the figure, when the dynamic kernel does not win after the sixth sample is added, its memory saturation decays according to the beat; when the seventh sample is added, the kernel wins and the memory saturation increases; when the ninth sample is added, the kernel does not win, but the memory saturation is already at the reset threshold, so it is not reduced further and remains unchanged.
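A sketch of the forgetting step for a kernel that did not win; the per-beat decay rate tau is an assumed parameter (the description specifies only a decay toward r_t over a set number of beats).

def decay_memory(kernel: DynamicKernel, r_t: float, s_t: float, tau: float = 0.5) -> None:
    # Forgetting step: saturation between the reset threshold r_t and the
    # split threshold s_t decays toward r_t; at or below r_t it stays untouched.
    if r_t < kernel.m < s_t:
        kernel.m = max(r_t, kernel.m - tau * (kernel.m - r_t))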
The update of the dynamic kernels whose winning state is determined by the competitive learning mode specifically includes the following states:
State 1: when initial core n 1 wins:
The initial core n 1 splits to generate the center of a new core n i+1, where the center of the new core n i+1 is the position of the current newly added time-series data x i+1, the memory saturation of the current winning dynamic core n 1 is assigned to 0, and the overlay field of the current winning dynamic core n 1 is set to , that is, the parent core is located at the 3 sigma boundary of the newly generated core, where n 1_μ is the center of the current winning dynamic core n 1. Because of the generation of the new kernel, the newly added sample x i+1 belongs to the category represented by the new kernel, and thus the category of the newly added sample x i+1 is marked as/>
State 2: the dynamic kernel n_g (g ∈ [1, i]) wins
First, the memory saturation of the dynamic kernel n_g is updated to n_{g,m}′ according to the update formulas above, and the relation between the memory saturation of the winning kernel and the saturation threshold is judged:
(1) If n_{g,m}′ < s_t, the center of the dynamic kernel n_g is adjusted according to n_{g,μ}′ = n_{g,m}′ · n_{g,μ} + (1 - n_{g,m}′) · x_{i+1}, where n_{g,μ}′ denotes the adjusted center position; the coverage domain n_{g,σ} remains unchanged, the newly added sample belongs to the dynamic kernel n_g, and the class label of n_g is copied to the newly added sample x_{i+1}.
(2) If n_{g,m}′ ≥ s_t, a splitting operation is performed and a new dynamic kernel n_{i+1} is generated: the winning kernel n_g splits but its center and coverage domain are not adjusted and its memory saturation is reset to r_t; the center vector of the generated kernel n_{i+1} is the position of the newly added sample x_{i+1}, its coverage radius is Dis(n_{g,μ}, x_{i+1})/3 and its memory saturation is set to 0; the newly added sample x_{i+1} is then given the category label of the new kernel n_{i+1}.
State 3: dynamic kernel n g(g∈[1,i]) does not win
Whether to attenuate the memory saturation is judged from the memory-saturation information. If r_t < n_{g,m} < s_t, the saturation decays over the set number of beats toward the reset threshold, e.g., n_{g,m}′ = n_{g,m} - τ(n_{g,m} - r_t) denotes a decay by the beat factor τ toward the reset threshold; neither the center nor the coverage domain of the dynamic kernel is adjusted. If n_{g,m} < r_t, the dynamic kernel does not change, i.e., the newly added sample has no effect on the dynamic kernel n_g.
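Putting the three states together, the following sketch performs one competitive-learning update for a newly added sample, reusing the helpers defined above. Resetting the initial (global) kernel to 0 and ordinary kernels to r_t follows states 1 and 2(2) as described; the zero-radius guard is an added safeguard, not part of the patent.

def state_update(model: Model, x, s_t: float, r_t: float,
                 next_label: int, tau: float = 0.5) -> int:
    # One competitive-learning update (states 1-3) for a newly added sample x.
    # Returns the class label assigned to x.
    x = np.asarray(x, dtype=float)
    winner = select_winner(model, x)
    for k in model:                      # state 3: losing kernels may be attenuated
        if k is not winner:
            decay_memory(k, r_t, s_t, tau)
    winner.m += saturation_increment(winner, x)
    if winner.m < s_t:                   # state 2, case (1): adjust the center
        winner.mu = winner.m * winner.mu + (1.0 - winner.m) * x
        return winner.label
    # state 1 or state 2, case (2): split off a child kernel centered on x,
    # with the parent on the child's 3-sigma boundary
    child = DynamicKernel(mu=x.copy(),
                          sigma=max(dis(x, winner.mu) / 3.0, 1e-12),  # guard against zero radius
                          m=0.0, label=next_label)
    winner.m = 0.0 if math.isinf(winner.sigma) else r_t   # state 1 resets n_1 to 0, others to r_t
    model.append(child)
    return child.label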
S04, the dynamic kernels in the updated dynamic kernel set are gathered into different categories according to their centers and coverage domains to obtain the current clustering result of each dynamic kernel, and the procedure returns to step S02 until clustering is exited.
All samples are taken in turn as input to the dynamic kernels, the outputs of all dynamic kernels are examined, and each sample is assigned to the class of the dynamic kernel with the largest output, which yields the clustering result for all samples to be clustered.
The DCC algorithm of this embodiment lets the clustering model evolve from 0: the clustering model is obtained by the model-evolution algorithm, which determines the initial kernel n_1 and then, according to the dynamic kernel update mechanism, updates the clustering model N and the quadruples of the dynamic kernels it contains, giving the dynamic kernel set N_i available before the new sample x_{i+1} is added. The input of the model-evolution algorithm is the new sample x_{i+1} and its output is the updated dynamic kernel set N_{i+1} together with the class label of the sample x_{i+1}.
In a specific application embodiment, the model-evolution process of the DCC algorithm in this embodiment is implemented with pseudo code, as shown in Algorithm 1.
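A possible Python sketch of the Model_evolution step, consistent with the three states described above, is given below; it is an illustration rather than the patent's Algorithm 1, and the threshold values s_t, r_t and the decay rate tau are illustrative defaults, not values given in the patent.

def Model_evolution(x_new, model: Model, s_t: float = 0.9,
                    r_t: float = 0.3, tau: float = 0.5):
    # Returns the updated kernel set N_{i+1} and the class label of x_{i+1}.
    if not model:                            # k develops from 0: the first
        model = init_model(x_new, s_t)       # sample becomes the initial kernel n_1
        return model, model[0].label
    next_label = max(k.label for k in model) + 1
    label = state_update(model, x_new, s_t, r_t, next_label, tau)
    return model, label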
The model-evolution algorithm yields a clustering model that keeps developing as samples increase, and the model contains only the quadruple information of each dynamic kernel. Based on the model-evolution algorithm, the DCC algorithm of this embodiment clusters all the learned samples in the space with this developmental approach to obtain a clustering result for the samples. The set of i learned, unlabeled samples is denoted C_i = {x_1, x_2, ..., x_i}, and the clustering result after learning, i.e., the sample set with cluster labels, is denoted D_i. The input of the DCC algorithm is the newly added sample x_{i+1} and the output is the clustering result D_{i+1} for the i+1 learned samples; Algorithm 2, used in a specific application embodiment, is as follows.
Algorithm 2: DCC algorithm
Input: x_{i+1}
Output: D_{i+1}
1: N_{i+1} = Model_evolution(x_{i+1}, N_i)
2: N′_{i+1} = Link(N_{i+1})
3: C_{i+1} = C_i ∪ {x_{i+1}}
4: D_{i+1} = Clustering(N′_{i+1}, C_{i+1})
A dynamic kernel represents the collection of all samples within its coverage domain, i.e., samples within the 3σ range of the dynamic kernel should be grouped together with that kernel. To cluster the samples, the generated dynamic kernel set N_{i+1} is marked with the function Link(): during the dynamic adjustment of kernel centers and coverage domains, if the center of one of two target dynamic kernels enters the coverage domain of the other, the two kernels are marked as the same class; if neither center enters the coverage domain of another dynamic kernel, the two kernels are marked as two classes; finally, the dynamic kernels are gathered into different categories. As shown in Fig. 3(a), the center of the dynamic kernel B enters the coverage domain of the dynamic kernel A during the adjustment, so kernels A and B are marked as the same class; as shown in Fig. 3(b), neither of the centers of kernels A and B enters the coverage domain of the other, so kernel A is marked as one class and kernel B as another. As new samples are added, the coverage domains and centers of the dynamic kernels are adjusted dynamically, and the function Link() marks the kernels according to the updated centers and coverage domains, gathering them into different classes. After all dynamic kernels have been marked, the classes of the i+1 learned samples must be updated, because the DCC algorithm of this embodiment is based on a developmental model and every newly added sample adjusts the previously learned result. Through the function Clustering(), each learned sample is fed to the dynamic kernels of the updated clustering model and is assigned to the class of the kernel with the largest response.
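A sketch of the Link() and Clustering() steps as described here; treating the coverage domain as the 3σ range, propagating the pairwise links transitively with a union-find, relabeling the kernels in place (a simplification of N′_{i+1}) and skipping the globally covering initial kernel are implementation assumptions.

def Link(model: Model) -> Model:
    # Kernels whose center lies inside another kernel's 3-sigma coverage are
    # merged into one class; a small union-find propagates the merge.  The
    # global initial kernel (sigma = inf) is skipped, otherwise it would
    # absorb every kernel.
    parent = list(range(len(model)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, a in enumerate(model):
        for j in range(i + 1, len(model)):
            b = model[j]
            d = dis(a.mu, b.mu)
            inside_a = math.isfinite(a.sigma) and d <= 3.0 * a.sigma
            inside_b = math.isfinite(b.sigma) and d <= 3.0 * b.sigma
            if inside_a or inside_b:
                parent[find(i)] = find(j)
    for i, k in enumerate(model):
        k.label = find(i)          # kernels in one group share one class label
    return model

def Clustering(model: Model, samples) -> list:
    # Every learned sample is re-assigned to the class of the kernel with the
    # largest response, so earlier assignments are revised as the model evolves.
    return [select_winner(model, x).label for x in samples]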
The DCC algorithm of the invention, being based on the adjustment of kernel centers and coverage domains, is suitable for discovering convex clusters in the space. In a specific application embodiment, the method of the invention was verified on artificial data sets, and the clustering results are shown in Fig. 4, where the markers indicate the positions of the generated dynamic kernels in the space. As shown in Fig. 4(a), many dynamic kernels are generated in densely populated regions of the sample distribution, which matches the original intention of the algorithm design, namely that the dynamic kernels follow the sample volume. In Figs. 4(b) and 4(c), the generated dynamic kernels fit the concentrated regions of the sample distribution well, i.e., the adaptive adjustment of the dynamic kernels discovers the distributions of the different clusters well.
The developmental clustering method of the invention realizes development starting from 0, i.e., self-development as samples increase without any prior knowledge: it starts from 0 and develops gradually with the samples, learns knowledge from newly added samples to correct the model, and clusters the newly added samples with that model, thus effectively realizing learning and use during development. In addition, the invention solves the clustering problem for incremental data with a developmental clustering algorithm: it gives reasonable clustering results in real time as samples increase, retains the memory of what was originally learned, adjusts the learned result in time with the information introduced by the newly added samples, and assigns reasonable category labels to the newly added samples.
The dynamic kernel is the basis of model development in the DCC algorithm, but it can be replaced and represented by other representative points, units or entities in the sample set; for example, neurons may be used to represent the dynamic kernels.
This embodiment takes developmental machine learning for clustering target images in video streams as an example, but the developmental machine-learning approach can also be applied in other fields. For example, a developmental neural network also belongs to the category of developmental artificial intelligence: it can develop from 0 neurons and realize operations such as self-generation and extinction of neurons as the samples keep changing, thereby achieving more autonomous, human-free, self-developing evolution under unknown, open and sample-limited conditions.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the present invention in any way. While the invention has been described with reference to preferred embodiments, it is not intended to be limiting. Therefore, any simple modification, equivalent variation and modification of the above embodiments according to the technical substance of the present invention shall fall within the scope of the technical solution of the present invention.

Claims (8)

1. A time-series data clustering method based on dynamic kernel development, characterized by comprising the following steps:
S01, configuring an initial kernel and taking it as the starting point of dynamic kernel splitting and development;
S02, acquiring the current newly added time-series data, stimulating each dynamic kernel with the newly added data, obtaining the corresponding output after each dynamic kernel responds, selecting the dynamic kernel with the largest output as the winning dynamic kernel, and copying the category of the winning dynamic kernel to the current newly added time-series data, wherein the time-series data are video stream data;
S03, regulating the splitting time of each dynamic kernel by means of the memory saturation that describes how often a dynamic kernel is activated, wherein when the memory saturation of a dynamic kernel exceeds a preset threshold, the corresponding dynamic kernel is controlled to split and generate a new kernel, obtaining an updated dynamic kernel set;
S04, gathering the dynamic kernels in the updated dynamic kernel set into different categories according to their centers and coverage domains to obtain the current clustering result of each dynamic kernel, and returning to step S02 until clustering is exited;
in step S01, the position of the first time-series data sample in the space is taken as the center of the initial kernel n_1, the initial kernel n_1 is taken as the starting point of dynamic kernel splitting and development, the memory saturation of the initial kernel n_1 is set to the preset saturation threshold s_t so that when the initial kernel n_1 wins the competition it splits directly to generate a new dynamic kernel, and the coverage domain of the initial kernel is set to global coverage;
when the current newly added time-series data stimulate the dynamic kernels in step S02, a Gaussian basis function is used as the activation function of each dynamic kernel with respect to the newly added data, all dynamic kernels respond according to the Gaussian basis function, and the output O_n produced after the newly added data stimulate the dynamic kernel n_j is a Gaussian function of the distance between the data and the kernel center, wherein x_{i+1} denotes the current newly added time-series data, Dis() denotes the similarity function used to compare the newly added data x_{i+1} with the center of a dynamic kernel, n_{j,μ} denotes the mean of the Gaussian basis function and serves as the center of the j-th dynamic kernel, and n_{j,σ} denotes the covariance of the Gaussian basis function and serves as the coverage radius of the j-th dynamic kernel, representing the range attributed to the cluster that the kernel represents.
2. The time-series data clustering method based on dynamic kernel development according to claim 1, wherein in step S02, selecting the dynamic kernel with the largest output as the winning dynamic kernel further comprises: updating the memory saturation of the winning dynamic kernel, wherein if the distance of the current newly added time-series data from the center of the winning dynamic kernel is smaller than a preset threshold, the memory-saturation update of the winning dynamic kernel is 0, and if the distance is larger than the preset threshold, the expression o(1 - o) is adopted as the increment Δs of the memory saturation, wherein a radial basis function is used to calculate the variable o in the increment Δs.
3. The time-series data clustering method based on dynamic kernel development according to claim 1, wherein in step S02, determining and updating the winning dynamic kernel according to the competitive learning mode comprises:
when the initial kernel n_1 wins, the initial kernel n_1 splits to generate a new kernel n_{i+1} whose center is the position of the current newly added time-series data x_{i+1}, the memory saturation of the current winning dynamic kernel n_1 is assigned to 0, and the coverage radius of the newly generated kernel is set to Dis(n_{1,μ}, x_{i+1})/3, wherein n_{1,μ} is the center of the current winning dynamic kernel n_1 and Dis() denotes the similarity function;
when the dynamic kernel n_g wins, wherein g ∈ [1, i], the memory saturation of the current winning dynamic kernel n_g is updated to n_{g,m}′, the relation between the memory saturation n_{g,m}′ of the current winning dynamic kernel n_g and the preset saturation threshold s_t is judged, and according to the result the center of the current winning dynamic kernel n_g is adjusted or a splitting operation is executed on it;
when the dynamic kernel n_g does not win, g ∈ [1, i), whether the dynamic kernel n_g needs to be attenuated is judged according to its memory saturation.
4. The time-series data clustering method based on dynamic kernel development according to claim 3, wherein when the dynamic kernel n_g wins:
when n_{g,m}′ < s_t, the center of the current winning dynamic kernel n_g is adjusted according to n_{g,μ}′ = n_{g,m}′ · n_{g,μ} + (1 - n_{g,m}′) · x_{i+1}, wherein n_{g,μ}′ denotes the adjusted center position; the coverage domain n_{g,σ} of the current winning dynamic kernel n_g is kept unchanged, the current newly added time-series data x_{i+1} are attributed to the current winning dynamic kernel n_g, and the category label of the current winning dynamic kernel n_g is copied to the current newly added time-series data x_{i+1};
When n_{g_m}' ≥ s_t, a splitting operation is performed on the current winning dynamic core n_g to generate a new dynamic core n_{i+1}; the center and coverage of the current winning dynamic core n_g are kept unchanged, and its memory saturation is reset to r_t; the center vector of the generated new dynamic core n_{i+1} is given by the position of the current newly added time-series data x_{i+1}, its coverage is set based on the similarity between x_{i+1} and the center of the winning dynamic core n_g, its memory saturation is set to 0, and the current newly added time-series data x_{i+1} is attributed to the same class as the generated new dynamic core n_{i+1}.
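A possible rendering of the two cases of claim 4, again using the DynamicCore fields from the earlier sketch; the coverage assigned to the new core is approximated by Dis(x_{i+1}, n_{g_μ}) because the original formula is not reproduced here.

def adjust_or_split(cores, g, x, s_t, r_t, dis):
    # Adjust the center of the winning core n_g or split it, following claim 4.
    core = cores[g]
    m = core.m                                    # updated memory saturation n_g_m'
    if m < s_t:
        # Move the center toward x_{i+1}; the coverage stays unchanged and
        # x_{i+1} inherits the class label of n_g.
        core.mu = [m * mu_k + (1.0 - m) * x_k for mu_k, x_k in zip(core.mu, x)]
        return g                                  # x_{i+1} is attributed to core n_g
    # m >= s_t: split. n_g keeps its center and coverage, its saturation is reset
    # to r_t, and a new core n_{i+1} is created at x_{i+1} with saturation 0.
    core.m = r_t
    cores.append(DynamicCore(mu=list(x), sigma=dis(x, core.mu), m=0.0))
    return len(cores) - 1                         # x_{i+1} is attributed to the new core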
5. The time-series data clustering method based on dynamic core development according to claim 3, wherein when the dynamic core n_g does not win: if r_t < n_{g_m} < s_t, where r_t is a preset reset threshold, the memory saturation of the dynamic core n_g decays over a preset number of beats down to the reset threshold r_t, while the center and coverage of the dynamic core n_g are kept unchanged; if n_{g_m} < r_t, the dynamic core n_g remains unchanged, i.e., the current newly added time-series data has no effect on the dynamic core n_g.
6. The method according to any one of claims 1 to 5, wherein in step S03, in the process of adjusting the splitting timing of each dynamic core, when a dynamic core reaches the preset saturation threshold and splits, the memory saturation of the winning dynamic core is set to the reset threshold; and when the memory saturation of a dynamic core exceeds the reset threshold but does not reach the preset splitting threshold, the memory saturation decays to the reset threshold beat by beat.
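Claims 5 and 6 describe how a non-winning core's memory saturation decays toward the reset threshold r_t; the per-beat decay amount is not specified in this text, so the sketch below assumes a linear step toward r_t over the preset number of beats.

def decay_if_needed(core, r_t, s_t, beats):
    # Decay a non-winning core's memory saturation toward r_t (claims 5 and 6).
    if core.m < r_t:
        return                                    # below r_t: the new sample has no effect on n_g
    if core.m < s_t:
        step = (core.m - r_t) / max(beats, 1)     # assumed linear decay per beat
        core.m = max(core.m - step, r_t)          # center and coverage stay unchanged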
7. The time-series data clustering method based on dynamic core development according to any one of claims 1 to 5, wherein step S04 comprises: marking each dynamic core in the updated dynamic core set N_{i+1} using a Link() function; during the dynamic adjustment of dynamic core centers and coverage areas, if the center of one of two target dynamic cores enters the coverage area of the other, the two target dynamic cores are marked as the same class; if neither center enters the other's coverage area, the two target dynamic cores are marked as two different classes; finally, the dynamic cores are gathered into different classes.
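One way to realize the Link() labelling described in claim 7 is a union-find pass over the dynamic core set, merging two cores whenever the center of one lies within the coverage radius of the other; the function below is an illustrative sketch, not the patent's implementation.

def link_labels(cores, dis):
    # Assign a class label to every dynamic core in the updated set N_{i+1}:
    # two cores share a label whenever the center of one lies inside the
    # coverage radius of the other (union-find over the core set).
    parent = list(range(len(cores)))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]         # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for a in range(len(cores)):
        for b in range(a + 1, len(cores)):
            if (dis(cores[a].mu, cores[b].mu) <= cores[b].sigma
                    or dis(cores[b].mu, cores[a].mu) <= cores[a].sigma):
                union(a, b)                       # Link(): mark as the same class

    roots = sorted({find(a) for a in range(len(cores))})
    label_of = {r: k for k, r in enumerate(roots)}
    return [label_of[find(a)] for a in range(len(cores))]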
8. A time-series data clustering system based on dynamic core development, comprising a processor and a memory, the memory being configured to store a computer program and the processor being configured to execute the computer program, wherein the processor executes the computer program to perform the method of any one of claims 1 to 7.
CN202111423566.1A 2021-11-26 2021-11-26 Time sequence data clustering method and system based on dynamic kernel development Active CN114139033B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111423566.1A CN114139033B (en) 2021-11-26 2021-11-26 Time sequence data clustering method and system based on dynamic kernel development

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111423566.1A CN114139033B (en) 2021-11-26 2021-11-26 Time sequence data clustering method and system based on dynamic kernel development

Publications (2)

Publication Number Publication Date
CN114139033A CN114139033A (en) 2022-03-04
CN114139033B true CN114139033B (en) 2024-04-16

Family

ID=80388618

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111423566.1A Active CN114139033B (en) 2021-11-26 2021-11-26 Time sequence data clustering method and system based on dynamic kernel development

Country Status (1)

Country Link
CN (1) CN114139033B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115344723A (en) * 2022-06-08 2022-11-15 安徽大学 Digital culture visualization method based on improved constructive coverage clustering algorithm


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9465857B1 (en) * 2013-09-26 2016-10-11 Groupon, Inc. Dynamic clustering for streaming data
CN103714153A (en) * 2013-12-26 2014-04-09 西安理工大学 Density clustering method based on limited area data sampling
CN113269238A (en) * 2021-05-12 2021-08-17 南京邮电大学 Data stream clustering method and device based on density peak value

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"早期时间序列分类方法研究综述";杨梦晨 等;《华东师范大学学报(自然科学版)》(第5期);116-129 *

Also Published As

Publication number Publication date
CN114139033A (en) 2022-03-04

Similar Documents

Publication Publication Date Title
CN110188635B (en) Plant disease and insect pest identification method based on attention mechanism and multi-level convolution characteristics
CN105488528B (en) Neural network image classification method based on improving expert inquiry method
Jiao et al. A hybrid belief rule-based classification system based on uncertain training data and expert knowledge
CN112132014B (en) Target re-identification method and system based on non-supervised pyramid similarity learning
CN108052968B (en) QSFLA-SVM perception intrusion detection method
Song On the weight convergence of Elman networks
CN114139033B (en) Time sequence data clustering method and system based on dynamic kernel development
CN111709468B (en) Training method and device for directional artificial intelligence and storage medium
Carpenter et al. ARTMAP: A self-organizing neural network architecture for fast supervised learning and pattern recognition.
CN107783998A (en) The method and device of a kind of data processing
Ye et al. K-means clustering algorithm based on improved Cuckoo search algorithm and its application
CN111783866B (en) Production logistics early warning information multi-classification method based on improved FOA-SVM
CN115587323A (en) Open environment pattern recognition method based on dynamic neurons
CN116484193A (en) Crop yield prediction method, system, equipment and medium
CN108573275B (en) Construction method of online classification micro-service
CN114067155B (en) Image classification method, device, product and storage medium based on meta learning
CN116311228A (en) Uncertainty sampling-based corn kernel identification method and system and electronic equipment
Sarin et al. Identification of fuzzy classifiers based on the mountain clustering and cuckoo search algorithms
WO2022162839A1 (en) Learning device, learning method, and recording medium
CN113255765A (en) Cognitive learning method based on brain mechanism
Dlapa Cluster restarted DM: New algorithm for global optimisation
WO2024171286A1 (en) Learning device, learning method, and recording medium
Ball Towards the development of cognitive maps in classifier systems
CN113190068A (en) Temperature and humidity detection control method and system for raw materials for feed production
CN117784615B (en) Fire control system fault prediction method based on IMPA-RF

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant