CN109885677A - Multi-faceted big data acquisition and sorting system and method - Google Patents
A multi-faceted big data acquisition and sorting system and method

Publication number: CN109885677A (application CN201811602313.9A)
Authority: CN (China)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention belongs to the field of big data and discloses a multi-faceted big data acquisition and sorting system and method, comprising a main control module, a data acquisition module, a data collection and storage module, a data analysis module and a curve drawing module. The data acquisition module collects the data, classifies and grades them, and records the information of the marked rectangular box regions; the invention adopts the deep learning YOLO model, which is faster and generalizes better, so data acquisition and sorting can be done more quickly and more systematically. Under the control of the main control module, the data collection and storage module saves parameters over a distributed Ethernet. The data analysis module analyzes and processes the data through a weighted neural network, and the curve drawing module draws change curves. The structure of the invention is reasonable and can effectively improve the efficiency of data acquisition and sorting.
Description
Technical field
The invention belongs to the field of big data, and in particular relates to a multi-faceted big data acquisition and sorting system and method.
Background technique
With the arrival of the cloud era, big data has attracted more and more attention. Real-time analysis of large data sets requires a great deal of work on ever larger data volumes, which increases the challenge to algorithms and computing platforms and raises computing costs. Moreover, because of the presence of noise, much useless data is generated, greatly reducing the value of the data. It is therefore particularly important to obtain valuable information quickly from data of various types.
In summary, the problems in the prior art are: network training is inefficient and generalization ability is poor; general data storage has low reliability and transmission suffers from long delays; and when data are classified and sorted, the classification is confused, the precision is low, and the learning time is long.
Summary of the invention
The main purpose of the present invention is to provide a multi-faceted big data acquisition and sorting system and method. The data acquisition module collects the data, classifies and grades them, and records the information of the marked rectangular box regions. The invention adopts the deep learning YOLO model, which is faster and generalizes better, so data acquisition and sorting can be done more quickly and more systematically. Under the control of the main control module, the data collection and storage module saves parameters over a distributed Ethernet. The data analysis module analyzes and processes the data through a weighted neural network, and the curve drawing module draws change curves. The problems in the background art can thus be effectively solved.
To achieve the above object, the invention adopts the following technical scheme: a multi-faceted big data acquisition and sorting method, comprising:

collecting a certain quantity of data, classifying and grading the data, and recording the information of the marked rectangular box regions as the training data set;

training the training data with a deep learning YOLO model, testing the classification performance of the model on randomly drawn samples, and adjusting the model parameters;

analyzing and processing the data with a weighted neural network, and saving parameters over a distributed Ethernet;

drawing change curves with a data processor.
Further, the weighted neural network comprises:

(1) determining the cluster centers C_i (i = 1, 2, …, m, m ≤ n) of the n samples, where m is the number of hidden nodes and n is the total number of samples;

(2) determining the radius σ_i (i = 1, 2, …, m) of each radial basis function as follows:

where i = 1, 2, …, m, m is the number of hidden nodes, Γ_i is the set of samples belonging to kernel-function center C_i, and N_i is the number of elements of the set Γ_i;

(3) initializing the connection weights w_i (i = 1, 2, …, m) with random numbers;

(4) computing, with the model below, the n outputs F_1(x), F_2(x), …, F_n(x) corresponding to the n input samples:

(5) correcting the weights by gradient descent:

letting d_j (j = 1, 2, …, n) be the target output of the j-th sample and F_j (j = 1, 2, …, n) the actual output, the network error e_j is found;

the overall network error is described as:

the partial derivative of ξ(k) with respect to w_j(k) is:

the weights are corrected with the following formula:

where i = 1, 2, …, m, m is the number of hidden nodes, and η is the learning rate of the weight vector; given a minimal error e, when the network output error is less than e, training ends; otherwise, return to step (4) with the corrected weights and compute the network error again.
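The formula images referenced in steps (2), (4) and (5) do not survive in this text. A standard radial-basis-function formulation consistent with the quantities defined above (C_i, σ_i, Γ_i, N_i, w_i, d_j, η) would be the following; this is a reconstruction, not necessarily the patent's exact notation:

```latex
% Step (2): radius of the i-th basis function from the samples in Gamma_i
\sigma_i = \sqrt{\frac{1}{N_i}\sum_{x \in \Gamma_i} \lVert x - C_i \rVert^2},
\qquad i = 1, 2, \dots, m

% Step (4): output for the j-th input sample x_j
F_j(x) = \sum_{i=1}^{m} w_i \exp\!\left(-\frac{\lVert x_j - C_i \rVert^2}{2\sigma_i^2}\right),
\qquad j = 1, 2, \dots, n

% Step (5): per-sample error, overall error, and gradient-descent update
e_j = d_j - F_j, \qquad
\xi(k) = \frac{1}{2}\sum_{j=1}^{n} e_j^{2}, \qquad
w_i(k+1) = w_i(k) - \eta\,\frac{\partial \xi(k)}{\partial w_i(k)}
```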
Further, when the weighted neural network analyzes and processes the data: suppose the target and decision layer has decision indices p_1, p_2, …, p_m, and the network architecture layer below the target and decision layer has index sets C_1, C_2, …, C_N, where each C_i contains elements e_i1, e_i2, …, e_in_i. Taking the decision index p_s (s = 1, 2, …, m) of the target and decision layer as the criterion and element e_jk (k = 1, 2, …, n_j) of C_j as the sub-criterion, the indices in C_i are compared by indirect dominance according to the size of their influence on e_jk, i.e., a judgment matrix is constructed under criterion p_s, and the weight vector is obtained by the eigenvalue method.
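The eigenvalue method applied to a judgment (pairwise-comparison) matrix amounts to taking its normalized principal eigenvector. A minimal sketch in Python, using power iteration; the 3×3 matrix is a hypothetical example, not taken from the patent:

```python
import numpy as np

def eigen_weights(judgment_matrix, iters=100):
    """Weight vector of a pairwise-comparison (judgment) matrix via the
    eigenvalue method: the normalized principal eigenvector."""
    A = np.asarray(judgment_matrix, dtype=float)
    w = np.ones(A.shape[0]) / A.shape[0]   # start from uniform weights
    for _ in range(iters):                 # power iteration
        w = A @ w
        w = w / w.sum()                    # normalize to sum to 1
    return w

# Hypothetical 3x3 judgment matrix (reciprocal and consistent): index 1 is
# twice as influential as index 2 and four times as influential as index 3.
A = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
w = eigen_weights(A)
print(w)  # ≈ [0.571, 0.286, 0.143], i.e. weights in ratio 4 : 2 : 1
```

For a consistent matrix like this one, power iteration converges in a single step; for inconsistent judgment matrices it converges to the principal eigenvector.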
Further, for k = 1, 2, …, n_i, the matrix W_ij shown below is obtained, where the column vectors of W_ij are the ranking vectors of the degree of influence of the elements of C_i on the elements of C_j; if the elements of C_j are not influenced by the elements of C_i, then W_ij = 0.

Further, for i = 1, 2, …, N and j = 1, 2, …, N, the hypermatrix W under decision criterion p_s is obtained. In the hypermatrix W, the element W_ij reflects the one-step dominance of element i over element j; W² can also be computed, whose element w_ij² indicates the two-step dominance of element i over element j, and W² is still column-normalized. By analogy, W³, W⁴, … are computed; when W^∞ exists, the j-th column of W^∞ is the limit relative-weight vector of each element of the network architecture layer with respect to j under criterion p_s, and the values of each row form the local weight vector of the corresponding element; when a row is all zeros, the corresponding local weight is 1. The local weights, arranged in element order, give the local weight vector.
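The limit W^∞ described above can be approximated numerically by raising a column-stochastic hypermatrix to successive powers until it stops changing; each column of the limit then holds the limit relative weights. A sketch under that assumption (the 3×3 hypermatrix is hypothetical):

```python
import numpy as np

def limit_hypermatrix(W, tol=1e-12, max_iter=10000):
    """Raise the column-stochastic hypermatrix W to successive powers
    until W^k converges; the columns of the limit are the limit
    relative-weight vectors described in the text."""
    W = np.asarray(W, dtype=float)
    prev = W.copy()
    for _ in range(max_iter):
        nxt = prev @ W
        if np.abs(nxt - prev).max() < tol:
            return nxt
        prev = nxt
    return prev

# Hypothetical column-stochastic hypermatrix over three elements.
W = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.5],
              [0.2, 0.3, 0.3]])
L = limit_hypermatrix(W)
# In the limit every column is the same stationary weight vector, so each
# row of L holds one element's limit weight repeated across the columns.
```

Because W here is positive and column-stochastic, the powers converge to a rank-one matrix whose identical columns are the stationary weight vector.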
Another object of the present invention is to provide a computer program implementing the multi-faceted big data acquisition and sorting method.

Another object of the present invention is to provide an information data processing terminal implementing the multi-faceted big data acquisition and sorting method.

Another object of the present invention is to provide a computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to execute the multi-faceted big data acquisition and sorting method.
Another object of the present invention is to provide a multi-faceted big data acquisition and sorting system, comprising a main control module, a data acquisition module, a data collection and storage module, a data analysis module and a curve drawing module;

after the data acquisition module has collected a certain quantity of data, it classifies and grades them and records the information of the marked rectangular box regions, completing the production of the training data;

the data collection and storage module is connected to the main control module and comprises a data collection storage server and a switch;

the data analysis module is connected to the main control module and analyzes and processes the efficiency data through the weighted neural network;

the curve drawing module is connected to the main control module and draws efficiency change curves with the data processor.

Further, the data acquisition module trains the training data with the deep learning YOLO model, tests the classification performance of the model on randomly drawn samples, and adjusts the model parameters; the data collection and storage module saves the received operating parameters of the energy-consuming equipment over a distributed Ethernet and performs artificial-intelligence online analysis of the operating parameters.
Compared with the prior art, the invention has the following beneficial effects: the data acquisition module collects the data, classifies and grades them, and records the information of the marked rectangular box regions; the invention adopts the deep learning YOLO model, which is faster and generalizes better, so data acquisition and sorting can be done more quickly and more systematically. Under the control of the main control module, the data collection and storage module saves parameters over a distributed Ethernet with high reliability; nodes in the network share resources easily, the distribution of information flow can be improved, routes can select the optimal path, and the transmission delay is small. The data analysis module analyzes and processes the data through the weighted neural network, which improves the classification precision while shortening the learning time, and the curve drawing module draws the change curves. Big data acquisition and sorting thus become more systematic and more effective.
The weighted neural network of the present invention comprises:

(1) determining the cluster centers C_i (i = 1, 2, …, m, m ≤ n) of the n samples, where m is the number of hidden nodes and n is the total number of samples;

(2) determining the radius σ_i (i = 1, 2, …, m) of each radial basis function as follows:

where i = 1, 2, …, m, m is the number of hidden nodes, Γ_i is the set of samples belonging to kernel-function center C_i, and N_i is the number of elements of the set Γ_i;

(3) initializing the connection weights w_i (i = 1, 2, …, m) with random numbers;

(4) computing, with the model below, the n outputs F_1(x), F_2(x), …, F_n(x) corresponding to the n input samples:

(5) correcting the weights by gradient descent:

letting d_j (j = 1, 2, …, n) be the target output of the j-th sample and F_j (j = 1, 2, …, n) the actual output, the network error e_j is found;

the overall network error is described as:

the partial derivative of ξ(k) with respect to w_j(k) is:

the weights are corrected with the following formula:

where i = 1, 2, …, m, m is the number of hidden nodes, and η is the learning rate of the weight vector; given a minimal error e, when the network output error is less than e, training ends; otherwise, return to step (4) with the corrected weights and compute the network error again.
When the weighted neural network analyzes and processes the data: suppose the target and decision layer has decision indices p_1, p_2, …, p_m, and the network architecture layer below the target and decision layer has index sets C_1, C_2, …, C_N, where each C_i contains elements e_i1, e_i2, …, e_in_i. Taking the decision index p_s (s = 1, 2, …, m) of the target and decision layer as the criterion and element e_jk (k = 1, 2, …, n_j) of C_j as the sub-criterion, the indices in C_i are compared by indirect dominance according to the size of their influence on e_jk, i.e., a judgment matrix is constructed under criterion p_s, and the weight vector is obtained by the eigenvalue method. This allows the neural network to analyze the data with precision and provides a basis for the operation of subsequent procedures.
Brief description of the drawings
Fig. 1 is an overall structural diagram of the multi-faceted big data acquisition and sorting system provided by an embodiment of the present invention. In the figure: 1, main control module; 2, data acquisition module; 3, data collection and storage module; 4, data analysis module; 5, curve drawing module.

Fig. 2 is a flow chart of the multi-faceted big data acquisition and sorting method provided by an embodiment of the present invention.
Specific embodiments

To make the technical means, creative features, objectives and effects achieved by the present invention easy to understand, the present invention is further elaborated below with reference to specific embodiments.
As shown in Fig. 1, the multi-faceted big data acquisition and sorting system provided by an embodiment of the present invention comprises: a main control module 1, a data acquisition module 2, a data collection and storage module 3, a data analysis module 4 and a curve drawing module 5.

Main control module 1: controls the operation of each module through a single-chip microcontroller.

Data acquisition module 2: collects data, produces the training data, and transmits the data to the data analysis module 4 and the data collection and storage module 3.

Data collection and storage module 3: connected to the main control module 1; comprises a data collection storage server and a switch.

Data analysis module 4: connected to the main control module 1; analyzes and processes the efficiency data.

Curve drawing module 5: connected to the main control module 1; draws efficiency change curves.
The data collection and storage module 3 provided by the embodiment of the present invention saves the received operating parameters of the energy-consuming equipment over a distributed Ethernet and performs artificial-intelligence online analysis of the operating parameters.

The data acquisition module 2 provided by the embodiment of the present invention trains the training data with the deep learning YOLO model, tests the classification performance of the model on randomly drawn samples, and adjusts the model parameters.
As shown in Fig. 2, the multi-faceted big data acquisition and sorting method provided by an embodiment of the present invention comprises:

S101: collecting a certain quantity of data, classifying and grading the data, and recording the information of the marked rectangular box regions as the training data set;

S102: training the training data with the deep learning YOLO model, testing the classification performance of the model on randomly drawn samples, and adjusting the model parameters;

S103: analyzing and processing the data with the weighted neural network, and saving parameters over a distributed Ethernet;

S104: drawing change curves with the data processor.
In step S102, the deep learning YOLO model training method provided by the embodiment of the present invention comprises:

1. annotating the collected and classified data set with labelImg and saving; after saving, an XML file with the same name as the annotated file is generated;
2. downloading the YOLO project;
3. modifying and saving the Makefile stored in the darknet directory;
4. preparing the data set: creating a VOCdevkit directory under the scripts directory and storing the relevant data in it;
5. modifying voc_label.py according to VOCdevkit and the annotation names from step 1, and running it;
6. downloading weights pre-trained on ImageNet;
7. modifying cfg/voc.data, data/voc.names and cfg/yolov3-voc.cfg according to the practical application;
8. starting training.
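Step 5 above (voc_label.py in the darknet project) converts labelImg's VOC-style XML annotations into the normalized text labels that YOLO trains on. A minimal sketch of that conversion, assuming the usual labelImg XML layout:

```python
import xml.etree.ElementTree as ET

def voc_box_to_yolo(size, box):
    """Convert a VOC (xmin, ymin, xmax, ymax) box to the normalized
    (x_center, y_center, width, height) format YOLO training expects."""
    img_w, img_h = size
    xmin, ymin, xmax, ymax = box
    x = (xmin + xmax) / 2.0 / img_w
    y = (ymin + ymax) / 2.0 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return x, y, w, h

def convert_annotation(xml_text, class_names):
    """Parse one labelImg XML annotation and emit YOLO label lines."""
    root = ET.fromstring(xml_text)
    img_w = int(root.find("size/width").text)
    img_h = int(root.find("size/height").text)
    lines = []
    for obj in root.iter("object"):
        cls = obj.find("name").text
        if cls not in class_names:
            continue  # skip classes outside the configured .names list
        b = obj.find("bndbox")
        box = tuple(float(b.find(t).text)
                    for t in ("xmin", "ymin", "xmax", "ymax"))
        x, y, w, h = voc_box_to_yolo((img_w, img_h), box)
        lines.append(f"{class_names.index(cls)} {x:.6f} {y:.6f} {w:.6f} {h:.6f}")
    return lines
```

For instance, a 200×100 image with one "car" box (50, 20, 150, 80) yields the label line `0 0.500000 0.500000 0.500000 0.600000`.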
In step S103, the weighted neural network provided by the embodiment of the present invention comprises:

(1) determining the cluster centers C_i (i = 1, 2, …, m, m ≤ n) of the n samples, where m is the number of hidden nodes and n is the total number of samples;

(2) determining the radius σ_i (i = 1, 2, …, m) of each radial basis function as follows:

where i = 1, 2, …, m, m is the number of hidden nodes, Γ_i is the set of samples belonging to kernel-function center C_i, and N_i is the number of elements of the set Γ_i;

(3) initializing the connection weights w_i (i = 1, 2, …, m) with random numbers;

(4) computing, with the model below, the n outputs F_1(x), F_2(x), …, F_n(x) corresponding to the n input samples:

(5) correcting the weights by gradient descent:

letting d_j (j = 1, 2, …, n) be the target output of the j-th sample and F_j (j = 1, 2, …, n) the actual output, the network error e_j is found;

the overall network error can be described as:

the partial derivative of ξ(k) with respect to w_j(k) is:

the weights are corrected with the following formula:

where i = 1, 2, …, m, m is the number of hidden nodes, and η is the learning rate of the weight vector; given a minimal error e, when the network output error is less than e, training ends; otherwise, return to step (4) with the corrected weights and compute the network error again.
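The training loop just described can be sketched in a few lines. This is a minimal illustration under stated simplifications (a single shared radius instead of the per-center σ_i, and fixed centers supplied by the caller), not the patent's exact procedure:

```python
import numpy as np

def train_rbf_weights(X, d, centers, eta=0.1, e_min=1e-4,
                      max_epochs=5000, seed=0):
    """Minimal sketch of the weighted RBF training loop: fixed centers,
    a shared radius, and gradient descent on the output weights w_i
    until the overall error falls below the minimal error e_min."""
    rng = np.random.default_rng(seed)
    # Distances between every sample and every center.
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    sigma = max(dists.mean(), 1e-6)                # simplified shared radius
    Phi = np.exp(-dists**2 / (2 * sigma**2))       # hidden-layer activations
    w = 0.1 * rng.standard_normal(len(centers))    # step (3): random weights
    for _ in range(max_epochs):
        F = Phi @ w                                # step (4): network outputs
        e = d - F                                  # per-sample errors e_j
        xi = 0.5 * np.sum(e**2)                    # overall network error
        if xi < e_min:                             # stop at minimal error e
            break
        w += (eta / len(X)) * (Phi.T @ e)          # step (5): descent update
    return w, xi

# Fit a toy 1-D target with centers placed on the samples themselves.
X = np.linspace(0.0, 1.0, 8).reshape(-1, 1)
d = np.sin(2 * np.pi * X[:, 0])
w, xi = train_rbf_weights(X, d, centers=X.copy())
```

With the centers on the samples, the Gaussian kernel matrix is positive definite, so gradient descent steadily reduces the overall error ξ below its initial value.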
In the embodiment of the present invention, when the weighted neural network analyzes and processes the data, suppose the target and decision layer has decision indices p_1, p_2, …, p_m, and the network architecture layer below the target and decision layer has index sets C_1, C_2, …, C_N, where each C_i contains elements e_i1, e_i2, …, e_in_i. Taking the decision index p_s (s = 1, 2, …, m) of the target and decision layer as the criterion and element e_jk (k = 1, 2, …, n_j) of C_j as the sub-criterion, the indices in C_i are compared by indirect dominance according to the size of their influence on e_jk, i.e., a judgment matrix is constructed under criterion p_s, and the weight vector is obtained by the eigenvalue method.

In the embodiment of the present invention, for k = 1, 2, …, n_i, the matrix W_ij shown below is obtained, where the column vectors of W_ij are the ranking vectors of the degree of influence of the elements of C_i on the elements of C_j; if the elements of C_j are not influenced by the elements of C_i, then W_ij = 0.

In the embodiment of the present invention, for i = 1, 2, …, N and j = 1, 2, …, N, the hypermatrix W under decision criterion p_s is obtained. In the hypermatrix W, the element W_ij reflects the one-step dominance of element i over element j; W² can also be computed, whose element w_ij² indicates the two-step dominance of element i over element j, and W² is still column-normalized. By analogy, W³, W⁴, … are computed; when W^∞ exists, the j-th column of W^∞ is the limit relative-weight vector of each element of the network architecture layer with respect to j under criterion p_s, and the values of each row form the local weight vector of the corresponding element; when a row is all zeros, the corresponding local weight is 1. The local weights, arranged in element order, give the local weight vector.
The above embodiments may be implemented wholly or partly by software, hardware, firmware or any combination thereof. When implemented wholly or partly in the form of a computer program product, the computer program product comprises one or more computer instructions. When the computer program instructions are loaded or executed on a computer, the processes or functions described in the embodiments of the present invention are generated wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable device. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any usable medium the computer can access, or a data storage device, such as a server or data center, integrating one or more usable media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD) or a semiconductor medium (e.g., a solid-state disk (SSD)).
The above description is only of preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent replacements and improvements made within the spirit and principles of the present invention shall be included in the scope of protection of the present invention.
Claims (10)
1. A multi-faceted big data acquisition and sorting method, characterized in that the method comprises:
collecting a certain quantity of data, classifying and grading the data, and recording the information of the marked rectangular box regions as the training data set;
training the training data with a deep learning YOLO model, testing the classification performance of the model on randomly drawn samples, and adjusting the model parameters;
analyzing and processing the data with a weighted neural network, and saving parameters over a distributed Ethernet;
drawing change curves with a data processor.
2. The multi-faceted big data acquisition and sorting method of claim 1, characterized in that the weighted neural network comprises:
(1) determining the cluster centers C_i (i = 1, 2, …, m, m ≤ n) of the n samples, where m is the number of hidden nodes and n is the total number of samples;
(2) determining the radius σ_i (i = 1, 2, …, m) of each radial basis function as follows:
where i = 1, 2, …, m, m is the number of hidden nodes, Γ_i is the set of samples belonging to kernel-function center C_i, and N_i is the number of elements of the set Γ_i;
(3) initializing the connection weights w_i (i = 1, 2, …, m) with random numbers;
(4) computing, with the model below, the n outputs F_1(x), F_2(x), …, F_n(x) corresponding to the n input samples:
(5) correcting the weights by gradient descent:
letting d_j (j = 1, 2, …, n) be the target output of the j-th sample and F_j (j = 1, 2, …, n) the actual output, the network error e_j is found;
the overall network error is described as:
the partial derivative of ξ(k) with respect to w_j(k) is:
the weights are corrected with the following formula:
where i = 1, 2, …, m, m is the number of hidden nodes, and η is the learning rate of the weight vector; given a minimal error e, when the network output error is less than e, training ends; otherwise, return to step (4) with the corrected weights and compute the network error again.
3. The multi-faceted big data acquisition and sorting method of claim 1, characterized in that, when the weighted neural network analyzes and processes the data, supposing the target and decision layer has decision indices p_1, p_2, …, p_m and the network architecture layer below the target and decision layer has index sets C_1, C_2, …, C_N, where each C_i contains elements e_i1, e_i2, …, e_in_i, then, taking the decision index p_s (s = 1, 2, …, m) of the target and decision layer as the criterion and element e_jk (k = 1, 2, …, n_j) of C_j as the sub-criterion, the indices in C_i are compared by indirect dominance according to the size of their influence on e_jk, i.e., a judgment matrix is constructed under criterion p_s, and the weight vector is obtained by the eigenvalue method.
4. The multi-faceted big data acquisition and sorting method of claim 3, characterized in that, for k = 1, 2, …, n_i, the matrix W_ij shown below is obtained, where the column vectors of W_ij are the ranking vectors of the degree of influence of the elements of C_i on the elements of C_j; if the elements of C_j are not influenced by the elements of C_i, then W_ij = 0.
5. The multi-faceted big data acquisition and sorting method of claim 3, characterized in that, for i = 1, 2, …, N and j = 1, 2, …, N, the hypermatrix W under decision criterion p_s is obtained; in the hypermatrix W, the element W_ij reflects the one-step dominance of element i over element j; W² can also be computed, whose element w_ij² indicates the two-step dominance of element i over element j, and W² is still column-normalized; by analogy, W³, W⁴, … are computed; when W^∞ exists, the j-th column of W^∞ is the limit relative-weight vector of each element of the network architecture layer with respect to j under criterion p_s; the values of each row form the local weight vector of the corresponding element; when a row is all zeros, the corresponding local weight is 1; the local weights, arranged in element order, give the local weight vector.
6. A computer program implementing the multi-faceted big data acquisition and sorting method of any one of claims 1 to 5.
7. An information data processing terminal implementing the multi-faceted big data acquisition and sorting method of any one of claims 1 to 5.
8. A computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to execute the multi-faceted big data acquisition and sorting method of any one of claims 1 to 5.
9. A multi-faceted big data acquisition and sorting system, characterized in that the system comprises a main control module, a data acquisition module, a data collection and storage module, a data analysis module and a curve drawing module;
after the data acquisition module has collected a certain quantity of data, it classifies and grades them and records the information of the marked rectangular box regions, completing the production of the training data;
the data collection and storage module is connected to the main control module and comprises a data collection storage server and a switch;
the data analysis module is connected to the main control module and analyzes and processes the efficiency data through the weighted neural network;
the curve drawing module is connected to the main control module and draws efficiency change curves with the data processor.
10. The multi-faceted big data acquisition and sorting system of claim 9, characterized in that: the data acquisition module trains the training data with the deep learning YOLO model, tests the classification performance of the model on randomly drawn samples, and adjusts the model parameters;
the data collection and storage module saves the received operating parameters of the energy-consuming equipment over a distributed Ethernet and performs artificial-intelligence online analysis of the operating parameters.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201811602313.9A | 2018-12-26 | 2018-12-26 | Multi-faceted big data acquisition and sorting system and method
Publications (1)

Publication Number | Publication Date
---|---
CN109885677A | 2019-06-14
Family ID: 66925237
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106570148A (en) * | 2016-10-27 | 2017-04-19 | 浙江大学 | Convolutional neutral network-based attribute extraction method |
US20170212892A1 (en) * | 2015-09-22 | 2017-07-27 | Tenor, Inc. | Predicting media content items in a dynamic interface |
CN108629288A (en) * | 2018-04-09 | 2018-10-09 | 华中科技大学 | A kind of gesture identification model training method, gesture identification method and system |
CN108765189A (en) * | 2018-05-15 | 2018-11-06 | 国网江苏省电力有限公司电力科学研究院 | Open shelf depreciation big data based on intelligent diagnostics algorithm manages system |
CN108897737A (en) * | 2018-06-28 | 2018-11-27 | 中译语通科技股份有限公司 | A kind of core vocabulary special topic construction method and system based on big data analysis |
CN108937967A (en) * | 2018-05-29 | 2018-12-07 | 智众伟业(天津)科技有限公司南宁分公司 | A kind of psychology data memory promotion detection method and system based on VR technology |
Non-Patent Citations (1)
Title |
---|
Yang Yang et al., "Construction of a power big data platform and application of real-time line-loss anomaly detection", Modern Computer. * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106973057B (en) | A kind of classification method suitable for intrusion detection | |
CN102651088B (en) | Classification method for malicious code based on A_Kohonen neural network | |
Li et al. | A light gradient boosting machine for remaining useful life estimation of aircraft engines | |
CN104052612B (en) | A kind of Fault Identification of telecommunication service and the method and system of positioning | |
CN109002845A (en) | Fine granularity image classification method based on depth convolutional neural networks | |
CN106201871A (en) | Based on the Software Defects Predict Methods that cost-sensitive is semi-supervised | |
CN108833409A (en) | webshell detection method and device based on deep learning and semi-supervised learning | |
CN107239908A (en) | A kind of system maturity assessment method of information system | |
CN103886030B (en) | Cost-sensitive decision-making tree based physical information fusion system data classification method | |
CN110309884A (en) | Electricity consumption data anomalous identification system based on ubiquitous electric power Internet of Things net system | |
CN107944472A (en) | A kind of airspace operation situation computational methods based on transfer learning | |
CN109165819A (en) | A kind of active power distribution network reliability fast evaluation method based on improvement AdaBoost.M1-SVM | |
Wang et al. | Abnormal detection technology of industrial control system based on transfer learning | |
CN107239798A (en) | A kind of feature selection approach of software-oriented defect number prediction | |
WO2017071369A1 (en) | Method and device for predicting user unsubscription | |
CN107391452A (en) | A kind of software defect estimated number method based on data lack sampling and integrated study | |
CN110288028A (en) | ECG detecting method, system, equipment and computer readable storage medium | |
Cheng et al. | Blocking bug prediction based on XGBoost with enhanced features | |
CN111586728A (en) | Small sample characteristic-oriented heterogeneous wireless network fault detection and diagnosis method | |
Yan et al. | TL-CNN-IDS: transfer learning-based intrusion detection system using convolutional neural network | |
CN110196911B (en) | Automatic classification management system for civil data | |
CN109885677A (en) | A kind of multi-faceted big data acquisition clearing system and method | |
CN113970799A (en) | Bridge meteorological monitoring system, method, equipment and storage medium based on narrow-band communication | |
CN111177975A (en) | Aviation equipment availability prediction method based on artificial intelligence | |
Kaur et al. | Performance evaluation of reusable software components |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| CB03 | Change of inventor or designer information | Inventor after: Cheng Guogen; Li Xinjie. Inventor before: Cheng Guogen; Li Xinran
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190614