CN108171234B - Operation complex operation state monitoring system and method based on cross validation

Info

Publication number
CN108171234B
Authority
CN
China
Prior art keywords
operation state
data
complex
roi
optical flow
Prior art date
Legal status
Active
Application number
CN201711427712.1A
Other languages
Chinese (zh)
Other versions
CN108171234A (en
Inventor
唐文奇 (Tang Wenqi)
黄晓维 (Huang Xiaowei)
Current Assignee
Ibaoli Network Beijing Technology Co., Ltd.
Original Assignee
Ibaoli Network Beijing Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Ibaoli Network Beijing Technology Co., Ltd.
Priority to CN201711427712.1A
Publication of CN108171234A
Application granted
Publication of CN108171234B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/215 Motion-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 1/00 Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
    • G07C 1/10 Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people, together with the recording, indicating or registering of other data, e.g. of signs of identity
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00 Individual registration on entry or exit
    • G07C 9/30 Individual registration on entry or exit not involving the use of a pass
    • G07C 9/38 Individual registration on entry or exit not involving the use of a pass with central registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20092 Interactive image processing based on input by user
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30232 Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a cross-validation-based system and method for monitoring the operation state of an operation complex. Video data of the complex's equipment operation, crowd flow, automobile flow and the like are collected; the video data are then identified and compressed by an operation state identification method based on a neuromorphic computing network, giving operation state identification data; finally, the identification data are compared and cross-validated against actual measurement data of the complex, such as water, electricity and gas readings, staff attendance data and automobile entry/exit card reading data, so that the two data sources supplement each other and the operation state of the operation complex is monitored accurately. The invention provides data support for banks, financial institutions and other lenders assessing the risk of lending to the operation complex, and helps these external parties grasp the state of the target operation complex and make well-informed decisions.

Description

Operation complex operation state monitoring system and method based on cross validation
Technical Field
The invention belongs to the technical field of operation complex operation monitoring, and particularly relates to the design of a cross-validation-based system and method for monitoring the operation state of an operation complex.
Background
Banks and financial institutions that lend to operation complexes (factories, farms, enterprises, logistics centers, etc.) are very concerned about whether these complexes are operating normally, so their operating conditions need to be monitored. However, dispatching staff or commissioning third-party agencies to monitor on site is labor-intensive and costly, especially when particularly large sums of money are involved. Meanwhile, existing operation monitoring technology often relies on a single data source, which can lead to large errors in the monitored data.
The operation of an operation complex is supported by personnel, water, electricity, gas and logistics vehicles, and monitoring all of this related information can help external banks, lending financial institutions and other interested parties grasp the state of the target operation complex. The working areas and production workplaces of the complex are monitored through its internal information interfaces, video and audio acquisition, and similar means, yielding noise data, electricity meter data, gas meter data, people flow at entrances, exits and key locations, attendance machine records, vehicle entry/exit card data and the like; cross-validation processing allows the value of these objective data to be exploited fully.
Disclosure of Invention
The invention aims to solve the problems that existing operation monitoring technology for operation complexes relies on a single data source and is prone to error, and provides a cross-validation-based system and method for monitoring the operation state of an operation complex.
The technical scheme of the invention is as follows: a cross-validation-based operation complex operation state monitoring system comprising:
a video acquisition device for collecting operation video data F1 of the operation complex;
a neural network identification device for identifying and compressing the operation video data F1 by an operation state identification method based on a neuromorphic computing network, obtaining operation state identification data F2;
a data acquisition device for collecting operation state actual measurement data F3 of the operation complex;
a cross-validation comparison device for analyzing and comparing the operation state identification data F2 with the operation state actual measurement data F3 to obtain a cross-validation comparison result for the operation state of the operation complex.
The video acquisition equipment, the neural network identification device and the cross-validation comparison device are connected in communication in sequence, and the data acquisition equipment is connected in communication with the cross-validation comparison device.
The invention has the beneficial effects that: the operation video data F1 of the operation complex are collected and processed into operation state identification data F2, which are then compared and cross-validated with the collected operation state actual measurement data F3, so that the two types of data supplement each other and the operation state of the operation complex is monitored accurately.
Furthermore, the video acquisition equipment comprises an operation monitoring camera group, a crowd flow monitoring camera group and an automobile flow monitoring camera group of operation complex equipment; the operation monitoring camera group of the operation complex equipment comprises a plurality of cameras arranged in the operation complex workplace; the crowd flow monitoring camera group comprises a plurality of cameras arranged in a working place and a living place of an operation complex; the automobile flow monitoring camera group comprises a plurality of cameras arranged at the gate and the road junction of the operation complex.
The data acquisition equipment comprises a water, electricity and gas acquisition device, an attendance machine and an automobile access control device; the water, electricity and gas acquisition device comprises a water meter collector, an electricity meter collector, a gas meter collector, a temperature sensor, a noise sensor and a brightness sensor; the temperature sensor, the noise sensor and the brightness sensor are all arranged in each workplace of the operation complex; the attendance machine is arranged at the door of each workplace of the operation complex; the automobile access control device is arranged at the gate of the operation complex.
The beneficial effects of the further scheme are as follows: through reasonable layout and installation of the cameras, the operation video data F1 of the operation complex can be collected more completely; through reasonable layout and installation of the data acquisition equipment, such as the water, electricity and gas acquisition device, the attendance machine and the automobile access control device, the operation state actual measurement data F3 of the operation complex can be acquired more completely.
Further, the operation video data F1 includes device operation video data, crowd flow video data, and car flow video data; the operation state identification data F2 comprises water, electricity and gas identification data, crowd flow identification data and automobile flow identification data; the operation state actual measurement data F3 comprises water, electricity and gas actual measurement data, employee attendance data and automobile access card reading data; the water, electricity and gas actual measurement data comprises water meter data, electricity meter data, gas meter data, temperature data, noise data and brightness data.
The beneficial effects of the further scheme are as follows: the operation of the complex is supported by personnel, water, electricity, gas and logistics vehicles; monitoring and comparing these three types of data enables accurate monitoring of the operation state of the operation complex and provides data support for banks and financial institutions lending to the operation complex.
The invention also provides a cross validation-based operation state monitoring method for the operation complex, which comprises the following steps:
and S1, installing and laying out the video acquisition equipment and the data acquisition equipment.
And S2, collecting operation video data of the operation complex by a video collecting device F1.
S3, in the neural network recognition device, the operation video data F1 is recognized and compressed by an operation state recognition method based on the neural morphology calculation network, and operation state recognition data F2 is obtained.
And S4, collecting operation state measured data of the operation complex by the data collecting device F3.
And S5, analyzing and comparing the operation state identification data F2 and the operation state actual measurement data F3 in the cross validation comparison device to obtain a cross validation comparison result of the operation state of the operation complex.
The invention has the beneficial effects that: by comparing and cross-validating the operation state identification data F2 and the operation state actual measurement data F3, the problem of monitoring the operation state of the target operation complex is effectively solved, helping external banks, lending financial institutions and other interested parties grasp the state of the target operation complex and make well-informed decisions.
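To make the flow of steps S1-S5 concrete, the following minimal sketch (in Python; all names are illustrative assumptions, not part of the invention) wires the recognition and comparison stages together:

```python
# Minimal sketch of the S1-S5 dataflow; every name here is an assumption.
from dataclasses import dataclass, field

@dataclass
class CrossValidationReport:
    consistent: bool                                # do F2 and F3 agree overall?
    deviations: dict = field(default_factory=dict)  # per-channel differences

def monitor_operation_state(video_frames, measured_f3, recognize, compare):
    """S2-S5: turn video into identification data F2, then cross-validate
    it against the measured data F3 from meters, attendance machines and
    the automobile access control."""
    f2 = recognize(video_frames)       # S3: neuromorphic recognition -> F2
    return compare(f2, measured_f3)    # S5: cross-validation comparison
```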
Further, step S1 is specifically:
installing cameras for monitoring equipment operation, together with a temperature sensor, a noise sensor and a brightness sensor, in each workplace of the operation complex, and installing a water meter collector, an electricity meter collector and a gas meter collector on each water meter, electricity meter and gas meter; installing cameras for monitoring crowd flow in each working and living area of the operation complex, and arranging an attendance machine at the door of each workplace of the operation complex; installing cameras for monitoring automobile flow at the gate and each road junction of the operation complex, and arranging the automobile access control device at the gate of the operation complex.
The beneficial effects of the further scheme are as follows: through reasonable layout and installation of the cameras, the operation video data F1 of the operation complex can be collected more completely; through reasonable layout and installation of the data acquisition equipment, such as the water, electricity and gas acquisition device, the attendance machine and the automobile access control device, the operation state actual measurement data F3 of the operation complex can be acquired more completely.
Further, the neuromorphic computing network in step S3 is a generalized regression neural network, and step S3 includes the following sub-steps:
S31, extracting dynamic texture features based on the optical flow field.
S32, multi-directional motion segmentation based on dynamic texture and a level set algorithm.
S33, operation state regression estimation based on a GRNN (Generalized Regression Neural Network).
Step S31 specifically includes the following substeps:
S311, calculating the optical flow field: a Brox multi-resolution optical flow estimation algorithm is applied to two consecutive frame images of the operation video data F1 to obtain the optical flow field (E, φ), where E denotes the optical flow intensity and φ denotes the direction angle of the optical flow field.
S312, perspective normalization of the optical flow intensity: the optical flow intensity E at every point of the optical flow field is normalized by a perspective quadrilateral method, yielding the normalized optical flow field (E′, φ); this normalized optical flow field constitutes the dynamic texture features.
Step S32 specifically includes the following substeps:
S321, dividing direction angle intervals: based on prior knowledge of the motion directions of equipment, crowds or vehicles, the direction angle φ of the optical flow field is divided into n mutually disjoint direction angle intervals Φ_i, where i = 1, 2, ..., n.
S322, obtaining sub optical flow fields of the different direction angle intervals: the normalized optical flow field (E′, φ) is scanned point by point, and each point is assigned to the direction angle interval Φ_i that contains its direction angle; after the scan, the normalized optical flow field is decomposed into sub optical flow fields, one per direction angle interval.
S323, sub optical flow field segmentation: in each sub optical flow field, the normalized intensity E′ is segmented by a Renyi entropy threshold method, giving the motion segmentation result S_i of that sub optical flow field.
S324, obtaining the initial segmentation result: the S_i are merged and labelled according to their direction angle intervals, giving the initial segmentation result S of equipment, crowds or vehicles with different motion directions.
S325, obtaining the ROI (Region Of Interest) set: the initial segmentation result S is optimized with a variational level set algorithm that requires no re-initialization of the level set function, yielding the ROI set of equipment, crowds or vehicles corresponding to the different motion directions, namely {ROI(i, j) | i = 1, 2, ..., n; j = 1, 2, ..., m(i)}, where m(i) is the number of ROIs belonging to direction angle interval Φ_i.
Step S33 specifically includes the following substeps:
S331, ROI feature extraction: for each ROI, the features closely related to the operation state are obtained, namely the ROI area, the ROI perimeter, the ROI perimeter-to-area ratio, the number of edge points inside the ROI, the ROI fractal dimension and the ROI statistical topographic features; these are used for the regression analysis of the operation state.
S332, obtaining a sample set for network training: a number of representative ROIs with different equipment, crowd or vehicle densities and occlusion degrees are selected from the scene as the sample set for network training.
S333, building and training the GRNN model: with the ROI area, the ROI perimeter, the ROI perimeter-to-area ratio, the number of edge points inside the ROI, the ROI fractal dimension and the ROI statistical topographic features as network inputs, and the number of devices, pedestrians or vehicles in the ROI as network output, a GRNN is constructed; given an output error limit, the GRNN is trained by cross-validation over the sample set, and the optimal SPREAD value is obtained by cyclic comparison.
The statistical topographic features of the ROI comprise, with respect to a variable horizontal plane z = α, the number of mountain-peak entities together with its statistical mean and variance, and the number of mountain-valley entities together with its statistical mean and variance.
S334, obtaining the operation state estimator: the trained GRNN serves as an estimator of the operation state within a single ROI; its input is the ROI features, and its output is the number of devices, pedestrians or vehicles in the ROI.
S335, outputting the operation state prediction results: the features of each ROI in the ROI set are extracted and fed to the operation state estimator in turn, giving the operation state prediction for each ROI; the predictions of ROIs belonging to the same direction angle interval are merged into the prediction for that interval, and the predictions are output per direction angle interval, forming the operation state identification data F2.
The beneficial effects of the further scheme are as follows: by modelling moving equipment, crowds or vehicles as a whole through optical-flow-based dynamic texture, the invention avoids extracting and tracking individual features of equipment, pedestrians or vehicles, which greatly improves the algorithm's robustness to mutual occlusion while still accounting for both the integrity and the differences of the motion, so that equipment, crowds or vehicles can be segmented by motion direction. In addition, the introduction of the GRNN-based operation state regression model gives the operation state estimator stronger generalization ability and effectively reduces the influence of factors such as mutual occlusion between equipment, pedestrians or vehicles and the quality of their segmentation on the operation state estimate. The method is therefore well suited to operation state estimation scenarios requiring low complexity, high accuracy and high real-time performance.
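For reference, a GRNN is mathematically Nadaraya-Watson kernel regression with a Gaussian kernel whose width is the SPREAD parameter. A minimal sketch of the prediction step and of the cyclic SPREAD search by sample-set cross-validation described in S333 (variable names assumed):

```python
import numpy as np

def grnn_predict(X_train, y_train, X, spread):
    """GRNN output: Gaussian-kernel weighted average of training targets."""
    d2 = ((X[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)  # squared distances
    w = np.exp(-d2 / (2.0 * spread ** 2))                      # pattern-layer weights
    return (w @ y_train) / np.clip(w.sum(axis=1), 1e-12, None)

def select_spread(X, y, candidates, k=5):
    """S333 sketch: choose the SPREAD minimizing k-fold cross-validation error."""
    folds = np.array_split(np.random.permutation(len(X)), k)
    best, best_err = None, np.inf
    for s in candidates:                     # cyclic comparison over candidates
        err = 0.0
        for f in folds:
            train = np.ones(len(X), dtype=bool)
            train[f] = False
            pred = grnn_predict(X[train], y[train], X[f], s)
            err += ((pred - y[f]) ** 2).mean()
        if err < best_err:
            best, best_err = s, err
    return best
```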
Further, the method for analyzing and comparing the operation status identification data F2 and the operation status measured data F3 in step S5 includes a quasi-real-time data difference comparison method, an average statistical data difference comparison method, a time difference comparison method, and a neuromorphic comparison method.
The beneficial effects of the further scheme are as follows: analyzing and comparing the operation state identification data F2 and the operation state actual measurement data F3 with a plurality of comparison methods makes the final cross-validation comparison result more accurate.
Further, the quasi real-time data difference comparison method specifically comprises the following steps: the operation status identification data F2 and the operation status actual measurement data F3 are compared in near real time at intervals of seconds, and the comparison is made as to whether the two are the same.
The average statistical data difference comparison method is specifically: over a time interval T1, the operation state identification data F2 and the operation state actual measurement data F3 are accumulated, their respective means over T1 are calculated, and the two means are compared for equality; the time interval T1 is set to one minute, one hour, one day or one week.
The time difference comparison method is specifically: the real-time operation state identification data F2 is compared with the operation state actual measurement data F3 recorded a certain time interval T2 later or earlier, and the two are checked for equality; the time interval T2 is set to one minute, one hour, one day or one week.
The beneficial effects of the further scheme are as follows: the latest operation state of the operation complex can be obtained in time by adopting a quasi-real-time data difference comparison method, the operation state of the operation complex in a period of time can be obtained by adopting an average statistical data difference comparison method, and the fluctuation condition of the operation state of the operation complex in a period of time can be obtained by adopting a time difference comparison method.
Further, the neuromorphic comparison method specifically comprises the following sub-steps:
S51, performing time-compression preprocessing on the operation state identification data F2 and the operation state actual measurement data F3, compressing the time interval from seconds up to days.
S52, training the neuromorphic computing method in the cross-validation comparison device.
S53, processing the operation state identification data F2 and the operation state actual measurement data F3 with the neuromorphic computing method in the cross-validation comparison device to obtain a cross-validation comparison result for the operation state of the operation complex.
The beneficial effects of the further scheme are as follows: the operation state identification data F2 and the operation state actual measurement data F3 are processed by the neuromorphic computing method so that the two types of data supplement and cross-validate each other, finally yielding a cross-validation comparison result for the operation state of the operation complex; this helps external banks, lending financial institutions and other interested parties grasp the state of the target operation complex and make well-informed decisions.
Further, step S52 specifically includes the following sub-steps:
S521, inputting training data and default initial parameters of the system into the neuromorphic computing method; the training data are operation state identification data and operation state actual measurement data of a similar operation complex.
S522, calculating a cross-validation comparison result X of the operation state identification data and the operation state actual measurement data through the neuromorphic computing method.
S523, selecting the actual operation condition of the operation complex as the training target Y.
S524, forming an error function F(X, Y) from the cross-validation comparison result X and the training target Y, and calculating the output error.
S525, judging whether the output error is smaller than a set threshold; if so, ending the training and entering step S53; otherwise, selecting the parameter optimization direction and optimization rate of the neuromorphic computing method according to the output error, adjusting the initial parameters, and returning to step S521.
The beneficial effects of the further scheme are as follows: training the neuromorphic computing model brings its error within an acceptable range, forming a computing system adapted to this scenario and yielding a more accurate cross-validation comparison result for the operation state of the operation complex.
Drawings
Fig. 1 is a block diagram of a system for monitoring an operation state of an operating complex based on cross validation according to an embodiment of the present invention.
Fig. 2 is a flowchart of a cross-validation-based operation status monitoring method for an operating complex according to a second embodiment of the present invention.
Fig. 3 is a flowchart illustrating the substeps of step S3 according to the second embodiment of the present invention.
Fig. 4 is a flowchart of a neuromorphic comparison method according to a second embodiment of the present invention.
Fig. 5 is a flowchart illustrating the substeps of step S52 according to the second embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It is to be understood that the embodiments shown and described in the drawings are merely exemplary and are intended to illustrate the principles and spirit of the invention, not to limit the scope of the invention.
Embodiment one:
A cross-validation-based operation complex operation state monitoring system, as shown in Fig. 1, comprising:
a video acquisition device for collecting operation video data F1 of the operation complex;
a neural network identification device for identifying and compressing the operation video data F1 by an operation state identification method based on a neuromorphic computing network, obtaining operation state identification data F2;
a data acquisition device for collecting operation state actual measurement data F3 of the operation complex;
a cross-validation comparison device for analyzing and comparing the operation state identification data F2 with the operation state actual measurement data F3 to obtain a cross-validation comparison result for the operation state of the operation complex.
The video acquisition equipment, the neural network identification device and the cross-validation comparison device are connected in communication in sequence, and the data acquisition equipment is connected in communication with the cross-validation comparison device.
The video acquisition equipment comprises an operation monitoring camera group, a crowd flow monitoring camera group and an automobile flow monitoring camera group of operation complex equipment; the operation monitoring camera group of the operation complex equipment comprises a plurality of cameras arranged in the operation complex workplace; the crowd flow monitoring camera group comprises a plurality of cameras arranged in a working place and a living place of an operation complex; the automobile flow monitoring camera group comprises a plurality of cameras arranged at the gate and the road junction of the operation complex.
The data acquisition equipment comprises a water, electricity and gas acquisition device, an attendance machine and an automobile access control device; the water, electricity and gas acquisition device comprises a water meter collector, an electricity meter collector, a gas meter collector, a temperature sensor, a noise sensor and a brightness sensor; the temperature sensor, the noise sensor and the brightness sensor are all arranged in each workplace of the operation complex; the attendance machine is arranged at the door of each workplace of the operation complex; the automobile access control device is arranged at the gate of the operation complex.
In the embodiment of the invention, the neural network identification device and the cross validation comparison device are both computers.
The operation video data F1 comprises equipment operation video data, crowd flow video data and automobile flow video data; the operation state identification data F2 comprises water, electricity and gas identification data, crowd flow identification data and automobile flow identification data; the operation state actual measurement data F3 comprises water, electricity and gas actual measurement data, employee attendance data and automobile access card reading data; the water, electricity and gas actual measurement data comprises water meter data, electricity meter data, gas meter data, temperature data, noise data and brightness data.
Embodiment two:
the embodiment of the invention provides a method for monitoring the operation state of an operation complex based on cross validation, which comprises the following steps S1-S5 as shown in FIG. 2:
and S1, installing and laying out the video acquisition equipment and the data acquisition equipment.
Installing a camera, a temperature sensor, a noise sensor and a brightness sensor for monitoring the operation of the operation complex equipment in each operation complex workplace, and installing a water meter collector, an electric meter collector and a gas meter collector on each water meter, electric meter and gas meter; installing cameras for monitoring the flow of people on each work place and each living place of an operation complex, and arranging an attendance machine at the door of each work place of the operation complex; the camera for monitoring the automobile flow is arranged at the gate and each intersection of the operation complex, and the automobile entrance guard is arranged at the gate of the operation complex.
S2, collecting operation video data F1 of the operation complex with the video acquisition equipment.
S3, in the neural network identification device, identifying and compressing the operation video data F1 by an operation state identification method based on a neuromorphic computing network to obtain operation state identification data F2.
In the embodiment of the invention, the neuromorphic computing network in step S3 is a generalized regression neural network; as shown in Fig. 3, step S3 includes the following substeps S31-S33:
and S31, extracting dynamic texture features based on the optical flow field.
And S32, multi-directional motion segmentation based on the dynamic texture and level set algorithm.
S33, GRNN (Generalized Regression Neural Network) based operation state Regression estimation.
Step S31 specifically includes the following substeps:
S311, calculating the optical flow field: a Brox multi-resolution optical flow estimation algorithm is applied to two consecutive frame images of the operation video data F1 to obtain the optical flow field (E, φ), where E denotes the optical flow intensity and φ denotes the direction angle of the optical flow field.
S312, perspective normalization of the optical flow intensity: the optical flow intensity E at every point of the optical flow field is normalized by a perspective quadrilateral method, yielding the normalized optical flow field (E′, φ); this normalized optical flow field constitutes the dynamic texture features.
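As a concrete illustration of S311-S312, the sketch below computes a dense optical flow field and splits it into intensity E and direction angle φ. OpenCV's Farneback estimator is used as a stand-in for the Brox multi-resolution algorithm (which ships only with OpenCV's CUDA contrib module), and the perspective normalization is reduced to a precomputed per-row scale factor; both simplifications are assumptions, not the patent's exact procedure.

```python
import cv2
import numpy as np

def optical_flow_field(prev_gray, curr_gray):
    """S311 sketch: dense optical flow between two consecutive grayscale
    frames; Farneback stands in for the Brox multi-resolution estimator."""
    # positional args: flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 4, 15, 3, 5, 1.2, 0)
    E, phi = cv2.cartToPolar(flow[..., 0], flow[..., 1])  # intensity, angle (radians)
    return E, phi

def perspective_normalize(E, row_scale):
    """S312 sketch: compensate perspective foreshortening with a per-row
    factor, assumed precomputed from a calibration quadrilateral on the
    ground plane."""
    return E * row_scale[:, np.newaxis]
```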
Step S32 specifically includes the following substeps:
S321, dividing direction angle intervals: based on prior knowledge of the motion directions of equipment, crowds or vehicles, the direction angle φ of the optical flow field is divided into n mutually disjoint direction angle intervals Φ_i, where i = 1, 2, ..., n.
S322, obtaining sub optical flow fields of the different direction angle intervals: the normalized optical flow field (E′, φ) is scanned point by point, and each point is assigned to the direction angle interval Φ_i that contains its direction angle; after the scan, the normalized optical flow field is decomposed into sub optical flow fields, one per direction angle interval.
S323, sub optical flow field segmentation: in each sub optical flow field, the normalized intensity E′ is segmented by a Renyi entropy threshold method, giving the motion segmentation result S_i of that sub optical flow field.
S324, obtaining the initial segmentation result: the S_i are merged and labelled according to their direction angle intervals, giving the initial segmentation result S of equipment, crowds or vehicles with different motion directions.
S325, obtaining the ROI (Region Of Interest) set: the initial segmentation result S is optimized with a variational level set algorithm that requires no re-initialization of the level set function, yielding the ROI set of equipment, crowds or vehicles corresponding to the different motion directions, namely {ROI(i, j) | i = 1, 2, ..., n; j = 1, 2, ..., m(i)}, where m(i) is the number of ROIs belonging to direction angle interval Φ_i.
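A minimal sketch of the S321-S323 decomposition follows; the per-interval threshold is a crude mean-based stand-in for the Renyi entropy threshold, and the S325 level set refinement is omitted, so this is only the skeleton of the multi-directional segmentation.

```python
import numpy as np

def segment_by_direction(E_norm, phi, n_intervals=8):
    """S321-S323 sketch: split the normalized flow field into direction
    angle intervals and threshold each sub-field's intensity."""
    # S321: n mutually disjoint direction angle intervals covering [0, 2*pi)
    edges = np.linspace(0.0, 2.0 * np.pi, n_intervals + 1)
    # S322: assign every pixel to the interval containing its direction angle
    idx = np.clip(np.digitize(phi, edges) - 1, 0, n_intervals - 1)
    masks = []
    for i in range(n_intervals):
        sub_E = np.where(idx == i, E_norm, 0.0)   # sub optical flow field
        moving = sub_E[sub_E > 0]
        # S323: per-sub-field intensity threshold (mean of moving pixels as a
        # crude stand-in for the Renyi entropy threshold)
        t = moving.mean() if moving.size else np.inf
        masks.append(sub_E > t)                   # motion segmentation S_i
    return masks
```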
Step S33 specifically includes the following substeps:
S331, ROI feature extraction: for each ROI, the features closely related to the operation state are obtained, namely the ROI area, the ROI perimeter, the ROI perimeter-to-area ratio, the number of edge points inside the ROI, the ROI fractal dimension and the ROI statistical topographic features; these are used for the regression analysis of the operation state.
S332, obtaining a sample set for network training: a number of representative ROIs with different equipment, crowd or vehicle densities and occlusion degrees are selected from the scene as the sample set for network training.
S333, building and training the GRNN model: with the ROI area, the ROI perimeter, the ROI perimeter-to-area ratio, the number of edge points inside the ROI, the ROI fractal dimension and the ROI statistical topographic features as network inputs, and the number of devices, pedestrians or vehicles in the ROI as network output, a GRNN is constructed; given an output error limit, the GRNN is trained by cross-validation over the sample set, and the optimal SPREAD value is obtained by cyclic comparison.
The statistical topographic features of the ROI comprise, with respect to a variable horizontal plane z = α, the number of mountain-peak entities together with its statistical mean and variance, and the number of mountain-valley entities together with its statistical mean and variance.
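One plausible reading of these topographic features (the exact definition is an assumption on our part): treat the ROI's normalized intensity as terrain, sweep the horizontal plane z = α over a range of levels, count the connected components above the plane (peaks) and below it (valleys) at each level, and summarize the counts by their mean and variance:

```python
import numpy as np
from scipy import ndimage

def topographic_features(E_roi, levels=10):
    """Sketch of the ROI statistical topographic features: peak and valley
    entity counts versus a sweeping plane z = alpha, summarized by their
    means and variances across levels."""
    alphas = np.linspace(E_roi.min(), E_roi.max(), levels + 2)[1:-1]
    peaks, valleys = [], []
    for a in alphas:
        _, n_peak = ndimage.label(E_roi > a)      # entities above the plane
        _, n_valley = ndimage.label(E_roi < a)    # entities below the plane
        peaks.append(n_peak)
        valleys.append(n_valley)
    peaks, valleys = np.array(peaks), np.array(valleys)
    return np.array([peaks.mean(), peaks.var(), valleys.mean(), valleys.var()])
```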
S334, obtaining an operation state estimator: the GRNN after training can be used as an estimator of the operating state in a single ROI, the input of the operating state estimator is the ROI characteristics, and the output is the number of devices, pedestrians or vehicles in the ROI.
S335, outputting the operation state prediction result: extracting the characteristics of each ROI in the ROI set, sequentially inputting the characteristics into the operation state estimator to obtain operation state prediction results in each ROI, merging the operation state prediction results of the ROIs belonging to the same direction angle interval to obtain operation state prediction results in each direction angle interval, and outputting the operation state prediction results according to different direction angle intervals to obtain operation state identification data F2.
In the embodiment of the invention, the operation video data F1 comprise device operation video data, crowd flow video data and automobile flow video data, and the operation state identification data F2 comprise water, electricity and gas identification data, crowd flow identification data and automobile flow identification data. Accordingly, if the operation video data F1 are device operation video data, the operation state identification data F2 obtained through step S3 are water, electricity and gas identification data; if F1 are crowd flow video data, F2 are crowd flow identification data; and if F1 are automobile flow video data, F2 are automobile flow identification data.
S4, collecting operation state actual measurement data F3 of the operation complex with the data acquisition equipment.
In the embodiment of the invention, the operation state actual measurement data F3 comprise water, electricity and gas actual measurement data, staff attendance data and automobile access card reading data.
S5, analyzing and comparing the operation state identification data F2 and the operation state actual measurement data F3 in the cross-validation comparison device to obtain a cross-validation comparison result for the operation state of the operation complex.
In the embodiment of the invention, the methods for analyzing and comparing the operation state identification data F2 and the operation state actual measurement data F3 comprise a quasi-real-time data difference comparison method, an average statistical data difference comparison method, a time difference comparison method and a neuromorphic comparison method.
The quasi-real-time data difference comparison method specifically comprises the following steps: the operation status identification data F2 and the operation status actual measurement data F3 are compared in near real time at intervals of seconds, and the comparison is made as to whether the two are the same.
The average statistical data difference comparison method is specifically: over a time interval T1, the operation state identification data F2 and the operation state actual measurement data F3 are accumulated, their respective means over T1 are calculated, and the two means are compared for equality; the time interval T1 is set to one minute, one hour, one day or one week.
The time difference comparison method is specifically: the real-time operation state identification data F2 is compared with the operation state actual measurement data F3 recorded a certain time interval T2 later or earlier, and the two are checked for equality; the time interval T2 is set to one minute, one hour, one day or one week.
The latest operation state of the operation complex can be obtained in time by adopting a quasi-real-time data difference comparison method, the operation state of the operation complex in a period of time can be obtained by adopting an average statistical data difference comparison method, and the fluctuation condition of the operation state of the operation complex in a period of time can be obtained by adopting a time difference comparison method.
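The three numeric comparison methods reduce to a few lines each. In the sketch below, the streams are compared sample by sample, by interval means, and with a time shift; the relative tolerance is an assumption, since the patent only asks whether the two values "are the same".

```python
import numpy as np

def quasi_real_time_diff(f2, f3, tol=0.05):
    """Second-by-second comparison of identification vs. measured data."""
    return np.abs(f2 - f3) <= tol * np.maximum(np.abs(f3), 1e-9)

def mean_diff(f2, f3, window, tol=0.05):
    """Average statistical comparison: means over windows of T1 samples."""
    n = min(len(f2), len(f3)) // window * window
    m2 = f2[:n].reshape(-1, window).mean(axis=1)
    m3 = f3[:n].reshape(-1, window).mean(axis=1)
    return np.abs(m2 - m3) <= tol * np.maximum(np.abs(m3), 1e-9)

def time_shift_diff(f2, f3, shift, tol=0.05):
    """Time difference comparison: F2 against F3 recorded `shift` samples later."""
    return np.abs(f2[:-shift] - f3[shift:]) <= tol * np.maximum(np.abs(f3[shift:]), 1e-9)
```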
As shown in Fig. 4, the neuromorphic comparison method specifically includes the following substeps S51-S53:
S51, performing time-compression preprocessing on the operation state identification data F2 and the operation state actual measurement data F3, compressing the time interval from seconds up to days.
S52, training the neuromorphic computing method in the cross-validation comparison device.
The neuromorphic computing methods include memristor network algorithms (memristor neuromorphic computing methods and the like), shallow-structure network algorithms (convolutional neural network algorithms, BP neural network algorithms, recurrent neural network algorithms and the like) and deep-structure network algorithms (deep-structure neuromorphic computing methods and the like). In the embodiment of the invention, a BP neural network algorithm is used to process the data.
As shown in Fig. 5, step S52 specifically includes the following substeps S521-S525:
S521, inputting training data and default initial parameters of the system into the neuromorphic computing method; the training data are operation state identification data and operation state actual measurement data of a similar operation complex.
S522, calculating a cross-validation comparison result X of the operation state identification data and the operation state actual measurement data through the neuromorphic computing method.
S523, selecting the actual operation condition of the operation complex as the training target Y.
S524, forming an error function F(X, Y) from the cross-validation comparison result X and the training target Y, and calculating the output error.
S525, judging whether the output error is smaller than a set threshold; if so, ending the training and entering step S53; otherwise, selecting the parameter optimization direction and optimization rate of the neuromorphic computing method according to the output error, adjusting the initial parameters, and returning to step S521.
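A minimal sketch of the S521-S525 loop with a BP network follows; the one-hidden-layer architecture, the fixed learning rate and the error threshold are assumptions standing in for the unspecified "parameter optimization direction and rate".

```python
import numpy as np

def train_bp_comparator(F2, F3, Y, hidden=16, lr=0.05, threshold=1e-3, epochs=5000):
    """S521-S525 sketch: train a one-hidden-layer BP network mapping the
    pair (F2, F3) to the actual operation condition Y, stopping once the
    output error falls below the set threshold. F2 and F3 are (N, d)
    arrays; Y is (N, 1)."""
    rng = np.random.default_rng(0)
    X = np.hstack([F2, F3])                          # S521: training inputs
    W1 = rng.normal(0.0, 0.1, (X.shape[1], hidden))  # default initial parameters
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, (hidden, 1))
    b2 = np.zeros(1)
    mse = np.inf
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                     # hidden layer
        Xout = H @ W2 + b2                           # S522: comparison result X
        err = Xout - Y                               # S524: error function F(X, Y)
        mse = float((err ** 2).mean())
        if mse < threshold:                          # S525: error below threshold
            break
        g_out = 2.0 * err / len(X)                   # S525: optimization direction
        g_W2, g_b2 = H.T @ g_out, g_out.sum(axis=0)
        g_h = (g_out @ W2.T) * (1.0 - H ** 2)        # backprop through tanh
        g_W1, g_b1 = X.T @ g_h, g_h.sum(axis=0)
        W2 -= lr * g_W2; b2 -= lr * g_b2             # adjust the parameters
        W1 -= lr * g_W1; b1 -= lr * g_b1
    return (W1, b1, W2, b2), mse
```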
S53, processing the operation state identification data F2 and the operation state actual measurement data F3 with the trained neuromorphic computing method in the cross-validation comparison device to obtain a cross-validation comparison result for the operation state of the operation complex.
The managers of the operation complex can take corresponding follow-up action depending on whether the cross-validation comparison results agree. If they agree, the results serve as data support for banks and financial institutions lending to the operation complex, helping external banks, lending financial institutions and other interested parties grasp the state of the target operation complex and make well-informed decisions; if they differ, the relevant equipment, personnel or vehicles in the operation complex can be inspected or overhauled according to the specific deviating data.
It will be appreciated by those of ordinary skill in the art that the embodiments described here are intended to help the reader understand the principles of the invention, and that the scope of the invention is not limited to these specifically recited embodiments and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from its spirit, and such changes and combinations remain within the scope of the invention.

Claims (4)

1. A cross-validation-based method for monitoring the operation state of an operation complex, implemented on an operation complex operation state monitoring system, the monitoring system comprising:
the video acquisition equipment is used for acquiring operation video data F1 of an operation complex;
the neural network identification device is used for identifying and compressing the operation video data F1 by adopting an operation state identification method based on a neuromorphic computing network to obtain operation state identification data F2;
the data acquisition equipment is used for acquiring operation state actual measurement data F3 of an operation complex;
the cross validation comparison device is used for analyzing and comparing the operation state identification data F2 and the operation state actual measurement data F3 to obtain a cross validation comparison result of the operation state of the operation complex;
the video acquisition equipment, the neural network identification device and the cross validation comparison device are sequentially in communication connection, and the data acquisition equipment is in communication connection with the cross validation comparison device;
the video acquisition equipment comprises an operation monitoring camera group, a crowd flow monitoring camera group and an automobile flow monitoring camera group of operation complex equipment; the operation monitoring camera group of the operation complex equipment comprises a plurality of cameras arranged in the operation complex workplace; the crowd flow monitoring camera group comprises a plurality of cameras arranged in a working place and a living place of an operation complex; the automobile flow monitoring camera group comprises a plurality of cameras arranged at a gate and a road junction of an operation complex;
the data acquisition equipment comprises a water, electricity and gas acquisition device, an attendance machine and an automobile entrance guard; the water, electricity and gas acquisition device comprises a water meter collector, an electricity meter collector, a gas meter collector, a temperature sensor, a noise sensor and a brightness sensor; the temperature sensor, the noise sensor and the brightness sensor are all arranged in each workplace of an operation complex; the attendance machine is arranged at the door of each workplace of the operation complex; the automobile access control device is arranged on an operation complex gate;
the operation video data F1 comprises equipment operation video data, crowd flow video data and automobile flow video data; the operation state identification data F2 comprises water, electricity and gas identification data, crowd flow identification data and automobile flow identification data; the operation state actual measurement data F3 comprises water, electricity and gas actual measurement data, staff attendance data and automobile access card reading data; the water, electricity and gas actual measurement data comprises water meter data, electricity meter data, gas meter data, temperature data, noise data and brightness data;
the operation state monitoring method of the operation complex is characterized by comprising the following steps:
S1, installing and laying out the video acquisition equipment and the data acquisition equipment;
S2, collecting operation video data F1 of the operation complex by the video acquisition equipment;
S3, in the neural network identification device, identifying and compressing the operation video data F1 by adopting an operation state identification method based on a neuromorphic computing network to obtain operation state identification data F2;
S4, collecting operation state actual measurement data F3 of the operation complex by the data acquisition equipment;
S5, analyzing and comparing the operation state identification data F2 and the operation state actual measurement data F3 in the cross-validation comparison device to obtain a cross-validation comparison result of the operation state of the operation complex;
the methods for analyzing and comparing the operation state identification data F2 and the operation state actual measurement data F3 in the step S5 include a quasi-real-time data difference comparison method, an average statistical data difference comparison method, a time difference comparison method and a neuromorphic comparison method;
the quasi-real-time data difference comparison method specifically comprises the following steps: comparing the operation state identification data F2 with the operation state actual measurement data F3 in a quasi-real-time manner at a time interval of seconds, and comparing whether the two data are the same or not;
the average statistical data difference ratioThe method comprises the following specific steps: at a time interval T1Counting the operation state identification data F2 and the operation state actual measurement data F3, and respectively calculating F2 and F3 at the time interval T1Mean value of
Figure FDA0003196829690000023
And
Figure FDA0003196829690000022
comparison
Figure FDA0003196829690000021
And
Figure FDA0003196829690000024
whether they are the same; time interval T1Set to one minute, one hour, one day, or one week;
the time difference comparison method specifically comprises the following steps: identifying the real-time operation status F2 and a certain time interval T2Comparing the later or earlier actual measurement data F3 of the operation state, and comparing whether the two data are the same; time interval T2Set to one minute, one hour, one day, or one week;
the nerve morphology comparison method specifically comprises the following steps:
s51, performing time compression pretreatment on the operation state identification data F2 and the operation state actual measurement data F3, and compressing the time interval from seconds to days;
s52, training a nerve morphology calculation method in a cross validation comparison device;
and S53, processing the operation state identification data F2 and the operation state actual measurement data F3 by adopting a neural form calculation method in the cross validation comparison device to obtain a cross validation comparison result of the operation state of the operation complex.
2. The operation complex operation state monitoring method according to claim 1, wherein the step S1 is specifically:
installing cameras for monitoring equipment operation, together with a temperature sensor, a noise sensor and a brightness sensor, in each workplace of the operation complex, and installing a water meter collector, an electricity meter collector and a gas meter collector on each water meter, electricity meter and gas meter; installing cameras for monitoring crowd flow in each working and living area of the operation complex, and arranging an attendance machine at the door of each workplace of the operation complex; installing cameras for monitoring automobile flow at the gate and each road junction of the operation complex, and arranging the automobile access control device at the gate of the operation complex.
3. The operation complex operation state monitoring method according to claim 1, wherein the neuromorphic computing network of the step S3 is a generalized regression neural network, and the step S3 includes the following sub-steps:
S31, extracting dynamic texture features based on the optical flow field;
S32, multi-directional motion segmentation based on the dynamic texture and level set algorithm;
S33, regression estimation of the operation state based on GRNN;
the step S31 specifically includes the following sub-steps:
S311, calculating the optical flow field: adopting a Brox multi-resolution optical flow estimation algorithm to obtain the optical flow field (E, φ) of two consecutive frame images in the operation video data F1, wherein E represents the optical flow intensity and φ represents the direction angle of the optical flow field;
S312, perspective normalization of the optical flow intensity: normalizing the optical flow intensity E of every point in the optical flow field by a perspective quadrilateral method to obtain the normalized optical flow field (E′, φ), namely the dynamic texture features;
the step S32 specifically includes the following sub-steps:
s321, dividing direction angle regions: determining the direction angle of an optical flow field based on a priori knowledge of the direction of motion of a device, a crowd, or a vehicle
Figure FDA0003196829690000035
Divided into n mutually disjoint directional angle intervals phiiWherein i is 1, 2.. times.n;
s322, obtaining the sub optical flow field with different direction angle intervals: point-by-point scanning normalization optical flow field
Figure FDA0003196829690000036
According to its direction angle
Figure FDA0003196829690000037
Fall into the corresponding direction angle interval phiiAfter the scanning is finished, the scanning device,
Figure FDA0003196829690000038
is decomposed into sub-optical flow fields with different direction angle intervals
Figure FDA0003196829690000039
S323, sub-optical flow field segmentation: in each sub-optical flow field
Figure FDA00031968296900000310
In the method, a sub-optical flow field is realized by means of a Renyi entropy threshold method
Figure FDA00031968296900000311
To obtain a division result S of the movement of the sub-optical flow fieldi
S324, obtaining an initial segmentation result: merging SiAccording to the corresponding direction angleThe different intervals are distinguished and marked, so that an initial segmentation result S of equipment, people or vehicles with different motion directions is obtained;
S325, ROI set acquisition: the initial segmentation result S is refined by a variational level set algorithm that requires no re-initialization of the level set function, producing the ROI set of the equipment, crowds or vehicles for the different motion directions, namely {ROI(i, j) | i = 1, 2, ..., n; j = 1, 2, ..., m(i)}, where m(i) is the number of ROIs belonging to direction angle interval Φ_i;
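A minimal sketch of the decomposition in S321-S323 follows, under stated assumptions: Otsu's threshold stands in for the claimed Rényi entropy threshold, and the variational level set refinement of S325 is omitted, so this approximates only the direction-angle decomposition faithfully.

```python
# Sketch of step S32: split the normalized flow into n direction-angle
# sub-fields and segment each one.
import cv2
import numpy as np

def direction_subfields(E_norm: np.ndarray, theta: np.ndarray, n: int = 8):
    """Decompose (E~, theta) into n disjoint direction angle intervals Phi_i
    and return one binary segmentation mask S_i per interval."""
    bin_idx = np.floor(theta / (2 * np.pi / n)).astype(int) % n
    masks = []
    for i in range(n):
        sub_E = np.where(bin_idx == i, E_norm, 0.0)  # sub-optical flow field E~_i
        sub_u8 = cv2.normalize(sub_E, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        # Otsu substitutes for the claimed Renyi entropy threshold.
        _, seg = cv2.threshold(sub_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        masks.append(seg)  # segmentation result S_i
    return masks  # label each mask with its interval index to assemble S and the ROI set
```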
step S33 specifically comprises the following sub-steps:
S331, ROI feature extraction: the ROI area, the ROI perimeter, the ROI perimeter-to-area ratio, the number of edge points inside the ROI, the ROI fractal dimension and the ROI statistical topographic features, all closely related to the operation state, are extracted and used for the regression analysis of the operation state;
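A sketch of the S331 features follows; the claim does not fix the estimators, so the Canny edge count and the box-counting fractal dimension below are assumed choices.

```python
# Sketch of step S331: a few of the claimed ROI features.
import cv2
import numpy as np

def roi_features(gray: np.ndarray, mask: np.ndarray) -> dict:
    """gray: grayscale frame; mask: uint8 0/255 image of one ROI (assumed non-empty)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    c = max(contours, key=cv2.contourArea)
    area = max(cv2.contourArea(c), 1.0)
    perimeter = cv2.arcLength(c, True)
    inner_edges = cv2.Canny(gray, 50, 150) & mask  # image edge points inside the ROI
    return {"area": area,
            "perimeter": perimeter,
            "perimeter_area_ratio": perimeter / area,
            "edge_points": int(np.count_nonzero(inner_edges)),
            "fractal_dim": box_counting_dim(mask > 0)}

def box_counting_dim(binary: np.ndarray) -> float:
    """Box-counting estimate of the fractal dimension of a binary region."""
    sizes = np.array([2, 4, 8, 16, 32])
    counts = []
    for s in sizes:
        h, w = binary.shape
        grid = binary[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(max(1, int(np.count_nonzero(grid.any(axis=(1, 3))))))
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)  # N(s) ~ (1/s)^D
    return float(slope)
```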
S332, obtaining the sample set for network training: a number of representative ROIs covering different equipment, crowd or vehicle densities and degrees of occlusion in the scene are selected as the sample set for network training;
S333, building and training the GRNN model: the ROI area, ROI perimeter, ROI perimeter-to-area ratio, number of edge points inside the ROI, ROI fractal dimension and ROI statistical topographic features are taken as the network inputs, and the number of devices, pedestrians or vehicles in the ROI as the network output; the GRNN is constructed, an output error limit is given, the network is trained by a cross validation method over the sample set, and the optimal SPREAD value is obtained by cyclic comparison;
the statistical topographic features of the ROI, defined with respect to a variable horizontal plane z = α, comprise the number of mountain (peak) entities together with its statistical mean and variance, and the number of valley entities together with its statistical mean and variance;
S334, obtaining the operation state estimator: the trained GRNN serves as the operation state estimator within a single ROI; the input of the estimator is the ROI features and its output is the number of devices, pedestrians or vehicles in the ROI;
S335, outputting the operation state prediction result: the features of each ROI in the ROI set are extracted and fed in turn to the operation state estimator to obtain the operation state prediction within each ROI; the predictions of ROIs belonging to the same direction angle interval are merged to obtain the prediction for each direction angle interval, and the predictions are output per direction angle interval as the operation state identification data F2.
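Step S335 then reduces to summing per-ROI estimates within each direction angle interval. A sketch reusing the GRNN class above, where roi_features_by_interval (a hypothetical structure) maps interval index i to the feature vectors of its ROIs:

```python
import numpy as np

def predict_F2(estimator, roi_features_by_interval: dict) -> dict:
    """Merge per-ROI counts into one prediction per direction angle interval.

    roi_features_by_interval: {i: list of 1-D feature vectors for ROI(i, j)}
    Returns {i: estimated device/pedestrian/vehicle count in interval Phi_i}.
    """
    return {
        i: float(sum(estimator.predict(np.asarray(f)[None, :])[0] for f in feats))
        for i, feats in roi_features_by_interval.items()
    }
```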
4. The operation complex operation state monitoring method according to claim 1, wherein step S52 specifically comprises the following sub-steps:
S521, inputting the training data and the default initial parameters of the system into the neuromorphic computing method, the training data being operation state identification data and measured operation state data from operation complexes similar to the one monitored;
S522, computing, by the neuromorphic computing method, the cross validation comparison result X between the operation state identification data and the measured operation state data;
S523, selecting the actual operating condition of the operation complex as the training target Y;
S524, forming an error function F(X, Y) from the cross validation comparison result X and the training target Y, and calculating the output error;
and S525, judging whether the output error is smaller than a set threshold: if so, ending the training and proceeding to step S53; otherwise, selecting the parameter optimization direction and optimization rate of the neuromorphic computing method according to the output error, adjusting the initial parameters, and returning to step S521.
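A sketch of this training loop follows. The claim leaves the error function and the choice of optimization direction and rate open, so the mean-squared error and the finite-difference parameter update below are assumptions.

```python
# Sketch of claim 4 (steps S521-S525): iterate until the output error
# falls below the threshold.
import numpy as np

def train_neuromorphic(cross_validate, params, id_data, measured_data, target_Y,
                       threshold=1e-3, rate=0.05, max_iter=200):
    """cross_validate(params, id_data, measured_data) -> comparison result X."""
    for _ in range(max_iter):
        X = cross_validate(params, id_data, measured_data)  # S522: comparison result X
        error = np.mean((X - target_Y) ** 2)                # S524: F(X, Y), MSE assumed
        if error < threshold:                               # S525: done, proceed to S53
            break
        # S525 (else-branch): choose an optimization direction from the output
        # error; a finite-difference gradient is one simple, assumed choice.
        grad = np.zeros_like(params)
        for i in range(params.size):
            p = params.copy()
            p[i] += 1e-4
            Xp = cross_validate(p, id_data, measured_data)
            grad[i] = (np.mean((Xp - target_Y) ** 2) - error) / 1e-4
        params = params - rate * grad                       # adjust, return to S521
    return params
```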
CN201711427712.1A 2017-12-26 2017-12-26 Operation complex operation state monitoring system and method based on cross validation Active CN108171234B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711427712.1A CN108171234B (en) 2017-12-26 2017-12-26 Operation complex operation state monitoring system and method based on cross validation


Publications (2)

Publication Number Publication Date
CN108171234A CN108171234A (en) 2018-06-15
CN108171234B (en) 2021-11-02

Family

ID=62520624

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711427712.1A Active CN108171234B (en) 2017-12-26 2017-12-26 Operation complex operation state monitoring system and method based on cross validation

Country Status (1)

Country Link
CN (1) CN108171234B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102609688A (en) * 2012-02-02 2012-07-25 杭州电子科技大学 Multidirectional moving population flow estimation method on basis of generalized regression neural network
CN103745229A (en) * 2013-12-31 2014-04-23 北京泰乐德信息技术有限公司 Method and system of fault diagnosis of rail transit based on SVM (Support Vector Machine)
CN103824092A (en) * 2014-03-04 2014-05-28 国家电网公司 Image classification method for monitoring state of electric transmission and transformation equipment on line
JP2014192715A (en) * 2013-03-27 2014-10-06 Fujitsu Ltd Video storage and distribution device, system, method and program
CN104463505A (en) * 2014-12-29 2015-03-25 国家电网公司 Operation state on-line monitoring method based on detail data
CN104992264A (en) * 2015-06-01 2015-10-21 康玉龙 Enterprise operation condition real time monitoring system
CN105046560A (en) * 2015-07-13 2015-11-11 江苏秉信成金融信息服务有限公司 Third-party credit supervision and risk assessment system and method
CN106850293A (en) * 2017-01-25 2017-06-13 浙江中都信息技术有限公司 A kind of enterprise security operation centre Bot

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Visual, Real-time Monitoring System for Remote Operation of Electrical Substations; M. R. Bastos et al.; 2010 IEEE/PES Transmission and Distribution Conference and Exposition; 2011-05-05; pp. 417-421 *
Research on the Risk Management Model of the Lianyungang Branch of China Construction Bank; Qian Jun; China Master's Theses Full-text Database, Economics and Management; 2009-01-15; Vol. 2009, No. 01; p. J159-51 *
Data Quality Evaluation of Operation Monitoring (Control) Based on the Analytic Hierarchy Process; Yan Xiaobin et al.; Electric Power ICT; 2013-12-31; Vol. 11, No. 10; pp. 89-92 *
Small Enterprise Loans and Batch Risk Control; Zhang Zengqiang; Inner Mongolia Financial Research; 2011-12-31; No. 05; abstract, sections 1-5 *


Similar Documents

Publication Publication Date Title
CN110263846A (en) The method for diagnosing faults for being excavated and being learnt based on fault data depth
CN108985380B (en) Point switch fault identification method based on cluster integration
CN105631596A (en) Equipment fault diagnosis method based on multidimensional segmentation fitting
CN110728252B (en) Face detection method applied to regional personnel motion trail monitoring
CN107944353B (en) SAR image change detection method based on contour wave BSPP network
CN116167668B (en) BIM-based green energy-saving building construction quality evaluation method and system
CN104156729B (en) A kind of classroom demographic method
CN109657580B (en) Urban rail transit gate traffic control method
CN117172509B (en) Construction project distribution system based on decoration construction progress analysis
CN114518143A (en) Intelligent environment sensing system
CN115096627A (en) Method and system for fault diagnosis and operation and maintenance in manufacturing process of hydraulic forming intelligent equipment
CN110287798B (en) Vector network pedestrian detection method based on feature modularization and context fusion
CN111242028A (en) Remote sensing image ground object segmentation method based on U-Net
CN113084193B (en) In-situ quality comprehensive evaluation method for selective laser melting technology
CN108171234B (en) Operation complex operation state monitoring system and method based on cross validation
CN117422936A (en) Remote sensing image classification method and system
CN116719241A (en) Automatic control method for informationized intelligent gate based on 3D visualization technology
CN111160150A (en) Video monitoring crowd behavior identification method based on depth residual error neural network convolution
Kumawat et al. Time-Variant Satellite Vegetation Classification Enabled by Hybrid Metaheuristic-Based Adaptive Time-Weighted Dynamic Time Warping
CN114219051A (en) Image classification method, classification model training method and device and electronic equipment
CN112287854A (en) Building indoor personnel detection method and system based on deep neural network
CN113255422A (en) Process connection target identification management method and system based on deep learning
CN111832475A (en) Face false detection screening method based on semantic features
CN115150840B (en) Mobile network flow prediction method based on deep learning
CN117636908B (en) Digital mine production management and control system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant