CN110110707A - Artificial intelligence CNN, LSTM neural network dynamic identifying system - Google Patents


Info

Publication number
CN110110707A
CN110110707A (application number CN201910436838.8A)
Authority
CN
China
Prior art keywords
formula
layer
information
follows
neural networks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910436838.8A
Other languages
Chinese (zh)
Inventor
詹志超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Flash Cnc System Integration Co Ltd
Original Assignee
Suzhou Flash Cnc System Integration Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Flash Cnc System Integration Co Ltd filed Critical Suzhou Flash Cnc System Integration Co Ltd
Priority to CN201910436838.8A priority Critical patent/CN110110707A/en
Publication of CN110110707A publication Critical patent/CN110110707A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7834Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using audio features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • G06F16/784Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content the detected or recognised objects being people
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services
    • G06Q50/265Personal security, identity or safety
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/94Hardware or software architectures specially adapted for image or video understanding
    • G06V10/95Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/94Hardware or software architectures specially adapted for image or video understanding
    • G06V10/955Hardware or software architectures specially adapted for image or video understanding using specific electronic processors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70Multimodal biometrics, e.g. combining information from different biometric modalities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/44Event detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Library & Information Science (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Tourism & Hospitality (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computational Linguistics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Computer Security & Cryptography (AREA)
  • Health & Medical Sciences (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an artificial intelligence CNN/LSTM neural network dynamic identification system, comprising a camera terminal (100), a server (200), a convolutional neural network (300), a long short-term memory neural network (400), an artificial intelligence early-warning operating system (500), cloud computing (600), a cloud-database dynamic-blacklist comparison module (700), a target-person identity determination module (800), and a local database module (900). The system extracts features from the video stream of facial, voice, and behavioral characteristic information acquired by the camera terminal, then applies behavior-related processing to the extracted features, including identification of abnormal behaviors such as fighting, theft, elderly falls, crowd gatherings, and intrusion. It provides uninterrupted, around-the-clock monitoring of the area surrounding the camera terminal; users can share information, improving the utilization of information resources and adding safeguards for maintaining public order and stability.

Description

Artificial intelligence CNN, LSTM neural network dynamic identifying system
Technical field
The present invention relates to the field of intelligent security early warning, and in particular to an artificial intelligence CNN/LSTM neural network dynamic identification system for security and stability maintenance.
Background technique
The artificial intelligence CNN/LSTM neural network dynamic identification system effectively integrates advanced camera terminal technology; central processing unit (CPU), graphics processor (GPU), and neural network processor (NPU) technology; heterogeneous/reconfigurable processor technology; convolutional neural network and long short-term memory neural network techniques; AI computer early-warning processing and AI early-warning operation techniques; risk-factor acquisition and collection techniques; big-data analysis; and cloud computing, cloud storage, and cloud database technologies. Applied to the entire dynamic identification system, these establish a wide-ranging, all-round, real-time, accurate, and efficient comprehensive intelligent early-warning system.
As stability-maintenance measures in the western frontier regions become routine, keeping the border areas stable over the long term is a precondition for rapid economic development; the artificial intelligence CNN/LSTM neural network dynamic identification system lays a solid foundation for fundamentally resolving the deep-seated problems that affect long-term peace and stability.
Summary of the invention
The present invention is intended to overcome problems in existing security systems such as the inability to automatically identify dynamic behavior, gaps in surveillance coverage, and untimely prevention. It proposes an artificial intelligence CNN/LSTM neural network dynamic identification system that monitors risk-factor sources through camera terminals, acquires dynamic risk-source information in real time, and performs dynamic detection, dynamic target tracking, feature-signal preprocessing, dynamic feature extraction, and behavior matching and identification. The extracted dynamic feature data are searched and compared against the dynamic-feature templates stored in the database, and identity information is judged according to the degree of similarity: a threshold is set, and when the similarity exceeds this threshold the matched result is output.
To achieve the above aims — acquiring dynamic identification information on risk-factor sources with the artificial intelligence CNN/LSTM neural network dynamic identification system, performing dynamic detection, dynamic target tracking, dynamic-signal preprocessing, dynamic feature extraction, and behavior matching and identification, and then issuing graded early warnings — the invention provides the following technical scheme: an artificial intelligence CNN/LSTM neural network dynamic identification system comprising a camera terminal (100), a server (200), a convolutional neural network (300), a long short-term memory neural network (400), an artificial intelligence early-warning operating system (500), cloud computing (600), a cloud-database dynamic-blacklist comparison module (700), a target-person identity determination module (800), and a local database module (900). Through this system the present invention acquires, compares, analyzes, stores, classifies for alarm, and responds to risk factors, providing uninterrupted around-the-clock monitoring of the camera terminal's surroundings; users can share information, improving the utilization of information resources and adding safeguards for maintaining stability in the border areas.
The present invention provides an artificial intelligence CNN/LSTM neural network dynamic identification system that includes a camera terminal (100) for acquiring a video stream containing facial, voice, and behavioral characteristic information. The terminal automatically detects and tracks faces, voices, and behavioral features in the images, then applies a series of behavior-related processing steps to the detected features, including face recognition, speech recognition, behavioral-feature identification, and abnormal-behavior identification (including fighting, theft, elderly falls, crowd gatherings, intrusion, and the like), and sends the image sequence over a network to the server (200); the network comprises a local area network, the Internet, or a wireless network.
Dynamic signals are carried over a network comprising a local area network, the Internet, or a wireless network; this network transmission delivers the dynamic-signal sequences from the camera terminal to the server.
The server (200) comprises a high-performance central processing unit (CPU), a graphics processor (GPU), a field-programmable gate array (FPGA), a neural network processor (NPU), and a heterogeneous/reconfigurable processor, together with the convolutional neural network module (300), the long short-term memory neural network (400), the artificial intelligence early-warning operating system (500), the cloud computing (600) module, the cloud-database dynamic-blacklist comparison (700) module, the target-person identity determination (800) module, and the local database module (900). The server (200) provides various high-performance computing services to clients in the network system. Under the control of the artificial intelligence early-warning operating system, the server is connected to a network video server, a program-controlled switch, an AI cloud computing server, an AI database server, a GPU cloud server, a web server, a communication server, displays, a hybrid matrix, routers, and modems, providing centralized computation, information publication, and data management services to remote monitoring clients.
The convolutional neural network module (300) comprises an input layer, a hardwired layer H1, a convolutional layer C2, a down-sampling layer S3, a convolutional layer C4, a down-sampling layer S5, a convolutional layer C6, and a Dropout layer, feeding the long short-term memory neural network. The convolutional neural network extracts the temporal and spatial features of the input video data through 3D convolution kernels; because the 3D feature extractor operates in both the spatial and temporal dimensions, it can capture the motion information of the video stream. The 3D convolutional feature extractor builds a 3D convolutional neural network architecture that generates multi-channel information from consecutive video frames; convolution and down-sampling are then performed separately in each channel, and finally the information from all channels is combined to obtain the final feature description. The model is further enhanced by computing high-level motion features as auxiliary outputs; the approach was tested on the TRECVID dataset and compared with several baseline methods. To cope with different environments, multiple different CNN architectures are combined for a joint decision on the recognition result. 3D convolution stacks several consecutive frames into a cube and then applies a 3D convolution kernel within the cube to capture motion-feature information across the temporal and spatial dimensions. Within a convolutional layer the weights of each 3D convolution kernel are shared; one kernel can extract only one kind of motion feature, so multiple kernels are used to extract multiple motion features. Each 3D convolution kernel convolves a cube of 7 consecutive frames, each of size 60 × 40. First, the preprocessed sequence of 7 consecutive 60 × 40 frames is input to the convolutional neural network for training, updating the weights of each layer. The convolutional neural network is initialized starting from convolutional layer C2: the convolution kernels and weights of the convolutional and output layers are randomly initialized from a Gaussian distribution with mean 0 and variance 0.001, the biases are initialized to all zeros, and the convolutional neural network is then trained.
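As a rough illustrative sketch of the stacked-frame 3D convolution and the Gaussian initialization described above (the text fixes the 7-frame, 60 × 40 input and the mean-0, variance-0.001 initialization; the 3 × 5 × 5 kernel size and the use of NumPy are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_kernel(shape, mean=0.0, var=0.001):
    """Gaussian random initialization as described: mean 0, variance 0.001."""
    return rng.normal(mean, np.sqrt(var), size=shape)

def conv3d_valid(cube, kernel):
    """Naive 'valid' 3D convolution with one shared kernel.

    cube:   (T, H, W) stack of consecutive frames
    kernel: (t, h, w) 3D kernel; its weights are shared at every position
    """
    T, H, W = cube.shape
    t, h, w = kernel.shape
    out = np.empty((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(cube[i:i+t, j:j+h, k:k+w] * kernel)
    return out

# 7 consecutive frames, each 60 x 40, stacked into one cube
cube = rng.normal(size=(7, 60, 40))
kernel = init_kernel((3, 5, 5))           # one kernel -> one motion feature
feature_map = conv3d_valid(cube, kernel)  # shrinks in both time and space
```

Because the kernel weights are shared across every position of the cube, one kernel extracts a single kind of motion feature; a real layer would hold many such kernels.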
The LSTM memory cell of the long short-term memory neural network (400) comprises a forget gate, an input gate, and an output gate. LSTM controls the content of the cell state c with two of these gates. The first is the forget gate, which determines how much of the previous cell state c_(t-1) is retained at the current time: the previous output h_(t-1) and the current input x_t pass through a linear transformation followed by a sigmoid activation to give f_t, and f_t is multiplied by c_(t-1) to obtain an intermediate result. The second is the input gate, which determines how much of the current network input x_t is saved into the cell state: h_(t-1) and x_t pass through another linear transformation and sigmoid activation to give i_t, while h_(t-1) and x_t also pass through a linear transformation and tanh activation to give a candidate state, which is multiplied by i_t to obtain a second intermediate result; the two intermediate results are added to obtain c_t. The output gate controls how much of the cell state c_t is output as the current LSTM output value h_t: h_(t-1) and x_t pass through yet another linear transformation and sigmoid activation to give o_t, and o_t is multiplied by tanh(c_t) to obtain h_t. Here c, x, and h are all vectors. The time-series data handled by the LSTM memory cell include dynamic-feature models, handwriting recognition, sequence generation, and behavior analysis, where a sequence means a time-vector sequence. Suppose the time series is:
X = {x1, x2, …, xN}
The time-series model is as follows:
The 128-dimensional vector sequence output by the Dropout layer of the convolutional neural network is input to the long short-term memory neural network, whose output vector is transformed by a softmax function to produce a behavior classification-label vector indicating whether the observed behavior is negative (abnormal) or positive (normal).
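A minimal NumPy sketch of the gated memory-cell update just described, followed by the softmax classification of the final hidden state (only the 128-dimensional input and the softmax output come from the text; the hidden size of 64, the random weights, and the 7-step sequence are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM memory-cell step: forget gate f, input gate i, output gate o."""
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W["f"] @ z + b["f"])     # how much of c_(t-1) to keep
    i_t = sigmoid(W["i"] @ z + b["i"])     # how much of the input to store
    c_hat = np.tanh(W["c"] @ z + b["c"])   # candidate cell state
    c_t = f_t * c_prev + i_t * c_hat       # new cell state
    o_t = sigmoid(W["o"] @ z + b["o"])     # how much of c_t to emit
    h_t = o_t * np.tanh(c_t)               # new hidden state / output
    return h_t, c_t

n_in, n_hid, n_cls = 128, 64, 2            # 128-dim CNN features, 2 behavior classes
W = {k: rng.normal(0, 0.1, (n_hid, n_hid + n_in)) for k in "fico"}
b = {k: np.zeros(n_hid) for k in "fico"}
W_out = rng.normal(0, 0.1, (n_cls, n_hid))

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x_t in rng.normal(size=(7, n_in)):     # a short sequence of CNN feature vectors
    h, c = lstm_step(x_t, h, c, W, b)
probs = softmax(W_out @ h)                 # behavior classification-label vector
```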
The artificial intelligence early-warning operating system (500) is an AI early-warning operating system developed on the basis of the Linux operating system architecture. The system comprises a brain-like neural network system, a multidimensional human-machine-object collaborative interoperation system, a public-safety intelligent monitoring, early-warning, and prevention-control system, an autonomous unmanned servo system, and an integrated space-ground information network platform system. It is the computer program that manages and controls the computer's hardware, software, and data resources. It serves as the interface through which early-warning systems at all levels and Internet-plus distributed early-warning police kiosks communicate; as the interface between cloud computing, cloud storage, and cloud databases on one side and the artificial intelligence early-warning system and police kiosks on the other; and as the interface to other software. The multidimensional human-machine-object collaborative interoperation system provides communication interfaces for mobile devices and smart televisions, and the operating system provides human-machine interface support for other application software. Its components include the brain-like neural network system, the multidimensional human-machine-object collaborative interoperation system, the public-safety intelligent monitoring, early-warning, and prevention-control system, the autonomous unmanned servo system, the integrated space-ground network information platform system, the intelligent Internet-of-Things and risk-factor data acquisition system, and the risk-factor management system. Subsystems of the artificial intelligence early-warning operating system (500) include dynamic identification, machine vision, actuators, behavior cognition, the file system, process management, inter-process communication, memory management, network communication, security mechanisms, drivers, and the user interface.
Cloud computing (600) is designed on the open-source Hadoop framework and exploits cluster advantages for high-speed computing and storage. Cloud computing (600) includes infrastructure-as-a-service, platform-as-a-service, and software-as-a-service layers, and runs the risk-factor collection, risk-factor reasoning, and risk-factor evaluation modules on distributed computers. Over the network, a huge computational processing program is automatically split into numerous smaller subprograms, which are handed to a large system composed of multiple servers to search, compare, and analyze massive data, perform hierarchical reasoning and early-warning value assessment, and then return the processing results to the user and store them on the server racks.
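The split-into-subprograms idea can be sketched in miniature with a thread pool standing in for the server cluster (the chunking scheme, worker count, and equality-based matching are illustrative assumptions, not the patent's mechanism):

```python
from concurrent.futures import ThreadPoolExecutor

def compare_chunk(chunk, query):
    """Subprogram: scan one shard of the records for matches with the query."""
    return [rec for rec in chunk if rec == query]

def distributed_search(records, query, n_workers=4):
    """Split one large search into smaller subprograms, run them in
    parallel, and merge the partial results back into one answer."""
    size = max(1, len(records) // n_workers)
    chunks = [records[i:i + size] for i in range(0, len(records), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(compare_chunk, chunks, [query] * len(chunks))
    hits = []
    for part in partials:   # Executor.map preserves the chunk order
        hits.extend(part)
    return hits
```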
The cloud-database dynamic-blacklist comparison (700) module works against the cloud database, which comprises an original dynamic-information database, an original image-feature information database, a real-time risk-factor image-acquisition database, a real-time risk-factor dynamic-information database, a risk-factor collection database, a risk-factor reasoning database, a risk-factor assessment database, a risk-factor response database, a risk-factor management-evaluation database, a real-time judgment-basis database, a judgment-rule database, and an accident-case database. The cloud database serves the cluster application of the cloud computing (600) system: application software aggregates the distributed file systems for collaborative operation, providing data storage and business access for users. An online data storage module is provided that stores a facial-image blacklist, a dynamic-feature-information blacklist, a biological-information blacklist, and a voice-information blacklist. The acquired facial images, dynamic feature information, biological information, and voice information are compared against the corresponding blacklists in the storage module; if the similarity reaches a preset early-warning value, the early-warning system generates warning information in time, performs risk-factor reasoning and assessment on it, generates a graded warning message, and feeds it back to the next-higher early-warning system for risk-management evaluation.
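A toy sketch of the blacklist comparison: an acquired feature vector is compared against stored templates and an alert fires only when similarity reaches the preset early-warning value (cosine similarity and the 0.9 default threshold are illustrative assumptions; the patent does not specify the similarity measure):

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_blacklist(feature, blacklist, threshold=0.9):
    """Return (entry_id, similarity) for the best blacklist match if its
    similarity clears the preset early-warning threshold, else None."""
    best_id, best_sim = None, -1.0
    for entry_id, template in blacklist.items():
        sim = cosine_similarity(feature, template)
        if sim > best_sim:
            best_id, best_sim = entry_id, sim
    return (best_id, best_sim) if best_sim >= threshold else None
```

In the system described here, a hit would trigger the generation of graded warning information rather than simply returning the match.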
The target-person identity determination (800) module processes the warning information generated by the cloud-database dynamic-blacklist comparison (700), performs early-warning value assessment, generates graded warning messages, and produces the warning signals fed back to the next-higher early-warning system. It updates information in real time from the data transmitted by the cloud computing (600) and the cloud-database dynamic-blacklist comparison (700), and stores the information data that the artificial intelligence early-warning system (500) generates when consulting the cloud database.
The local database module (900) stores the warning information generated by the same-level artificial intelligence early-warning operating system, the information sent to and feedback received from the next-higher artificial intelligence early-warning operating system, and the information sent to and feedback received from the cloud computing.
In a preferred embodiment, the cloud database system includes a dynamic-identification blacklist.
In a preferred embodiment, the network comprises a local area network, the Internet, or a wireless network.
In a preferred embodiment, the activation function of the convolutional neural network is the ReLU activation function.
In a preferred embodiment, the loss function of the convolutional neural network is the cross-entropy loss function.
In a preferred embodiment, the camera terminal is an AI camera terminal.
In a preferred embodiment, the cloud computing is designed on the open-source Hadoop framework.
In a preferred embodiment, the cloud database operates through an online data storage module, which is itself designed on the open-source Hadoop framework.
In a preferred embodiment, the cloud database is divided into an original dynamic-information database, an original image-feature information database, a real-time risk-factor image-acquisition database, a real-time risk-factor dynamic-information database, a risk-factor collection database, a risk-factor reasoning database, a risk-factor assessment database, a risk-factor response database, a risk-factor management-evaluation database, a real-time judgment-basis database, a judgment-rule database, and an accident-case database.
In a preferred embodiment, the artificial intelligence early-warning operating system is an AI early-warning operating system developed on the basis of the Linux operating system architecture.
In a preferred embodiment, the dynamic feature information includes the acquired feature-spectrum information.
In a preferred embodiment, the original dynamic feature information includes the dynamic blacklist stored in the storage module.
In a preferred embodiment, the server (200) comprises a high-performance central processing unit (CPU), a graphics processor (GPU), a field-programmable gate array (FPGA), a neural network processor (NPU), and a heterogeneous/reconfigurable processor.
In a preferred embodiment, the convolutional and pooling layers perform feature extraction, the fully connected layer performs classification and identification, the activation function is ReLU, and the loss is regularized.
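The ReLU activation and cross-entropy loss named in the preferred embodiments can be written directly (a minimal sketch; the regularization term mentioned alongside ReLU is omitted here):

```python
import numpy as np

def relu(z):
    """ReLU activation, as used in the convolutional layers."""
    return np.maximum(0.0, z)

def cross_entropy(probs, label):
    """Cross-entropy loss for a predicted probability vector and a
    target class index (the softmax output of the classifier)."""
    return -float(np.log(probs[label]))
```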
Detailed description of the invention
Fig. 1 is a structural block diagram of the artificial intelligence CNN/LSTM neural network dynamic identification system: 100, camera terminal; 200, server; 300, convolutional neural network; 400, long short-term memory neural network; 500, artificial intelligence early-warning operating system; 600, cloud computing; 700, cloud-database dynamic-blacklist comparison; 800, target-person identity determination; 900, local database.
Fig. 2 is a schematic diagram of the convolutional neural network structure: input layer, hardwired layer H1, convolutional layer C2, down-sampling layer S3, convolutional layer C4, down-sampling layer S5, convolutional layer C6, Dropout layer.
Fig. 3 is a schematic diagram of the long short-term memory neural network structure.
Specific embodiment
The technical solution of the present invention is described clearly and completely below with reference to the accompanying drawings of the specification.
The present invention provides an artificial intelligence CNN/LSTM neural network dynamic identification system, as shown in Fig. 1. The camera terminal (100) acquires a video stream containing facial, voice, and behavioral characteristic information, automatically detects and tracks faces, voices, and behavioral features in the images, then applies a series of behavior-related processing steps to the detected features, including face recognition, speech recognition, behavioral-feature identification, and abnormal-behavior identification (including fighting, theft, elderly falls, crowd gatherings, intrusion, and the like), and sends the image sequence to the server (200) over a network comprising a local area network, the Internet, or a wireless network; the overall system structure is shown in Fig. 1.
The server (200) comprises a high-performance central processing unit (CPU), a graphics processor (GPU), a field-programmable gate array (FPGA), a neural network processor (NPU), and a heterogeneous/reconfigurable processor, together with the convolutional neural network (300), the long short-term memory neural network (400), the artificial intelligence early-warning operating system (500), the cloud computing (600) module, the cloud-database dynamic-blacklist comparison (700) module, the target-person identity determination (800) module, and the local database module (900). The server (200) provides various high-performance computing services to clients in the network system. Under the control of the artificial intelligence early-warning operating system, the server is connected to a network video server, a program-controlled switch, an AI cloud computing server, an AI database server, a GPU cloud processor, an NPU neural-network cloud processor, a heterogeneous/reconfigurable cloud processor, a web server, a communication server, displays, a hybrid matrix, routers, and modems, providing centralized computation, information publication, and data management services to remote monitoring clients. The neural network processor (NPU) carries out the computation of the convolutional neural network and the long short-term memory neural network, while the heterogeneous/reconfigurable processor coordinates computation among the CPU, GPU, and NPU so that they accelerate one another and work in synchrony.
The convolutional neural network module (300) includes an input layer, hardwired layer H1, convolutional layer C2, down-sampling layer S3, convolutional layer C4, down-sampling layer S5, convolutional layer C6 and a Dropout layer, feeding the long short-term memory neural network. The convolutional neural network uses 3D convolution kernels to extract the temporal and spatial features of the input video data; by operating in both the spatial and temporal dimensions, the 3D feature extractor can capture the motion information of the video stream. The 3D convolution feature extractor builds a 3D convolutional neural network architecture that generates multi-channel information from consecutive video frames, then performs convolution and down-sampling separately in each channel, and finally combines the information of all channels to obtain the final feature description. The model is enhanced with auxiliary outputs computed from high-level motion features, tested on the Trecvid data set and compared against several baseline methods. To cope with different environments, multiple different CNN architectures are combined for comprehensive judgment of the recognition result. 3D convolution stacks multiple consecutive frames into a cube and applies a 3D convolution kernel within the cube to capture motion feature information in the temporal and spatial dimensions. Within a convolutional layer the weights of each 3D convolution kernel are shared; one convolution kernel can extract only one kind of motion feature, so multiple kernels are used to extract multiple motion features. The cube convolved by each 3D convolution kernel consists of 7 consecutive frames, each of size 60 × 40. First, the pre-processed sequence of 7 consecutive frames, each of size 60 × 40, is input into the convolutional neural network for training, updating the weights of each layer.
To initialize convolutional layer C2 of the convolutional neural network, the convolution kernels and weights of the convolutional layers and the output layer are first randomly initialized from a Gaussian distribution with mean 0 and variance 0.001, and the biases are initialized to all zeros. The convolutional neural network is then trained as follows:
A) Input layer: 7 consecutive frames are input, each of size 60 × 40;
B) Hardwired layer H1: the H1 layer is used to generate multi-channel information and to encode prior knowledge. From each frame of the input layer this layer extracts information in five channels: the gray value, the gradient in the x direction, the gradient in the y direction, the optical flow in the x direction and the optical flow in the y direction. The first three values are computed for every frame, while the optical flow in the x and y directions requires two consecutive frames. Since the input layer contains 7 frames, the number of feature maps of H1 is 7 (gray value) + 7 (x-direction gradient) + 7 (y-direction gradient) + 6 (x-direction optical flow) + 6 (y-direction optical flow) = 33, and each feature map is still of size 60 × 40;
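The channel arithmetic of hardwired layer H1 can be checked with a short script. This is an illustrative sketch, not part of the patent's disclosure; the function name is ours.

```python
# Sketch: verify the H1 feature-map count for a 7-frame input.
def h1_feature_maps(frames: int) -> int:
    per_frame_channels = 3       # gray value, x-direction gradient, y-direction gradient
    flow_channels = 2            # optical flow in x and y needs two consecutive frames
    return per_frame_channels * frames + flow_channels * (frames - 1)

print(h1_feature_maps(7))  # 3*7 + 2*6 = 33
```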
C) Convolutional layer C2: the C2 layer is a 3D convolutional layer with a 3D convolution kernel of size 7 × 7 × 3, where 7 × 7 is the spatial size and 3 is the length in the time dimension. The C2 layer convolves each of the five channels of the H1 layer separately, so the number of feature maps is (7-3+1) × 3 + (6-3+1) × 2 = 5 × 3 + 4 × 2 = 23, where the factor 3 covers the three channels of gray value and the gradients in the x and y directions, and the factor 2 covers the optical flow in the x and y directions. Using 2 different convolution kernels, the C2 layer has two groups of feature maps, each containing 23 feature maps, i.e. the total number of feature maps in C2 is 23 × 2, and the size of each feature map is (60-7+1) × (40-7+1) = 54 × 34. The number of trainable parameters of C2 is (7 × 7 × 3 × 5 + 5) × 2 = 740 × 2 = 1480, where the factor 5 covers the information of the 5 channels, the added 5 accounts for the bias terms, and the factor 2 covers the 2 different convolution kernels. The convolution for a 3D convolutional layer of depth 1 is computed as follows:
a_{i,j} = f( Σ_{m} Σ_{n} w_{m,n} · x_{i+m, j+n} + w_b )    (formula one)
In formula one above, x_{i,j} denotes the element in row i, column j of the image, w_{m,n} denotes the weight in row m, column n of the filter, w_b denotes the bias term of the filter, a_{i,j} denotes the element in row i, column j of the feature map, and f denotes the relu activation function;
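The convolution of formula one can be sketched in plain Python. This is a toy illustration under our own names and a small 2 × 2 filter rather than the patent's 7 × 7 × 3 kernel.

```python
def relu(x):
    """ReLU activation f used in formula one."""
    return x if x > 0 else 0.0

def conv2d_valid(image, kernel, bias):
    """a_{i,j} = f( sum_m sum_n w_{m,n} * x_{i+m,j+n} + w_b ): stride 1, no padding."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[relu(sum(kernel[m][n] * image[i + m][j + n]
                      for m in range(kh) for n in range(kw)) + bias)
             for j in range(out_w)]
            for i in range(out_h)]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ker = [[1, 0],
       [0, 1]]                      # toy 2x2 filter
print(conv2d_valid(img, ker, 0.0))  # [[6.0, 8.0], [12.0, 14.0]]
```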
When the stride is 2, the feature map becomes 2 × 2, and the calculation formulas are as follows:
W2 = (W1 - F + 2P)/S + 1    (formula two)
H2 = (H1 - F + 2P)/S + 1    (formula three)
In formulas two and three above, W2 denotes the width of the feature map after convolution, W1 denotes the width of the image before convolution, F denotes the width of the filter, P denotes the amount of zero padding, S denotes the stride, H2 denotes the height of the feature map after convolution, and H1 denotes the height of the image before convolution;
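Formulas two and three can be wrapped in a small helper for checking layer sizes; a sketch with our own function name, using the C2 layer of the description as the example.

```python
def conv_output_size(w1, h1, f, p, s):
    """Formulas two and three: W2 = (W1 - F + 2P)/S + 1, H2 = (H1 - F + 2P)/S + 1."""
    return (w1 - f + 2 * p) // s + 1, (h1 - f + 2 * p) // s + 1

print(conv_output_size(60, 40, 7, 0, 1))  # (54, 34): the C2 feature-map size
```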
For a convolution whose depth is greater than 1, the calculation is as follows:
a_{i,j} = f( Σ_{d=0}^{D-1} Σ_{m=0}^{F-1} Σ_{n=0}^{F-1} w_{d,m,n} · x_{d,i+m,j+n} + w_b )    (formula four)
In formula four above, D denotes the depth, F denotes the size of the filter (width or height, the two being equal), w_{d,m,n} denotes the weight in layer d, row m, column n of the filter, x_{d,i,j} denotes the pixel in layer d, row i, column j of the image, and the other symbols have the same meaning as in formula one;
D) Down-sampling layer S3: the max pooling method is used with a 2 × 2 sampling window, so each feature map has size (54/2) × (34/2) = 27 × 17, and the number of feature maps equals that of the previous layer, still 23 × 2.
The general representation of pooling is as follows:
A^l_k(i, j) = [ Σ_{x=1}^{f} Σ_{y=1}^{f} A^l_k(s0·i + x, s0·j + y)^p ]^{1/p}    (formula five)
In formula five, A^l_k(i, j) denotes the element in row i, column j of the k-th feature map, where the meaning of pixel (i, j) is the same as in the convolutional layer and K is the number of channels of the feature map; f, s0 and p are layer parameters corresponding to the window size, the stride and a pre-specified exponent. As a special case, a unit convolution kernel with size f = 1, stride s0 = 1 and no padding makes the cross-correlation computation in a convolutional layer equivalent to matrix multiplication. When p = 1, Lp pooling averages within the region; as p → ∞, Lp pooling takes the maximum within the region, referred to as max pooling, which retains the background and texture information of the image at the cost of feature map size;
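Max pooling (the p → ∞ case) reduces to taking the maximum of each non-overlapping window. A minimal sketch with our own names and a toy 4 × 4 map:

```python
def max_pool2d(fmap, window):
    """Non-overlapping max pooling: stride equals the window size."""
    rows, cols = len(fmap), len(fmap[0])
    return [[max(fmap[i + di][j + dj] for di in range(window) for dj in range(window))
             for j in range(0, cols - window + 1, window)]
            for i in range(0, rows - window + 1, window)]

fmap = [[1, 3, 2, 0],
        [5, 4, 1, 1],
        [0, 2, 9, 6],
        [7, 8, 3, 2]]
print(max_pool2d(fmap, 2))  # [[5, 2], [8, 9]]
```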
E) Convolutional layer C4: the C4 layer is a 3D convolutional layer with kernel size 7 × 6 × 3, where 7 × 6 is the spatial size and 3 is the time dimension. The number of feature maps is (5-3+1) × 3 + (4-3+1) × 2 = 3 × 3 + 2 × 2 = 13, where the factor 3 covers the three channels of gray value and the gradients in the x and y directions, and the factor 2 covers the optical flow in the x and y directions. Using 6 different convolution kernels, the layer has 6 groups of feature maps, each containing 13 feature maps, i.e. the total number of feature maps in this layer is 13 × 6 = 78. The size of each feature map is (27-7+1) × (17-6+1) = 21 × 12, and the trainable parameters number (7 × 6 × 3 × 5 + 5) × 6 = 3810. The C4 calculation formula is the same as that of C2;
F) Down-sampling layer S5: the S5 down-sampling layer uses the max pooling method with a 3 × 3 sampling window, so each feature map has size (21/3) × (12/3) = 7 × 4, and the number of feature maps equals that of the previous layer, still 13 × 6 = 78. The C6 layer is a 2D convolutional layer with kernel size 7 × 4, 128 feature maps of size 1 × 1, each feature map connected to the 78 feature maps of the S5 layer, with (4 × 7 × 128 + 128) × (13 × 6) = 289536 trainable parameters. The S3 calculation formula is the same as the S5 calculation formula;
G) Convolutional layer C6: this layer convolves only in the spatial dimension with a 7 × 4 kernel, so the output feature maps are reduced to size 1 × 1. It contains 128 feature maps, each fully connected to all 78 (13 × 6) feature maps of the S5 layer; each feature map is therefore a single value, and these values form the final feature vector of 128 dimensions in total. The C6 calculation formula is the same as that of C2;
H) Dropout layer: weights of randomly selected neurons in the network are set to zero. Since a ratio of 0.5 is selected, 50% of the neurons receive zero weights. Through this operation the network becomes less sensitive to small changes in the data, which further increases the accuracy on unseen data. The output of the Dropout layer is still a 1 × 128 matrix, and this length-128 vector output is then input into the long short-term memory neural network (400) for time-series behavior analysis;
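The zeroing step above can be sketched as a random mask over the 1 × 128 feature vector. This is an assumption-laden toy (our names, a fixed seed, a constant stand-in vector), not the patent's implementation.

```python
import random

def dropout_mask(vec, rate=0.5, seed=0):
    """Zero each element with probability `rate` (training-time dropout sketch)."""
    rng = random.Random(seed)
    return [0.0 if rng.random() < rate else v for v in vec]

features = [0.2] * 128                # stand-in for the 1 x 128 output of layer C6
dropped = dropout_mask(features)
print(len(dropped), sum(1 for v in dropped if v == 0.0))  # 128 and roughly 64 zeros
```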
I) The convolutional neural network is weight-initialized and the input data repeats steps (a)~(h): forward propagation produces an output value, and the error between the output value of the convolutional neural network and the target value is computed. When the error is greater than the expected value, the error is passed back into the convolutional neural network and supervised training is performed with the BP back-propagation algorithm: the error between the result and the expected value is computed, then the error is returned layer by layer to compute the error of each layer and update the weights, successively for the Dropout layer, convolutional layer C6, down-sampling layer S5, convolutional layer C4, down-sampling layer S3, convolutional layer C2 and hardwired layer H1, so as to obtain the overall error of the convolutional neural network. The error is then passed into the convolutional neural network again, and the share of the total error each layer should bear is computed. During training, all parameters of the convolutional neural network are changed continuously so that the loss function keeps decreasing; when the error is equal to or less than the expected value, a high-precision convolutional neural network model has been trained and training ends;
J) The acquired, pre-processed cube sequence of 7 consecutive frames is input into the convolutional neural network for testing. After the data processing of steps (a)~(h), the data become a 1 × 128 vector, which is input to the softmax classifier for separation; the softmax classifier maps the signals to be separated onto the corresponding labels. During training, the signal passes through the data processing of the convolutional neural network to obtain a classification result, which is compared with the corresponding label data to compute the relative error; by training a certain number of times, the weights on the convolution windows of the convolutional neural network are continually corrected so that the relative error keeps decreasing and finally converges. The test set is then input into the network for test classification to obtain the classification result label vector; the label at the maximum-value element indicates the class label of the tested motion feature, realizing behavior recognition.
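The softmax mapping from scores to a class label can be sketched as follows; a toy 3-class example under our own names rather than a real 1 × 128 feature vector.

```python
import math

def softmax(scores):
    """Map raw class scores to probabilities; the argmax gives the class label."""
    m = max(scores)                   # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [1.0, 2.0, 3.0]
probs = softmax(scores)
print(probs.index(max(probs)))        # 2: the third class is the predicted label
```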
The memory unit of the long short-term memory neural network (400) LSTM includes a forget gate, an input gate and an output gate. LSTM controls the content of the cell state c with two gates. One is the forget gate, which determines how much of the cell state c_{t-1} of the previous moment is retained at the current moment c_t: the previous output h_{t-1} and the current input x_t pass through a linear transformation plus sigmoid activation to output f_t, and f_t is multiplied by c_{t-1} to obtain an intermediate result. The other is the input gate, which determines how much of the current input x_t of the network is saved into the cell state c_t: h_{t-1} and x_t pass through another linear transformation plus sigmoid activation to output i_t, while h_{t-1} and x_t also pass through a further linear transformation plus tanh activation, and this result multiplied by i_t gives another intermediate result; the two intermediate results are added to obtain c_t. As for the output gate, LSTM uses it to control how much of the cell state c_t is output to the current output value h_t: h_{t-1} and x_t pass through another linear transformation plus sigmoid activation to output o_t, and o_t multiplied by tanh(c_t) gives h_t. Here c, x and h are all vectors. The time-series data of the LSTM memory unit include dynamic feature models, handwriting recognition, sequence generation and behavior analysis; the sequence here refers to a time vector sequence, assumed to be:
X = {x1, x2, ..., xN}
The time-series model is built over this sequence. The length-128 output vector of the Dropout layer of the convolutional neural network is input into the long short-term memory neural network for operation to obtain an output; the output vector is converted by the softmax function and the behavior classification label vector is output, showing whether the behavior is positive or negative;
Forward training of the long short-term memory neural network is carried out as follows:
A) Calculation of the forget gate, formula as follows:
f_t = σ(W_f · [h_{t-1}, x_t] + b_f)    (formula 1)
In formula 1, W_f denotes the weight matrix of the forget gate, [h_{t-1}, x_t] denotes the concatenation of the two vectors into one longer vector, b_f denotes the bias term of the forget gate, and σ denotes the sigmoid function. If the dimension of the input is d_x, the dimension of the hidden layer is d_h, and the dimension of the cell state is d_c (usually d_c = d_h), then the weight matrix W_f of the forget gate has dimension d_c × (d_h + d_x). In fact, the weight matrix W_f is spliced together from two matrices: W_fh, which corresponds to the input item h_{t-1} and has dimension d_c × d_h, and W_fx, which corresponds to the input item x_t and has dimension d_c × d_x. W_f can therefore be written as:
W_f · [h_{t-1}, x_t] = [W_fh, W_fx] · [h_{t-1}; x_t] = W_fh · h_{t-1} + W_fx · x_t
B) Calculation of the input gate, formula as follows:
i_t = σ(W_i · [h_{t-1}, x_t] + b_i)    (formula 2)
In formula 2, W_i denotes the weight matrix of the input gate and b_i denotes its bias term. Next, the candidate cell state c̃_t describing the current input is calculated from the previous output and the current input, formula as follows:
c̃_t = tanh(W_c · [h_{t-1}, x_t] + b_c)    (formula 3)
The cell state c_t at the current moment is then calculated: the previous cell state c_{t-1} is multiplied element-wise by the forget gate f_t, the current candidate cell state c̃_t is multiplied element-wise by the input gate i_t, and the two products are summed, formula as follows:
c_t = f_t ∘ c_{t-1} + i_t ∘ c̃_t    (formula 4)
The symbol ∘ denotes element-wise multiplication. In this way LSTM combines the current memory c̃_t with the long-term memory c_{t-1} to form the new cell state c_t. Thanks to the control of the forget gate, it can save information from long ago; thanks to the control of the input gate, it prevents currently unimportant content from entering memory;
C) Calculation of the output gate, formula as follows:
o_t = σ(W_o · [h_{t-1}, x_t] + b_o)    (formula 5)
The output gate controls the influence of long-term memory on the current output, and the final output of LSTM is jointly determined by the output gate and the cell state, formula as follows:
h_t = o_t ∘ tanh(c_t)    (formula 6)
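Formulas 1–6 together define one LSTM step. The following sketch uses scalar states and our own toy weights to keep the arithmetic visible; a real cell uses the matrix forms above.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x_t, h_prev, c_prev, w, b):
    """One scalar LSTM step following formulas 1-6 of the description."""
    def gate(name, act):
        wh, wx = w[name]
        return act(wh * h_prev + wx * x_t + b[name])
    f_t = gate('f', sigmoid)                  # forget gate, formula 1
    i_t = gate('i', sigmoid)                  # input gate, formula 2
    c_tilde = gate('c', math.tanh)            # candidate state, formula 3
    c_t = f_t * c_prev + i_t * c_tilde        # cell state, formula 4
    o_t = gate('o', sigmoid)                  # output gate, formula 5
    h_t = o_t * math.tanh(c_t)                # output, formula 6
    return h_t, c_t

w = {'f': (0.1, 0.2), 'i': (0.3, 0.1), 'c': (0.2, 0.4), 'o': (0.1, 0.3)}
b = {'f': 0.0, 'i': 0.0, 'c': 0.0, 'o': 0.0}
h, c = lstm_step(1.0, 0.0, 0.0, w, b)
print(round(h, 3), round(c, 3))
```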
D) Back-propagation training of the long short-term memory neural network: LSTM back-propagation computes the error term δ of each neuron. The back-propagation of the LSTM error term proceeds in two directions: one is back-propagation along the time axis, i.e. computing the error term of each moment starting from the current moment t; the other is propagating the error term to the layer above. The steps are as follows:
The activation function of the gates is set to the sigmoid function and the activation function of the output is the tanh function. Their derivatives are respectively:
σ'(z) = y(1 - y)
tanh'(z) = 1 - y²
In the formulas above, the derivatives of the sigmoid and tanh functions are both functions of the original function's output; once the original function is computed, it can be used to compute the value of the derivative. LSTM needs to learn 8 groups of parameters: the weight matrix W_f and bias term b_f of the forget gate, the weight matrix W_i and bias term b_i of the input gate, the weight matrix W_o and bias term b_o of the output gate, and the weight matrix W_c and bias term b_c for computing the cell state. Because the two parts of each weight matrix use different formulas in back-propagation, in the following derivation the weight matrices W_f, W_i, W_o, W_c are all written as two separate matrices: W_fh, W_fx, W_ih, W_ix, W_oh, W_ox, W_ch, W_cx.
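The two derivative identities can be verified numerically; a sketch with our own helper names, checking σ'(z) = y(1 − y) and tanh'(z) = 1 − y² against a central difference.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def numeric_derivative(fn, z, eps=1e-6):
    """Central-difference approximation of fn'(z)."""
    return (fn(z + eps) - fn(z - eps)) / (2 * eps)

z = 0.7
y_sig, y_tanh = sigmoid(z), math.tanh(z)
# sigma'(z) = y(1 - y) and tanh'(z) = 1 - y^2, both expressed through the output y
assert abs(numeric_derivative(sigmoid, z) - y_sig * (1 - y_sig)) < 1e-6
assert abs(numeric_derivative(math.tanh, z) - (1 - y_tanh ** 2)) < 1e-6
print("derivative identities hold at z =", z)
```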
E) The element-wise product symbol ∘: when ∘ acts on two vectors a and b, the operation gives the vector of element-wise products, (a ∘ b)_i = a_i · b_i. When ∘ acts on a vector a and a matrix X, a ∘ X is equivalent to diag(a) · X, i.e. row i of X is multiplied by a_i. When ∘ acts on two matrices, the elements at corresponding positions of the two matrices are multiplied. When a row vector is right-multiplied by a diagonal matrix, this is equivalent to the row vector being multiplied element-wise by the vector formed by the diagonal of the matrix: a^T · diag(b) = a^T ∘ b^T.
At moment t, the output value of LSTM is h_t. The error term δ_t of moment t is defined as:
δ_t = ∂E/∂h_t
Assuming the error term is the derivative of the loss function with respect to the output value, four weighted inputs and their corresponding error terms need to be defined, formulas as follows:
net_{f,t} = W_f · [h_{t-1}, x_t] + b_f = W_fh · h_{t-1} + W_fx · x_t + b_f
net_{i,t} = W_i · [h_{t-1}, x_t] + b_i = W_ih · h_{t-1} + W_ix · x_t + b_i
net_{c̃,t} = W_c · [h_{t-1}, x_t] + b_c = W_ch · h_{t-1} + W_cx · x_t + b_c
net_{o,t} = W_o · [h_{t-1}, x_t] + b_o = W_oh · h_{t-1} + W_ox · x_t + b_o
with δ_{f,t} = ∂E/∂net_{f,t}, δ_{i,t} = ∂E/∂net_{i,t}, δ_{c̃,t} = ∂E/∂net_{c̃,t}, δ_{o,t} = ∂E/∂net_{o,t}.
F) Transmitting the error term backward along time: the error term δ_{t-1} of moment t-1 is calculated, formula as follows:
δ_{t-1}^T = ∂E/∂h_{t-1} = δ_t^T · ∂h_t/∂h_{t-1}
Applying the total derivative formula gives formula seven:
δ_t^T · ∂h_t/∂h_{t-1} = δ_t^T (∂h_t/∂o_t)(∂o_t/∂net_{o,t})(∂net_{o,t}/∂h_{t-1}) + δ_t^T (∂h_t/∂c_t)(∂c_t/∂f_t)(∂f_t/∂net_{f,t})(∂net_{f,t}/∂h_{t-1}) + δ_t^T (∂h_t/∂c_t)(∂c_t/∂i_t)(∂i_t/∂net_{i,t})(∂net_{i,t}/∂h_{t-1}) + δ_t^T (∂h_t/∂c_t)(∂c_t/∂c̃_t)(∂c̃_t/∂net_{c̃,t})(∂net_{c̃,t}/∂h_{t-1})    (formula seven)
Each partial derivative in formula seven is computed. From formula 6:
∂h_t/∂o_t = diag[tanh(c_t)],  ∂h_t/∂c_t = diag[o_t ∘ (1 - tanh²(c_t))]
From formula 4:
∂c_t/∂f_t = diag[c_{t-1}],  ∂c_t/∂i_t = diag[c̃_t],  ∂c_t/∂c̃_t = diag[i_t]
Because of the following operations:
o_t = σ(net_{o,t}),  net_{o,t} = W_oh · h_{t-1} + W_ox · x_t + b_o
f_t = σ(net_{f,t}),  net_{f,t} = W_fh · h_{t-1} + W_fx · x_t + b_f
i_t = σ(net_{i,t}),  net_{i,t} = W_ih · h_{t-1} + W_ix · x_t + b_i
c̃_t = tanh(net_{c̃,t}),  net_{c̃,t} = W_ch · h_{t-1} + W_cx · x_t + b_c
the following partial derivatives are obtained:
∂o_t/∂net_{o,t} = diag[o_t ∘ (1 - o_t)],  ∂net_{o,t}/∂h_{t-1} = W_oh
∂f_t/∂net_{f,t} = diag[f_t ∘ (1 - f_t)],  ∂net_{f,t}/∂h_{t-1} = W_fh
∂i_t/∂net_{i,t} = diag[i_t ∘ (1 - i_t)],  ∂net_{i,t}/∂h_{t-1} = W_ih
∂c̃_t/∂net_{c̃,t} = diag[1 - c̃_t²],  ∂net_{c̃,t}/∂h_{t-1} = W_ch
Substituting the above partial derivatives into formula seven yields formula eight:
δ_{t-1}^T = δ_{o,t}^T · W_oh + δ_{f,t}^T · W_fh + δ_{i,t}^T · W_ih + δ_{c̃,t}^T · W_ch    (formula eight)
According to the definitions of δ_{o,t}, δ_{f,t}, δ_{i,t} and δ_{c̃,t}, formulas nine, ten, eleven and twelve are obtained, as follows:
δ_{o,t}^T = δ_t^T ∘ tanh(c_t)^T ∘ o_t^T ∘ (1 - o_t)^T    (formula nine)
δ_{f,t}^T = δ_t^T ∘ o_t^T ∘ (1 - tanh²(c_t))^T ∘ c_{t-1}^T ∘ f_t^T ∘ (1 - f_t)^T    (formula ten)
δ_{i,t}^T = δ_t^T ∘ o_t^T ∘ (1 - tanh²(c_t))^T ∘ c̃_t^T ∘ i_t^T ∘ (1 - i_t)^T    (formula eleven)
δ_{c̃,t}^T = δ_t^T ∘ o_t^T ∘ (1 - tanh²(c_t))^T ∘ i_t^T ∘ (1 - c̃_t²)^T    (formula twelve)
Formulas eight to twelve are the formulas for propagating the error term backward along time. Applying them repeatedly, formula thirteen, which passes the error term forward to any moment k, is obtained:
δ_k^T = ∏_{j=k}^{t-1} ( δ_{o,j}^T · W_oh + δ_{f,j}^T · W_fh + δ_{i,j}^T · W_ih + δ_{c̃,j}^T · W_ch )    (formula thirteen)
G) The error term is transmitted to the layer above. Assume the current layer is layer l; the error term of layer l-1 is defined as the derivative of the error function with respect to the weighted input of layer l-1, formula as follows:
δ_t^{l-1} = ∂E/∂net_t^{l-1}
The input x_t of LSTM is given by the formula:
x_t = f^{l-1}(net_t^{l-1})
In the formula above, f^{l-1} denotes the activation function of layer l-1. Taking the derivative of E with respect to net_t^{l-1} and applying the total derivative formula propagates the error to the layer above:
(δ_t^{l-1})^T = ( δ_{f,t}^T · W_fx + δ_{i,t}^T · W_ix + δ_{c̃,t}^T · W_cx + δ_{o,t}^T · W_ox ) ∘ f'^{l-1}(net_t^{l-1})
H) Calculation of the weight gradients: the gradients of W_fh, W_ih, W_ch and W_oh are the sums of the gradients of each moment. Their gradients at moment t are found first, formulas as follows:
∂E/∂W_oh,t = δ_{o,t} · h_{t-1}^T,  ∂E/∂W_fh,t = δ_{f,t} · h_{t-1}^T,  ∂E/∂W_ih,t = δ_{i,t} · h_{t-1}^T,  ∂E/∂W_ch,t = δ_{c̃,t} · h_{t-1}^T
The gradients of each moment are added together to obtain the final gradients, formula as follows:
∂E/∂W_oh = Σ_{j=1}^{t} δ_{o,j} · h_{j-1}^T, and likewise for W_fh, W_ih and W_ch.
The bias term gradients of b_f, b_i, b_c and b_o at each moment are, formulas as follows:
∂E/∂b_o,t = δ_{o,t},  ∂E/∂b_f,t = δ_{f,t},  ∂E/∂b_i,t = δ_{i,t},  ∂E/∂b_c,t = δ_{c̃,t}
The bias term gradients of each moment are added together, formula as follows:
∂E/∂b_o = Σ_{j=1}^{t} δ_{o,j}, and likewise for b_f, b_i and b_c.
According to the error terms, the weight gradients of W_fx, W_ix, W_cx and W_ox are sought, formulas as follows:
∂E/∂W_ox = δ_{o,t} · x_t^T,  ∂E/∂W_fx = δ_{f,t} · x_t^T,  ∂E/∂W_ix = δ_{i,t} · x_t^T,  ∂E/∂W_cx = δ_{c̃,t} · x_t^T;
I) Each output value of the long short-term memory neural network is mean-pooled, the output vector is converted by the softmax function, and the behavior classification label vector is output; the label at the maximum-value element indicates the class label vector to which the feature map belongs, showing whether the behavior is negative or positive;
J) Finally, the cross-entropy error function is used as the optimization objective for the model, formula as follows:
L = -(1/N) Σ_{n=1}^{N} y_n^T · log(o_n)
In the formula above, N is the number of training samples, the vector y_n is the label of sample n, the vector o_n is the output of the network, and the label y_n is a one-hot vector;
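For one-hot labels, the cross-entropy objective reduces to averaging the negative log of the probability assigned to each correct class; a sketch with our own toy outputs.

```python
import math

def cross_entropy(outputs, labels):
    """L = -(1/N) * sum_n y_n . log(o_n), with one-hot label vectors y_n."""
    n = len(outputs)
    return -sum(math.log(o[y.index(1)]) for o, y in zip(outputs, labels)) / n

outs = [[0.7, 0.2, 0.1],
        [0.1, 0.8, 0.1]]          # toy softmax outputs for two samples
labs = [[1, 0, 0],
        [0, 1, 0]]                # one-hot labels
print(round(cross_entropy(outs, labs), 4))  # -(ln 0.7 + ln 0.8) / 2 ≈ 0.2899
```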
K) Jump to step (a) and repeat steps (a)~(j) with the input data until the network error is less than the given value, proving that a high-precision long short-term memory neural network model has been trained; training ends;
L) Any one group of the acquired, pre-processed feature map sequences is tested: the behavior classification result label vector is obtained through steps (a)~(j), and the label at the maximum-value element indicates the behavior class label of this test feature map, realizing behavior recognition.
The artificial intelligence early-warning operating system (500) is an AI early-warning operating system developed on the basis of the Linux operating system architecture. The system includes a brain-like neural network system, a multi-dimensional human-machine-object collaborative interoperation system, a public safety intelligent monitoring, early-warning and changeable prevention-and-control system, an autonomous unmanned servo system and a space-ground integrated information network platform system. It is the computer program used to manage and control the computer hardware, software and data resources; the interface through which artificial intelligence early-warning systems at all levels communicate with the Internet-plus distributed early-warning police kiosks; the interface between cloud computing, cloud storage, cloud databases and the artificial intelligence early-warning system, the Internet-plus distributed early-warning police kiosks and other software; and the communication interface between the multi-dimensional human-machine-object collaborative interoperation system and mobile devices and smart televisions. It provides human-machine interface support for other application software, including the brain-like neural network system, the multi-dimensional human-machine-object collaborative interoperation system, the public safety intelligent monitoring, early-warning and prevention-and-control system, the autonomous unmanned servo system, the space-ground integrated network information platform system, the intelligent things and risk factor data acquisition system, and the risk factor management system. The subsystems of the artificial intelligence early-warning operating system (500) include the dynamic identification system, machine vision system, actuator system, cognitive behavior system, file system, process management, inter-process communication, memory management, network communication, security mechanism, drivers and user interface.
Cloud computing (600) is designed on the basis of the open-source Hadoop framework and uses cluster advantages to perform high-speed computing and storage. Cloud computing (600) includes infrastructure as a service, platform as a service and software as a service, and is used to run the risk factor collection module, risk factor reasoning module and risk factor evaluation module on distributed computers. Through the network, a huge computing program is automatically split into numerous smaller subprograms, which are handed to a large system composed of multiple servers for searching, comparison and analysis of massive data information, hierarchical reasoning and early-warning value assessment; the processing result is then returned to the user and stored.
The cloud database dynamic blacklist comparative analysis (700) module: the cloud database includes an original dynamic information database, an original image feature information database, a real-time risk factor acquisition image information database, a real-time risk factor acquisition dynamic information database, a risk factor collection database, a risk factor reasoning database, a risk factor assessment database, a risk factor response database, a risk factor management evaluation database, a real-time judgment basis database, a judgment rule database and an accident instance database. The cloud database is used for the cluster application of the cloud computing (600) system: distributed system files are assembled through application software to work together, providing data storage and business access for users. An online data storage module is provided, in which a face image blacklist, a dynamic feature information blacklist, a biological information blacklist and a voice information blacklist are stored. The acquired face images, dynamic feature information, biological information and voice information are compared with the face image blacklist, dynamic feature information blacklist, biological information blacklist and voice information blacklist in the storage module; if the similarity reaches a preset early-warning value, the early-warning system generates early-warning information for the event in time, carries out risk factor reasoning, generates a warning-level warning message, and feeds it back to the upper-level early-warning system for assessment and risk management evaluation.
The target person identity determination (800) module is used to process the early-warning information generated by the cloud database dynamic blacklist comparative analysis (700), the early-warning value assessment, the generated warning-level warning messages and the early-warning signals fed back to the upper-level early-warning system, and, according to cloud computing (600), to update in real time the data transmitted by the cloud database dynamic blacklist comparative analysis (700); it is also used to store the information data generated when the artificial intelligence early-warning system (500) consults the cloud database information.
The local database module (900) is used to store the warning information generated by the same-level artificial intelligence early-warning operating system, to store the information sent to the upper-level artificial intelligence early-warning operating system and the feedback information, and to store the information sent to cloud computing and the feedback information.

Claims (10)

1. An artificial intelligence CNN, LSTM neural network dynamic identification system, characterized by comprising: a camera terminal (100), a server (200), a convolutional neural network (300), a long short-term memory neural network (400), an artificial intelligence early-warning operating system (500), cloud computing (600), a cloud database dynamic blacklist comparative analysis (700), a target person identity determination (800), and a local database module (900).
2. The artificial intelligence CNN, LSTM neural network dynamic identification system according to claim 1, characterized in that: the camera terminal (100) is used to acquire a video stream containing face feature, voice feature and behavior feature information, to automatically detect and track face, voice and behavior feature information in the image, and then to carry out a series of behavior-related technical treatments on the detected face feature, voice feature and behavior feature information, including face recognition, speech recognition, behavior feature information recognition and abnormal behavior recognition (including fighting, theft, an elderly person falling, crowd-gathering events, intrusion, etc.), and to send the image sequence via the network to the server (200); the network includes a local area network, the Internet or a wireless network.
3. The artificial intelligence CNN, LSTM neural network dynamic identification system according to claim 1, characterized in that: the server (200) includes a high-performance central processing unit CPU, a graphics processor GPU, a field-programmable gate array FPGA, a neural network processor NPU, a heterogeneous/reconfigurable processor, the convolutional neural network (300), the long short-term memory neural network (400), the artificial intelligence early-warning operating system (500), the cloud computing (600) module, the cloud database dynamic blacklist comparative analysis (700) module, the target person identity determination (800) module and the local database module (900); the server (200) is used to provide various high-performance computing services for clients in the network system, and under the control of the artificial intelligence early-warning operating system the server is connected to the network video server, program-controlled exchange, AI cloud computing server, AI database server, GPU cloud processor, NPU neural network cloud processor, heterogeneous/reconfigurable cloud processor, Web server, communication server, display, hybrid matrix, router and modem, providing centralized computing, information publishing and data management services for remote monitoring clients.
4. The artificial intelligence CNN, LSTM neural network dynamic identification system according to claim 1, characterized in that: the convolutional neural network module (300) includes an input layer, hardwired layer H1, convolutional layer C2, down-sampling layer S3, convolutional layer C4, down-sampling layer S5, convolutional layer C6 and a Dropout layer, feeding the long short-term memory neural network; the convolutional neural network uses 3D convolution kernels to extract the temporal and spatial features of the input video data, and by operating in both the spatial and temporal dimensions the 3D feature extractor can capture the motion information of the video stream; the 3D convolution feature extractor builds a 3D convolutional neural network architecture that generates multi-channel information from consecutive video frames, then performs convolution and down-sampling separately in each channel, and finally combines the information of all channels to obtain the final feature description; the model is enhanced with auxiliary outputs computed from high-level motion features, tested on the Trecvid data set and compared against several baseline methods; to cope with different environments, multiple different CNN architectures are combined for comprehensive judgment of the recognition result; 3D convolution stacks multiple consecutive frames into a cube and applies a 3D convolution kernel within the cube to capture motion feature information in the temporal and spatial dimensions; within a convolutional layer the weights of each 3D convolution kernel are shared, so one convolution kernel can extract only one kind of motion feature and multiple kernels extract multiple motion features; the cube convolved by each 3D convolution kernel consists of 7 consecutive frames, each of size 60 × 40; first, the pre-processed sequence of 7 consecutive frames, each of size 60 × 40, is input into the convolutional neural network for training and the weights of each layer are updated; to initialize convolutional layer C2, the convolution kernels and weights of the convolutional layers and the output layer are first randomly initialized from a Gaussian distribution with mean 0 and variance 0.001, and the biases are initialized to all zeros; the convolutional neural network is then trained as follows:
A) Input layer: 7 consecutive frames are input, each of size 60 × 40;
B) Hardwired layer H1: the H1 layer is used to generate multi-channel information and to encode prior knowledge. From each frame of the input layer this layer extracts information in five channels: the gray value, the gradient in the x direction, the gradient in the y direction, the optical flow in the x direction and the optical flow in the y direction. The first three values are computed for every frame, while the optical flow in the x and y directions requires two consecutive frames. Since the input layer contains 7 frames, the number of feature maps of H1 is 7 (gray value) + 7 (x-direction gradient) + 7 (y-direction gradient) + 6 (x-direction optical flow) + 6 (y-direction optical flow) = 33, and each feature map is still of size 60 × 40;
C) Convolutional layer C2: C2 is a 3D convolutional layer with kernel size 7 × 7 × 3, where 7 × 7 is the spatial size and 3 is the length of the time dimension. C2 convolves each of the five H1 channels separately; the feature-map count per kernel is (7-3+1) × 3 + (6-3+1) × 2 = 5 × 3 + 4 × 2 = 23, where the factor 3 covers the gray, x-gradient and y-gradient channels and the factor 2 covers the x and y optical-flow channels. Using 2 different convolution kernels, C2 has two groups of feature maps of 23 maps each, i.e. 23 × 2 feature maps in total, each of size (60-7+1) × (40-7+1) = 54 × 34. The trainable parameters number (7 × 7 × 3 × 5 + 5) × 2 = 740 × 2 = 1480, where the factor 5 covers the information of the 5 channels, the added 5 is the bias terms and the factor 2 covers the 2 different kernels. The convolution for a 3D convolutional layer of depth 1 is computed as follows:
a_{i,j} = f( Σ_{m=0}^{F-1} Σ_{n=0}^{F-1} w_{m,n} · x_{i+m,j+n} + w_b )   formula one
In formula one, x_{i,j} denotes the element in row i, column j of the image, w_{m,n} denotes the weight in row m, column n of the filter, w_b denotes the bias term of the filter, a_{i,j} denotes the element in row i, column j of the feature map, and f denotes the ReLU activation function;
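A minimal NumPy sketch of formula one, assuming "valid" convolution with no padding and stride 1, as the surrounding text implies:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def conv2d_depth1(x, w, wb):
    """Formula one: each output element is the ReLU of the kernel-window
    dot product plus the filter bias wb."""
    F = x.shape, w.shape[0]
    F = w.shape[0]
    H, W = x.shape
    out = np.zeros((H - F + 1, W - F + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = relu(np.sum(w * x[i:i+F, j:j+F]) + wb)
    return out

x = np.random.rand(60, 40)
w = np.random.rand(7, 7)
print(conv2d_depth1(x, w, 0.1).shape)   # (54, 34), matching (60-7+1) x (40-7+1)
```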
When the step length is 2, the feature map shrinks to 2 × 2; in general the output size is calculated as follows:
W2 = (W1 - F + 2P)/S + 1   formula two
H2 = (H1 - F + 2P)/S + 1   formula three
In formulas two and three, W2 denotes the width of the feature map after convolution, W1 the width of the image before convolution, F the width of the filter, P the amount of zero padding, S the step length, H2 the height of the feature map after convolution, and H1 the height of the image before convolution;
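The output-size bookkeeping of formulas two and three can be sketched as a small helper (the function name is an illustrative choice):

```python
def conv_output_size(w1, f, p, s):
    """Formula two/three: output size = (W1 - F + 2P)/S + 1."""
    return (w1 - f + 2 * p) // s + 1

# C2 in the claim: 60 x 40 input, 7 x 7 spatial kernel, no padding, stride 1
print(conv_output_size(60, 7, 0, 1), conv_output_size(40, 7, 0, 1))  # 54 34
```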
For a convolution whose depth is greater than 1, the formula is as follows:
a_{i,j} = f( Σ_{d=0}^{D-1} Σ_{m=0}^{F-1} Σ_{n=0}^{F-1} w_{d,m,n} · a_{d,i+m,j+n} + w_b )   formula four
In formula four, D denotes the depth, F the size of the filter (width or height, which are identical), w_{d,m,n} the weight in layer d, row m, column n of the filter, and a_{d,i,j} the pixel in layer d, row i, column j of the image; the other symbols have the same meaning as in formula one;
D) Down-sampling layer S3: using max pooling with a 2 × 2 sampling window, each feature map becomes (54/2) × (34/2) = 27 × 17; the number of feature maps, 23 × 2, is unchanged from the previous layer.
Its general representation, Lp pooling, is as follows:
A^l_k(i,j) = [ Σ_{x=1}^{f} Σ_{y=1}^{f} A^l_k(s0·i + x, s0·j + y)^p ]^{1/p}   formula five
In formula five, A^l_k(i,j) denotes the element in row i, column j of feature map k of layer l, K is the number of channels of the feature map, and f and s0 are layer parameters corresponding to the window size and the stride; in the special case of a unit kernel with size f = 1, stride s0 = 1 and no padding, the cross-correlation in the convolutional layer is equivalent to matrix multiplication. The stride s0 and the pixel indices (i, j) have the same meaning as in the convolutional layer, and p is a pre-assigned parameter: when p = 1, Lp pooling takes the average over the region, and as p → ∞ it takes the maximum value in the region, called max pooling, which preserves the background and texture information of the image at the cost of feature-map size;
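A minimal sketch of the max-pooling case (p → ∞) with non-overlapping windows, as S3 uses; the function name and window handling are illustrative choices:

```python
import numpy as np

def max_pool(fmap, k):
    """Non-overlapping k x k max pooling: reshape into k x k blocks and
    take the maximum of each block."""
    H, W = fmap.shape[0] // k * k, fmap.shape[1] // k * k
    blocks = fmap[:H, :W].reshape(H // k, k, W // k, k)
    return blocks.max(axis=(1, 3))

fmap = np.random.rand(54, 34)
print(max_pool(fmap, 2).shape)   # (27, 17), as S3 computes for C2's 54 x 34 maps
```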
E) Convolutional layer C4: C4 is a 3D convolutional layer with kernel size 7 × 6 × 3, where 7 × 6 is the spatial size and 3 is the time dimension. The feature-map count per kernel is (5-3+1) × 3 + (4-3+1) × 2 = 3 × 3 + 2 × 2 = 13, where the factor 3 covers the gray, x-gradient and y-gradient channels and the factor 2 covers the x and y optical-flow channels. Using 6 different convolution kernels, C4 has 6 groups of 13 feature maps each, i.e. 13 × 6 = 78 feature maps in total, each of size (27-7+1) × (17-6+1) = 21 × 12. The trainable parameters number (7 × 6 × 3 × 5 + 5) × 6 = 3810. The C4 calculation formula is identical to that of C2;
F) Down-sampling layer S5: S5 is a down-sampling layer using max pooling with a 3 × 3 sampling window, so each feature map becomes (21/3) × (12/3) = 7 × 4, and the number of feature maps is unchanged at 13 × 6 = 78. C6 is a 2D convolutional layer with kernel size 7 × 4 and 128 feature maps of size 1 × 1, each connected to all 78 S5 feature maps; its trainable parameters number (4 × 7 × 128 + 128) × (13 × 6) = 289536. The S3 and S5 calculation formulas are identical;
G) Convolutional layer C6: this layer convolves only in the spatial dimension, with a 7 × 4 kernel, so each output feature map is reduced to size 1 × 1. It contains 128 feature maps, each fully connected to all 78 (13 × 6) feature maps in S5; since each feature map is 1 × 1, i.e. a single value, these values form the final 128-dimensional feature vector. The C6 calculation formula is identical to that of C2;
H) Dropout layer: neurons in the network are randomly assigned zero weight; with the selected ratio of 0.5, 50% of the neurons receive zero weight. Through this operation the network becomes less sensitive to small changes in the data, which further improves its accuracy on unseen data. The output of the Dropout layer is still a 1 × 128 matrix; this length-128 vector is then input to the long short-term memory neural network (400) for time-series behaviour analysis;
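A sketch of the dropout operation described above; the inverted-scaling variant used here is a common convention and an assumption, not something the claim specifies:

```python
import numpy as np

def dropout(x, rate, rng):
    """Zero each activation with probability `rate`; inverted scaling
    keeps the expected activation unchanged (a common variant)."""
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

x = np.ones((1, 128))
y = dropout(x, 0.5, np.random.default_rng(0))
print(y.shape)            # still a 1 x 128 matrix, as stated above
```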
I) Initialize the network weights, input the data and repeat steps A) to H), propagate forward to obtain the output value, and compute the error between the network output and the target value. While the error exceeds the expected value, the error is passed back into the convolutional neural network and supervised training is performed with the BP back-propagation algorithm: the error between the result and the expected value is computed, then returned layer by layer to obtain each layer's error and update its weights, in the order Dropout layer, convolutional layer C6, down-sampling layer S5, convolutional layer C4, down-sampling layer S3, convolutional layer C2, hardwired layer H1. The overall error of the network is thus obtained, the error is passed back again, and the share of the total error borne by each layer is determined. During training, all parameters of the network are adjusted continually so that the loss function keeps decreasing; when the error is equal to or below the expected value, a high-precision convolutional neural network model has been trained and training ends;
J) Input the acquired, pre-processed cube sequence of 7 consecutive frames into the convolutional neural network for testing. After passing through steps A) to H), the data become a 1 × 128 vector, which is input to a softmax classifier for separation; the softmax classifier maps the signal to be separated onto the corresponding label. During training, each signal processed by the network yields a classification result that is compared with the corresponding label data to compute the relative error; over many training iterations the weights on the convolution windows are corrected continually so that the relative error keeps decreasing and finally converges. The test set is then input into the network for test classification, giving a classification-result label vector; the label at the maximum-value element indicates the class label of the tested motion feature, realizing behaviour recognition.
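The final classification step can be sketched as follows; the linear-layer weights and the five-class setup are illustrative stand-ins, since the claim does not fix the number of behaviour classes:

```python
import numpy as np

def softmax(z):
    z = z - z.max()              # numerical stability
    e = np.exp(z)
    return e / e.sum()

# The 1 x 128 CNN feature vector is mapped to class scores by a linear
# layer (weights here are random stand-ins), then softmax picks the label.
rng = np.random.default_rng(1)
feature = rng.random(128)
W, b = rng.random((5, 128)), rng.random(5)   # 5 hypothetical action classes
probs = softmax(W @ feature + b)
label = int(np.argmax(probs))                # index of the predicted class
print(probs.sum().round(6), 0 <= label < 5)
```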
5. The artificial intelligence CNN, LSTM neural network dynamic identifying system according to claim 1, characterised in that: the LSTM memory unit of the long short-term memory neural network (400) comprises a forget gate, an input gate and an output gate. The LSTM controls the content of the cell state c with two gates. One is the forget gate, which decides how much of the cell state c_{t-1} of the previous moment is retained in the current moment c_t: the previous output h_{t-1} and the current input x_t pass through a linear transformation followed by a sigmoid activation to produce f_t, and f_t is multiplied with c_{t-1} to give an intermediate result. The other is the input gate, which decides how much of the current input x_t of the network is saved into the cell state c_t: h_{t-1} and x_t pass through another linear transformation and sigmoid activation to produce i_t, while h_{t-1} and x_t also pass through another linear transformation and tanh activation; the latter result multiplied with i_t gives an intermediate result, which is added to the previous intermediate result to obtain c_t. Finally there is the output gate: the LSTM uses the output gate to control how much of the cell state c_t is output to the current output value h_t of the LSTM; h_{t-1} and x_t pass through another linear transformation and sigmoid activation to produce o_t, and o_t multiplied with tanh(c_t) gives h_t. Here c, x and h are vectors. Applications of LSTM memory-unit time-series data include behavioural-feature models, handwriting recognition, sequence generation and behaviour analysis; a sequence here means a series of time vectors. Assume the time series is:
X { x1, x2 ..., xN }
The time-series model maps such a sequence to an output at each step. The sequence of length-128 vectors output by the Dropout layer of the convolutional neural network is input to the long short-term memory neural network, which produces an output; the output vector is converted by the softmax function into a behaviour-classification label vector indicating positive or negative behaviour;
Forward training of the long short-term memory neural network proceeds as follows:
A) Calculation of the forget gate, as follows:
f_t = σ(W_f · [h_{t-1}, x_t] + b_f)   formula 1
In formula 1, W_f denotes the weight matrix of the forget gate, [h_{t-1}, x_t] denotes the concatenation of the two vectors into one longer vector, b_f denotes the bias term of the forget gate and σ denotes the sigmoid function. If the dimension of the input is d_x, the dimension of the hidden layer is d_h and the dimension of the cell state is d_c (usually d_c = d_h), then the weight matrix W_f of the forget gate has dimension d_c × (d_h + d_x). In fact, W_f is spliced from two matrices: W_fh, which applies to the input item h_{t-1} and has dimension d_c × d_h, and W_fx, which applies to the input item x_t and has dimension d_c × d_x. W_f can therefore be written as:
W_f · [h_{t-1}, x_t] = [W_fh, W_fx] · [h_{t-1}, x_t] = W_fh · h_{t-1} + W_fx · x_t
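The splicing identity can be checked numerically; the dimensions below are arbitrary illustrative choices:

```python
import numpy as np

# Check the splicing identity: W_f [h; x] equals W_fh h + W_fx x.
rng = np.random.default_rng(0)
dc, dh, dx = 4, 4, 3
Wfh, Wfx = rng.random((dc, dh)), rng.random((dc, dx))
h, x = rng.random(dh), rng.random(dx)

Wf = np.hstack([Wfh, Wfx])              # d_c x (d_h + d_x), as stated
lhs = Wf @ np.concatenate([h, x])
rhs = Wfh @ h + Wfx @ x
print(np.allclose(lhs, rhs))            # True
```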
B) Calculation of the input gate, as follows:
i_t = σ(W_i · [h_{t-1}, x_t] + b_i)   formula 2
In formula 2, W_i denotes the weight matrix of the input gate and b_i its bias term. Next, the cell state c̃_t describing the current input is calculated from the previous output and the current input, as follows:
c̃_t = tanh(W_c · [h_{t-1}, x_t] + b_c)   formula 3
The cell state c_t at the current moment is computed by multiplying the previous cell state c_{t-1} element-wise by the forget gate f_t, multiplying the currently input cell state c̃_t element-wise by the input gate i_t, and adding the two products, as follows:
c_t = f_t ∘ c_{t-1} + i_t ∘ c̃_t   formula 4
The symbol ∘ denotes element-wise multiplication. In this way the LSTM combines the current memory c̃_t with the long-term memory c_{t-1} to form the new cell state c_t: thanks to the control of the forget gate it can retain information from long ago, and thanks to the control of the input gate it prevents currently unimportant content from entering the memory;
C) Calculation of the output gate, as follows:
o_t = σ(W_o · [h_{t-1}, x_t] + b_o)   formula 5
The output gate controls the influence of the long-term memory on the current output, and the final output of the LSTM is determined jointly by the output gate and the cell state, as follows:
h_t = o_t ∘ tanh(c_t)   formula 6
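Formulas 1 to 6 together make one forward step of the LSTM memory unit; a self-contained NumPy sketch (the dimension choices are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step implementing formulas 1-6: forget gate f_t, input
    gate i_t, candidate state c~_t, new cell state c_t, output gate o_t
    and output h_t. W/b hold the four weight matrices and bias vectors."""
    hx = np.concatenate([h_prev, x_t])
    f = sigmoid(W['f'] @ hx + b['f'])        # formula 1
    i = sigmoid(W['i'] @ hx + b['i'])        # formula 2
    c_tilde = np.tanh(W['c'] @ hx + b['c'])  # formula 3
    c = f * c_prev + i * c_tilde             # formula 4 (element-wise)
    o = sigmoid(W['o'] @ hx + b['o'])        # formula 5
    h = o * np.tanh(c)                       # formula 6
    return h, c

rng = np.random.default_rng(0)
dx, dc = 128, 64                             # e.g. the 128-dim CNN features
W = {k: rng.random((dc, dc + dx)) * 0.01 for k in 'fico'}
b = {k: np.zeros(dc) for k in 'fico'}
h, c = lstm_step(rng.random(dx), np.zeros(dc), np.zeros(dc), W, b)
print(h.shape, c.shape)    # (64,) (64,)
```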
D) Back-propagation training of the long short-term memory neural network. LSTM back-propagation computes the error term δ of each neuron; the back-propagation of the LSTM error term proceeds in two directions, one backwards along the time axis, i.e. computing the error term of every earlier moment starting from the current moment t, the other propagating the error term up one layer. The steps are as follows:
Let the activation function of the gates be the sigmoid function and the activation function of the output be the tanh function; their derivatives are respectively:
σ'(z) = y(1 - y)
tanh'(z) = 1 - y²
The derivatives of both the sigmoid and the tanh function are functions of the original function, so once the original function has been computed its value can be reused to compute the derivative. The LSTM has 8 groups of parameters to learn: the weight matrix W_f and bias term b_f of the forget gate, the weight matrix W_i and bias term b_i of the input gate, the weight matrix W_o and bias term b_o of the output gate, and the weight matrix W_c and bias term b_c for computing the cell state. The two parts of each weight matrix use different formulas in back-propagation, so in the subsequent derivation the weight matrices W_f, W_i, W_o and W_c are all written as two separate matrices: W_fh, W_fx, W_ih, W_ix, W_oh, W_ox, W_ch and W_cx;
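The two derivative identities can be verified against numerical differentiation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Both derivatives are expressed through the original function's value y,
# so the forward value can be reused during back-propagation.
z = np.linspace(-2, 2, 5)
y_sig, y_tanh = sigmoid(z), np.tanh(z)
d_sig = y_sig * (1 - y_sig)          # sigma'(z) = y(1 - y)
d_tanh = 1 - y_tanh ** 2             # tanh'(z) = 1 - y^2

# sanity check against central finite differences
eps = 1e-6
print(np.allclose(d_sig, (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)))
print(np.allclose(d_tanh, (np.tanh(z + eps) - np.tanh(z - eps)) / (2 * eps)))
```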
E) The element-wise multiplication symbol ∘. When ∘ acts on two vectors, the operation is:
a ∘ b = [a1·b1, a2·b2, …, an·bn]^T
When ∘ acts on a vector and a matrix, each row i of the matrix is multiplied by element i of the vector:
(a ∘ X)_{i,j} = a_i · x_{i,j}
When ∘ acts on two matrices, the elements at corresponding positions of the two matrices are multiplied:
(A ∘ B)_{i,j} = a_{i,j} · b_{i,j}
When a row vector is right-multiplied by a diagonal matrix, this is equivalent to multiplying the row vector element-wise by the vector formed by the matrix diagonal:
a^T · diag(b) = [a1·b1, a2·b2, …, an·bn]
At moment t, the output value of the LSTM is h_t. Define the error term δ_t at moment t as the derivative of the loss function with respect to the output value:
δ_t = ∂E/∂h_t
Four weighted inputs, and the error terms corresponding to them, also need to be defined, as follows:
net_{f,t} = W_f · [h_{t-1}, x_t] + b_f = W_fh · h_{t-1} + W_fx · x_t + b_f
net_{i,t} = W_i · [h_{t-1}, x_t] + b_i = W_ih · h_{t-1} + W_ix · x_t + b_i
net_{c̃,t} = W_c · [h_{t-1}, x_t] + b_c = W_ch · h_{t-1} + W_cx · x_t + b_c
net_{o,t} = W_o · [h_{t-1}, x_t] + b_o = W_oh · h_{t-1} + W_ox · x_t + b_o
δ_{f,t} = ∂E/∂net_{f,t},  δ_{i,t} = ∂E/∂net_{i,t},  δ_{c̃,t} = ∂E/∂net_{c̃,t},  δ_{o,t} = ∂E/∂net_{o,t}
F) Propagate the error term backwards along time, i.e. compute the error term δ_{t-1} at moment t-1:
δ_{t-1}^T = ∂E/∂h_{t-1} = (∂E/∂h_t) · (∂h_t/∂h_{t-1}) = δ_t^T · ∂h_t/∂h_{t-1}
Expanding ∂h_t/∂h_{t-1} with the total-derivative formula along the o_t, f_t, i_t and c̃_t paths gives formula seven.
Each partial derivative appearing in formula seven is computed as follows:
From formula six it follows that:
∂h_t/∂o_t = diag[tanh(c_t)],  ∂h_t/∂c_t = diag[o_t ∘ (1 - tanh(c_t)²)]
From formula four it follows that:
∂c_t/∂f_t = diag[c_{t-1}],  ∂c_t/∂i_t = diag[c̃_t],  ∂c_t/∂c̃_t = diag[i_t]
Because:
o_t = σ(net_{o,t}),  net_{o,t} = W_oh · h_{t-1} + W_ox · x_t + b_o
f_t = σ(net_{f,t}),  net_{f,t} = W_fh · h_{t-1} + W_fx · x_t + b_f
i_t = σ(net_{i,t}),  net_{i,t} = W_ih · h_{t-1} + W_ix · x_t + b_i
the following partial derivatives are obtained:
∂o_t/∂net_{o,t} = diag[o_t ∘ (1 - o_t)],  ∂net_{o,t}/∂h_{t-1} = W_oh
∂f_t/∂net_{f,t} = diag[f_t ∘ (1 - f_t)],  ∂net_{f,t}/∂h_{t-1} = W_fh
∂i_t/∂net_{i,t} = diag[i_t ∘ (1 - i_t)],  ∂net_{i,t}/∂h_{t-1} = W_ih
Substituting the above partial derivatives into formula seven gives formula eight:
δ_{t-1}^T = δ_{o,t}^T · W_oh + δ_{f,t}^T · W_fh + δ_{i,t}^T · W_ih + δ_{c̃,t}^T · W_ch   formula eight
According to the definitions of δ_{o,t}, δ_{f,t}, δ_{i,t} and δ_{c̃,t}, formulas nine, ten, eleven and twelve are obtained:
δ_{o,t}^T = δ_t^T ∘ tanh(c_t) ∘ o_t ∘ (1 - o_t)   formula nine
δ_{f,t}^T = δ_t^T ∘ o_t ∘ (1 - tanh(c_t)²) ∘ c_{t-1} ∘ f_t ∘ (1 - f_t)   formula ten
δ_{i,t}^T = δ_t^T ∘ o_t ∘ (1 - tanh(c_t)²) ∘ c̃_t ∘ i_t ∘ (1 - i_t)   formula eleven
δ_{c̃,t}^T = δ_t^T ∘ o_t ∘ (1 - tanh(c_t)²) ∘ i_t ∘ (1 - c̃_t²)   formula twelve
Formulas eight to twelve are the formulas for propagating the error term backwards by one moment along time; applying them repeatedly yields formula thirteen, which passes the error term back to any earlier moment k;
G) Transmitting the error term to the previous layer: assume the current layer is layer l; the error term of layer l-1 is defined as the derivative of the error function with respect to the weighted input of layer l-1:
δ_t^{l-1} = ∂E/∂net_t^{l-1}
The input x_t of the LSTM is:
x_t = f^{l-1}(net_t^{l-1})
where f^{l-1} denotes the activation function of layer l-1. Taking the derivative of E with respect to net_t^{l-1} with the total-derivative formula propagates the error to the previous layer:
δ_t^{l-1,T} = (δ_{f,t}^T · W_fx + δ_{i,t}^T · W_ix + δ_{c̃,t}^T · W_cx + δ_{o,t}^T · W_ox) ∘ f'^{l-1}(net_t^{l-1})
H) Computing the weight gradients: the gradients of W_fh, W_ih, W_ch and W_oh are each the sum of the gradients at every moment. Their gradients at moment t are:
∂E/∂W_{oh,t} = δ_{o,t} · h_{t-1}^T,  ∂E/∂W_{fh,t} = δ_{f,t} · h_{t-1}^T,  ∂E/∂W_{ih,t} = δ_{i,t} · h_{t-1}^T,  ∂E/∂W_{ch,t} = δ_{c̃,t} · h_{t-1}^T
Adding the gradients of every moment together gives the final gradient, e.g.:
∂E/∂W_oh = Σ_{j=1}^{t} δ_{o,j} · h_{j-1}^T
and likewise for W_fh, W_ih and W_ch. The bias-term gradients of b_f, b_i, b_c and b_o at moment t are:
∂E/∂b_{o,t} = δ_{o,t},  ∂E/∂b_{f,t} = δ_{f,t},  ∂E/∂b_{i,t} = δ_{i,t},  ∂E/∂b_{c,t} = δ_{c̃,t}
Adding the bias-term gradients of every moment together gives, e.g.:
∂E/∂b_o = Σ_{j=1}^{t} δ_{o,j}
According to the error terms, the weight gradients of W_fx, W_ix, W_cx and W_ox are:
∂E/∂W_ox = δ_{o,t} · x_t^T,  ∂E/∂W_fx = δ_{f,t} · x_t^T,  ∂E/∂W_ix = δ_{i,t} · x_t^T,  ∂E/∂W_cx = δ_{c̃,t} · x_t^T
I) Apply mean pooling to the output values of the long short-term memory neural network, convert the pooled output vector with the softmax function, and output the behaviour-classification label vector; the label at the maximum-value element indicates the class label vector to which this feature map belongs, showing whether the behaviour is an act of omission or a positive act;
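A sketch of the mean-pooling-plus-softmax readout described in step I); the linear weights and the two-class setup (positive vs negative behaviour) are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Mean-pool the per-step LSTM outputs over time, then map the pooled
# vector to a behaviour-class distribution (weights are stand-ins).
rng = np.random.default_rng(0)
T, dh, classes = 7, 64, 2          # 2 classes: positive vs negative behaviour
outputs = rng.random((T, dh))      # h_1 .. h_T from the LSTM
pooled = outputs.mean(axis=0)      # mean pooling over time
probs = softmax(rng.random((classes, dh)) @ pooled)
print(int(np.argmax(probs)))       # index of the predicted behaviour label
```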
J) Finally, the cross-entropy error function is used as the optimization objective for the model:
E = -(1/N) · Σ_{n=1}^{N} y_n · log(o_n)
In the formula above, N is the number of training samples, the vector y_n is the label of sample n, the vector o_n is the output of the network, and the label y_n is a one-hot vector;
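A minimal sketch of the cross-entropy objective with one-hot labels, assuming the mean-over-samples form given above:

```python
import numpy as np

def cross_entropy(y, o, eps=1e-12):
    """Mean cross-entropy over N samples; y holds one-hot rows, o holds
    the network's softmax outputs. eps guards against log(0)."""
    return -np.mean(np.sum(y * np.log(o + eps), axis=1))

y = np.array([[1.0, 0.0], [0.0, 1.0]])       # one-hot labels
o = np.array([[0.9, 0.1], [0.2, 0.8]])       # network outputs
print(round(cross_entropy(y, o), 4))         # 0.1643
```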
K) Jump back to step A) and repeat steps A) to J) with the input data until the network error is below the given value, showing that a high-precision long short-term memory neural network model has been trained; training then ends;
L) For any group of the acquired, pre-processed feature-map sequences, testing through steps A) to J) yields the behaviour-classification result label vector; the label at the maximum-value element indicates the behaviour class label of this test, realizing behaviour recognition.
6. The artificial intelligence CNN, LSTM neural network dynamic identifying system according to claim 1, characterised in that: the artificial intelligence early warning operating system (500) is an AI early warning operating system developed on the basis of the Linux operating system architecture. The operating system comprises a brain-like neural network system, a multi-dimensional human-machine-object collaborative inter-operation system, a public-safety intelligent monitoring early warning and prevention-and-control system, an autonomous unmanned servo system and a space-ground integrated information network platform system. It provides the computer programs for managing and controlling computer hardware, software and data resources; the interfaces through which the early warning systems at each level communicate with the internet-plus distributed early warning police kiosks; the interfaces between cloud computing, cloud storage, the cloud database, the artificial intelligence early warning system, the internet-plus distributed early warning police kiosks and other software; the communication interfaces between the multi-dimensional human-machine-object collaborative inter-operation system and mobile devices and smart televisions; and human-machine interface support for other application software. Its components include the brain-like neural network system, the multi-dimensional human-machine-object collaborative inter-operation system, the public-safety intelligent monitoring early warning and prevention-and-control system, the autonomous unmanned servo system, the space-ground integrated network information platform system, the intelligent internet-of-things and risk-factor data acquisition system, and the risk-factor management system. The subsystems of the artificial intelligence early warning operating system (500) include a dynamic recognition system, a machine vision system, an actuator system, a cognitive behaviour system, a file system, process management, inter-process communication, memory management, network communication, a security mechanism, drivers and a user interface.
7. The artificial intelligence CNN, LSTM neural network dynamic identifying system according to claim 1, characterised in that: the cloud computing (600) is designed on the basis of the open-source Hadoop framework and uses the advantages of a cluster for high-speed computation and storage. The cloud computing (600) includes infrastructure as a service, platform as a service and software as a service, serving the risk-factor identification module, risk-factor reasoning module and risk-factor evaluation module that run on the distributed computers. Over the network, a huge computation task is automatically split into numerous smaller subprograms, which are handed to a large system composed of multiple servers; after searching, comparison and analysis against massive data, hierarchical reasoning and early-warning-value assessment, the processing result is returned to the user and stored in the cloud.
8. The artificial intelligence CNN, LSTM neural network dynamic identifying system according to claim 1, characterised in that: the comparative analysis with the cloud-database dynamic blacklist (700) module, wherein the cloud database comprises an original dynamic information database, an original image-feature information database, a real-time risk-factor image acquisition database, a real-time risk-factor dynamic-information acquisition database, a risk-factor collection database, a risk-factor reasoning database, a risk-factor assessment database, a risk-factor response database, a risk-factor management-evaluation database, a real-time judgment-basis database, a judgment-rule database and an accident-example database. The cloud database serves the cluster application of the cloud computing (600) system: distributed file systems are brought together through application software to cooperate and provide data storage and business access for users. An online data storage module is provided, and the storage module holds a facial-image blacklist, a dynamic-feature-information blacklist, a biological-information blacklist and a voice-information blacklist. The acquired facial images, dynamic feature information, biological information and voice information are compared with the facial-image, dynamic-feature-information, biological-information and voice-information blacklists in the storage module; if the similarity reaches the preset early-warning value, the early warning system promptly generates early warning prompt information, performs risk-factor reasoning and assessment, generates a warning-level warning message, and feeds it back to the upper-level early warning system for risk-management evaluation.
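The blacklist comparison can be sketched as a similarity-threshold check; the cosine metric, the 0.9 threshold and the `check_blacklist`/`suspect-001` names are hypothetical illustrations, since the claim only requires that similarity reach a preset early-warning value:

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_blacklist(feature, blacklist, threshold=0.9):
    """Compare a captured feature vector against each blacklist entry and
    raise an early-warning flag when similarity reaches the preset value."""
    for name, entry in blacklist.items():
        if cosine_similarity(feature, entry) >= threshold:
            return name                     # push early-warning info
    return None

rng = np.random.default_rng(0)
entry = rng.random(128)
blacklist = {'suspect-001': entry}
print(check_blacklist(entry + 0.001, blacklist))   # near-identical -> match
```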
9. The artificial intelligence CNN, LSTM neural network dynamic identifying system according to claim 1, characterised in that: the determine-target-person-identity (800) module is used to process the early warning prompt information generated by the comparative analysis with the cloud-database dynamic blacklist (700), assess the early-warning value, generate the warning-level warning message and the pre-warning signal fed back to the upper-level early warning system; it is updated in real time according to the cloud computing (600) with the data transmitted by the comparative analysis with the cloud-database dynamic blacklist (700), and stores the information data generated by the artificial intelligence early warning system (500) for consultation against the cloud-database information.
10. The artificial intelligence CNN, LSTM neural network dynamic identifying system according to claim 1, characterised in that: the local database module (900) is used to store the warning information generated by the same-level artificial intelligence early warning operating system, to store the information sent to and the feedback received from the upper-level artificial intelligence early warning operating system, and to store the information sent to and the feedback received from the cloud computing.
CN201910436838.8A 2019-05-24 2019-05-24 Artificial intelligence CNN, LSTM neural network dynamic identifying system Pending CN110110707A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910436838.8A CN110110707A (en) 2019-05-24 2019-05-24 Artificial intelligence CNN, LSTM neural network dynamic identifying system


Publications (1)

Publication Number Publication Date
CN110110707A true CN110110707A (en) 2019-08-09

Family

ID=67492030

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910436838.8A Pending CN110110707A (en) 2019-05-24 2019-05-24 Artificial intelligence CNN, LSTM neural network dynamic identifying system

Country Status (1)

Country Link
CN (1) CN110110707A (en)

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110766566A (en) * 2019-10-16 2020-02-07 国网安徽省电力有限公司信息通信分公司 Intelligent operation and maintenance behavior analysis system based on bidirectional LSTM model
CN110956111A (en) * 2019-11-22 2020-04-03 苏州闪驰数控系统集成有限公司 Artificial intelligence CNN, LSTM neural network gait recognition system
CN110996108A (en) * 2019-11-29 2020-04-10 合肥图鸭信息科技有限公司 Video frame reconstruction method and device and terminal equipment
CN111274395A (en) * 2020-01-19 2020-06-12 河海大学 Power grid monitoring alarm event identification method based on convolution and long-short term memory network
CN111325160A (en) * 2020-02-25 2020-06-23 北京百度网讯科技有限公司 Method and apparatus for generating information
CN111428769A (en) * 2020-03-18 2020-07-17 周升志 Artificial intelligence translation system for designing pet behavior language by software
CN111597937A (en) * 2020-05-06 2020-08-28 北京海益同展信息科技有限公司 Fish gesture recognition method, device, equipment and storage medium
CN112070212A (en) * 2020-08-26 2020-12-11 江苏建筑职业技术学院 Artificial intelligence CNN, LSTM neural network dynamic identification system
CN112233353A (en) * 2020-09-24 2021-01-15 国网浙江兰溪市供电有限公司 Artificial intelligence-based anti-fishing monitoring system and monitoring method thereof
CN112347034A (en) * 2020-12-02 2021-02-09 北京理工大学 Multifunctional integrated system-on-chip for nursing old people
CN112395577A (en) * 2021-01-19 2021-02-23 江苏红网技术股份有限公司 Target object identification method and system based on user tags
CN112434755A (en) * 2020-12-15 2021-03-02 电子科技大学 Data anomaly sensing method based on heterogeneous system
CN112698925A (en) * 2021-03-24 2021-04-23 江苏红网技术股份有限公司 Container mixed operation processing method of server cluster
CN112700461A (en) * 2021-03-19 2021-04-23 浙江卡易智慧医疗科技有限公司 System for pulmonary nodule detection and characterization class identification
CN112766292A (en) * 2019-11-04 2021-05-07 中移(上海)信息通信科技有限公司 Identity authentication method, device, equipment and storage medium
CN113037730A (en) * 2021-02-27 2021-06-25 中国人民解放军战略支援部队信息工程大学 Network encryption traffic classification method and system based on multi-feature learning
CN113225539A (en) * 2020-12-23 2021-08-06 全民认证科技(杭州)有限公司 Floating population artificial intelligence early warning system based on cloud computing
CN113507589A (en) * 2021-06-08 2021-10-15 山西三友和智慧信息技术股份有限公司 Safety monitoring device based on artificial intelligence
CN113516046A (en) * 2021-05-18 2021-10-19 平安国际智慧城市科技股份有限公司 Method, device, equipment and storage medium for monitoring biological diversity in area
CN113627607A (en) * 2020-05-07 2021-11-09 中国石油化工股份有限公司 Carbonate reservoir sedimentary facies identification method and device, electronic equipment and medium
CN113705803A (en) * 2021-08-31 2021-11-26 南京大学 Image hardware identification system based on convolutional neural network and deployment method
CN113938310A (en) * 2021-10-29 2022-01-14 水利部发展研究中心 Quality control management system for investment statistic data of water conservancy fixed assets
CN113949656A (en) * 2021-10-15 2022-01-18 任桓影 Security protection network monitoring system based on artificial intelligence
CN113949694A (en) * 2021-10-15 2022-01-18 保升(中国)科技实业有限公司 Bottom ecological environment system based on video AI calculation and big data analysis
CN114595874A (en) * 2022-02-24 2022-06-07 武汉大学 Ultra-short-term power load prediction method based on dynamic neural network
CN116193274A (en) * 2023-04-27 2023-05-30 北京博瑞翔伦科技发展有限公司 Multi-camera safety control method and system
US11880760B2 (en) 2019-05-01 2024-01-23 Samsung Electronics Co., Ltd. Mixed-precision NPU tile with depth-wise convolution
TWI832006B (en) * 2019-12-12 2024-02-11 南韓商三星電子股份有限公司 Method and system for performing convolution operation
CN117575636A (en) * 2023-12-19 2024-02-20 东莞莱姆森科技建材有限公司 Intelligent mirror control method and system based on video processing
CN118521141A (en) * 2024-07-24 2024-08-20 新瑞数城技术有限公司 Operation management method and system for park
US12073302B2 (en) 2018-06-22 2024-08-27 Samsung Electronics Co., Ltd. Neural processor
US12099912B2 (en) 2019-06-19 2024-09-24 Samsung Electronics Co., Ltd. Neural processor

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506712A (en) * 2017-08-15 2017-12-22 成都考拉悠然科技有限公司 Method for distinguishing is known in a kind of human behavior based on 3D depth convolutional networks
CN108549841A (en) * 2018-03-21 2018-09-18 南京邮电大学 A kind of recognition methods of the Falls Among Old People behavior based on deep learning
CN109447048A (en) * 2018-12-25 2019-03-08 苏州闪驰数控系统集成有限公司 A kind of artificial intelligence early warning system


Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12086700B2 (en) 2018-06-22 2024-09-10 Samsung Electronics Co., Ltd. Neural processor
US12073302B2 (en) 2018-06-22 2024-08-27 Samsung Electronics Co., Ltd. Neural processor
US11880760B2 (en) 2019-05-01 2024-01-23 Samsung Electronics Co., Ltd. Mixed-precision NPU tile with depth-wise convolution
US12099912B2 (en) 2019-06-19 2024-09-24 Samsung Electronics Co., Ltd. Neural processor
CN110766566A (en) * 2019-10-16 2020-02-07 国网安徽省电力有限公司信息通信分公司 Intelligent operation and maintenance behavior analysis system based on bidirectional LSTM model
CN112766292A (en) * 2019-11-04 2021-05-07 中移(上海)信息通信科技有限公司 Identity authentication method, device, equipment and storage medium
CN110956111A (en) * 2019-11-22 2020-04-03 苏州闪驰数控系统集成有限公司 Artificial intelligence CNN, LSTM neural network gait recognition system
CN110996108A (en) * 2019-11-29 2020-04-10 合肥图鸭信息科技有限公司 Video frame reconstruction method and device and terminal equipment
TWI832006B (en) * 2019-12-12 2024-02-11 南韓商三星電子股份有限公司 Method and system for performing convolution operation
CN111274395A (en) * 2020-01-19 2020-06-12 河海大学 Power grid monitoring alarm event identification method based on convolution and long-short term memory network
CN111274395B (en) * 2020-01-19 2021-11-12 河海大学 Power grid monitoring alarm event identification method based on convolution and long-short term memory network
CN111325160A (en) * 2020-02-25 2020-06-23 北京百度网讯科技有限公司 Method and apparatus for generating information
CN111325160B (en) * 2020-02-25 2023-08-29 北京百度网讯科技有限公司 Method and device for generating information
CN111428769A (en) * 2020-03-18 2020-07-17 周升志 Artificial intelligence translation system for designing pet behavior language by software
CN111597937B (en) * 2020-05-06 2023-08-08 京东科技信息技术有限公司 Fish gesture recognition method, device, equipment and storage medium
CN111597937A (en) * 2020-05-06 2020-08-28 北京海益同展信息科技有限公司 Fish gesture recognition method, device, equipment and storage medium
CN113627607A (en) * 2020-05-07 2021-11-09 中国石油化工股份有限公司 Carbonate reservoir sedimentary facies identification method and device, electronic equipment and medium
CN112070212A (en) * 2020-08-26 2020-12-11 江苏建筑职业技术学院 Artificial intelligence CNN, LSTM neural network dynamic identification system
CN112233353A (en) * 2020-09-24 2021-01-15 国网浙江兰溪市供电有限公司 Artificial intelligence-based anti-fishing monitoring system and monitoring method thereof
CN112347034B (en) * 2020-12-02 2024-07-12 北京理工大学 Multifunctional integrated on-chip system for aged care
CN112347034A (en) * 2020-12-02 2021-02-09 北京理工大学 Multifunctional integrated system-on-chip for nursing old people
CN112434755B (en) * 2020-12-15 2023-04-07 电子科技大学 Data anomaly sensing method based on heterogeneous system
CN112434755A (en) * 2020-12-15 2021-03-02 电子科技大学 Data anomaly sensing method based on heterogeneous system
CN113225539A (en) * 2020-12-23 2021-08-06 全民认证科技(杭州)有限公司 Floating population artificial intelligence early warning system based on cloud computing
CN112395577A (en) * 2021-01-19 2021-02-23 江苏红网技术股份有限公司 Target object identification method and system based on user tags
CN113037730A (en) * 2021-02-27 2021-06-25 中国人民解放军战略支援部队信息工程大学 Network encryption traffic classification method and system based on multi-feature learning
CN113037730B (en) * 2021-02-27 2023-06-20 中国人民解放军战略支援部队信息工程大学 Network encryption traffic classification method and system based on multi-feature learning
CN112700461A (en) * 2021-03-19 2021-04-23 浙江卡易智慧医疗科技有限公司 System for pulmonary nodule detection and characterization class identification
CN112698925A (en) * 2021-03-24 2021-04-23 江苏红网技术股份有限公司 Container mixed operation processing method of server cluster
CN112698925B (en) * 2021-03-24 2021-06-08 江苏红网技术股份有限公司 Container mixed operation processing method of server cluster
CN113516046A (en) * 2021-05-18 2021-10-19 平安国际智慧城市科技股份有限公司 Method, device, equipment and storage medium for monitoring biological diversity in area
CN113507589A (en) * 2021-06-08 2021-10-15 山西三友和智慧信息技术股份有限公司 Safety monitoring device based on artificial intelligence
CN113705803A (en) * 2021-08-31 2021-11-26 南京大学 Image hardware identification system based on convolutional neural network and deployment method
CN113705803B (en) * 2021-08-31 2024-05-28 南京大学 Image hardware identification system and deployment method based on convolutional neural network
CN113949656B (en) * 2021-10-15 2022-11-04 国家电投集团江西电力有限公司景德镇发电厂 Security protection network monitoring system based on artificial intelligence
CN113949694A (en) * 2021-10-15 2022-01-18 保升(中国)科技实业有限公司 Bottom ecological environment system based on video AI calculation and big data analysis
CN113949656A (en) * 2021-10-15 2022-01-18 任桓影 Security protection network monitoring system based on artificial intelligence
CN113938310B (en) * 2021-10-29 2023-11-28 水利部发展研究中心 Water conservancy fixed asset investment statistics data quality control management system
CN113938310A (en) * 2021-10-29 2022-01-14 水利部发展研究中心 Quality control management system for investment statistic data of water conservancy fixed assets
CN114595874A (en) * 2022-02-24 2022-06-07 武汉大学 Ultra-short-term power load prediction method based on dynamic neural network
CN114595874B (en) * 2022-02-24 2024-08-06 武汉大学 Ultra-short-term power load prediction method based on dynamic neural network
CN116193274A (en) * 2023-04-27 2023-05-30 北京博瑞翔伦科技发展有限公司 Multi-camera safety control method and system
CN117575636A (en) * 2023-12-19 2024-02-20 东莞莱姆森科技建材有限公司 Intelligent mirror control method and system based on video processing
CN117575636B (en) * 2023-12-19 2024-05-24 东莞莱姆森科技建材有限公司 Intelligent mirror control method and system based on video processing
CN118521141A (en) * 2024-07-24 2024-08-20 新瑞数城技术有限公司 Operation management method and system for park

Similar Documents

Publication Publication Date Title
CN110110707A (en) Artificial intelligence CNN, LSTM neural network dynamic identifying system
CN110738984B (en) Artificial intelligence CNN, LSTM neural network speech recognition system
CN110956111A (en) Artificial intelligence CNN, LSTM neural network gait recognition system
CN106599797B (en) An infrared face recognition method based on local parallel neural networks
CN110414305A (en) Artificial intelligence convolutional neural networks face identification system
Mo et al. Human physical activity recognition based on computer vision with deep learning model
JP6159489B2 (en) Face authentication method and system
CN108345894B (en) A traffic incident detection method based on deep learning and an entropy model
CN111368926B (en) Image screening method, device and computer readable storage medium
Zhao et al. A data-driven crowd simulation model based on clustering and classification
CN109034020A (en) A community risk monitoring and prevention method based on the Internet of Things and deep learning
CN114155270A (en) Pedestrian trajectory prediction method, device, equipment and storage medium
CN115100574A (en) Action identification method and system based on fusion graph convolution network and Transformer network
CN116343330A (en) Abnormal behavior identification method for infrared-visible light image fusion
CN109002746A (en) 3D solid fire identification method and system
CN111311702A (en) Image generation and identification module and method based on BlockGAN
Li et al. Hierarchical knowledge squeezed adversarial network compression
Tanchotsrinon et al. Facial expression recognition using graph-based features and artificial neural networks
CN109815887B (en) Multi-agent cooperation-based face image classification method under complex illumination
CN116523002A (en) Method and system for predicting dynamic graph generation countermeasure network track of multi-source heterogeneous data
CN116503379A (en) Lightweight improved YOLOv 5-based part identification method
Nugroho et al. A solution for the imbalanced training sets problem by CombNET-II and its application on fog forecasting
Ganga et al. Object detection and crowd analysis using deep learning techniques: Comprehensive review and future directions
Szu Neural networks based on Peano curves and hairy neurons
Grari et al. Comparative study of teachable machine for forest fire and smoke detection by drone

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190809