CN109146062A - A kind of Space Reconstruction method of video neuron node - Google Patents

A kind of Space Reconstruction method of video neuron node Download PDF

Info

Publication number
CN109146062A
CN109146062A (application CN201810924053.0A)
Authority
CN
China
Prior art keywords
video
neuron node
space reconstruction
reconstruction method
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810924053.0A
Other languages
Chinese (zh)
Inventor
邸磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vision Cloud Fusion (guangzhou) Technology Co Ltd
Original Assignee
Vision Cloud Fusion (guangzhou) Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vision Cloud Fusion (guangzhou) Technology Co Ltd filed Critical Vision Cloud Fusion (guangzhou) Technology Co Ltd
Priority to CN201810924053.0A priority Critical patent/CN109146062A/en
Publication of CN109146062A publication Critical patent/CN109146062A/en
Pending legal-status Critical Current

Links

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a spatial reconstruction method for video neuron nodes, comprising the following steps: S1, establish an artificial video neuron node for each video-aware unit and network the nodes to form a video neural network; S2, add a time axis to the massive video image information perceived by each video neuron node, perform spatial reconstruction, and connect to a back-end management platform for unified management. The invention establishes an artificial video neuron node for each video-aware unit and networks the nodes into a video neural network; the massive video image information perceived by each video neuron node has a time axis added to form "one camera, one file" records, undergoes spatial reconstruction, and is connected to a back-end management platform for unified management, providing integrated sharing of video image information resources across regions and departments. Standardized cascading and interconnection between heterogeneous platforms is achieved, and non-national-standard platforms can be upgraded to the standard at the same time.

Description

A kind of Space Reconstruction method of video neuron node
Technical field
The invention belongs to the field of computer software, and in particular relates to a spatial reconstruction method for video neuron nodes.
Background technique
A traditional video surveillance system consists of five major parts: cameras, transmission, control, display, and recording/registration. Cameras transmit video images over coaxial cable to a control host; the control host distributes the video signal to each monitor and recording device, and can feed the audio signal to be transmitted into the video recorder in synchrony. Through the control host, an operator can issue commands to move the pan/tilt head up, down, left, and right and to focus and zoom the lens, and can switch among multiple cameras and pan/tilt heads. With a dedicated video recording mode, images can be recorded, played back, and processed so that recording quality is optimal.
As times have developed, traditional video surveillance systems have exposed a series of shortcomings. Whether in the early analog closed-circuit television (CCTV) systems, the later digital video recorder (DVR) systems, or the currently mainstream network video surveillance (IP surveillance) systems, the core has mostly remained the applications of "monitoring" and "control": applications are carried out mainly through real-time monitoring by personnel and after-the-fact video playback (used mainly for security in the early days), and such systems cannot effectively integrate the various resources to achieve "intelligence".
Summary of the invention
In order to solve the technical problems of the prior art — video surveillance systems have too many monitored pictures, lack specificity, have poor real-time performance, and have a high error rate — the object of the present invention is to provide a spatial reconstruction method for video neuron nodes, which provides integrated sharing of video image information resources across regions and departments.
The technical scheme adopted by the invention is as follows:
A spatial reconstruction method for video neuron nodes comprises the following steps:
S1: establish an artificial video neuron node for each video-aware unit and network the nodes to form a video neural network;
S2: add a time axis to the massive video image information perceived by each video neuron node, perform spatial reconstruction, and connect to a back-end management platform for unified management.
The present invention establishes an artificial video neuron node for each video-aware unit and networks the nodes into a video neural network; the massive video image information perceived by each video neuron node has a time axis added to form "one camera, one file" records, undergoes spatial reconstruction, and is connected to a back-end management platform for unified management, providing integrated sharing of video image information resources across regions and departments. Standardized cascading and interconnection between heterogeneous platforms is achieved, and non-national-standard platforms can be upgraded to the standard at the same time.
The video-aware units include high-point video units and low-point video units; an artificial video neuron node is established for each high-point and low-point video unit, and the nodes are networked.
Specifically, the high-point video unit performs global monitoring of the region: a holographic situational map is superimposed on the video image, focus points within the field of view are visually annotated, the annotations are rendered graphically in the video image, the background information in the video image is given a structured description, and the geographic location, scene information, and video stream are encoded together with a time axis to form a composite video stream.
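The encoding step above — bundling geographic location, scene information, and the video payload together on a time axis — can be sketched as follows. This is a minimal illustration; the field names and JSON encoding are assumptions, not the patent's actual stream format.

```python
import json
import time

def composite_frame(video_chunk, lat, lon, scene, annotations):
    """Assemble one entry of a 'composite video stream': the video payload
    encoded together with geographic location, structured scene
    information, and a time-axis entry."""
    return {
        "timestamp": time.time(),          # the time-axis entry
        "geo": {"lat": lat, "lon": lon},   # geographic location
        "scene": scene,                    # structured scene description
        "annotations": annotations,        # visual marks for focus points
        "video": video_chunk,              # (placeholder) video payload
    }

frame = composite_frame("h264-chunk", 23.13, 113.26,
                        {"background": "intersection"},
                        [{"label": "focus-1", "bbox": [10, 20, 50, 60]}])
encoded = json.dumps(frame)  # one serialized entry of the composite stream
```

In practice the payload would be binary video data rather than a placeholder string, and the serialization would likely be a binary container format; JSON is used here only to keep the sketch readable.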
Specifically, the low-point video unit is associated with one or more of these focus points and views the monitored-area details from a low-point angle; the geographic location, scene information, and video stream are encoded together, and a time axis is added to form a composite video stream.
Further, the low-point video unit also superimposes a holographic situational map on the video image, visually annotates the marked objects within the field of view, renders the annotations graphically in the video image, and gives a structured description of the background information in the video image. A holographic situational map is constructed, upgrading from a position-association network to a dynamic multi-dimensional scene.
When retrieving video, one key press returns all high-point and low-point video units to the same video image time, achieving video synchronization.
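The one-key synchronization described above can be sketched as a lookup of the frame nearest a given wall-clock time in every unit's stream. The unit names and frame timestamps below are illustrative assumptions.

```python
# Per-unit lists of frame timestamps (seconds on the shared time axis).
streams = {
    "high_point_1": [0.0, 1.0, 2.0, 3.0],
    "low_point_1":  [0.1, 1.1, 2.1, 3.1],
    "low_point_2":  [0.05, 1.05, 2.05],
}

def sync_all(streams, t):
    """Return, for each unit, the frame timestamp closest to time t,
    so all units can seek to the same moment."""
    return {unit: min(times, key=lambda ft: abs(ft - t))
            for unit, times in streams.items()}

positions = sync_all(streams, 2.0)
```

A production system would binary-search sorted frame indexes rather than scan linearly, but the principle — every unit seeking to the same point on the shared time axis — is the same.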
A time axis is added to the composite video stream and spatial reconstruction is performed.
Preferably, the video neural network includes an input layer, an output layer, and one or more hidden layers, and the video neural network is trained with the backpropagation (BP) algorithm, with each video neuron node serving as input data for the input layer.
Preferably, the BP algorithm uses gradient descent.
The beneficial effects of the invention are:
The present invention establishes an artificial video neuron node for each video-aware unit and networks the nodes into a video neural network; the massive video image information perceived by each video neuron node has a time axis added to form "one camera, one file" records, undergoes spatial reconstruction, and is connected to a back-end management platform for unified management, providing integrated sharing of video image information resources across regions and departments. Standardized cascading and interconnection between heterogeneous platforms is achieved, and non-national-standard platforms can be upgraded to the standard at the same time.
The present invention constructs a holographic situational map, upgrading from a position-association network to a dynamic multi-dimensional scene.
Detailed description of the invention
Fig. 1 is a structural schematic diagram of the neural network of an embodiment of the present invention.
Fig. 2 is a schematic diagram of the core composition of the holographic situational map.
Fig. 3 is the overall architecture diagram of the key technologies of the holographic situational map.
Fig. 4 is a structural diagram of a single-layer neural network.
Fig. 5 is a structural diagram of a feedforward neural network.
Fig. 6 is a diagram of a two-layer neural network (decision boundary).
Fig. 7 is a diagram of a two-layer neural network (spatial transformation).
Specific embodiment
The present invention is further elaborated below with reference to the accompanying drawings and specific embodiments.
As shown in Fig. 1, the spatial reconstruction method for video neuron nodes of the present embodiment includes the following steps:
Step 1: establish an artificial video neuron node for each video-aware unit and network the nodes to form a video neural network.
Step 2: add a time axis to the massive video image information perceived by the neuron nodes, perform spatial reconstruction, and connect to a back-end management platform for unified management.
The video-aware units include high-point video units and low-point video units; an artificial video neuron node is established for each high-point and low-point video unit, and the nodes are networked.
In the present embodiment, the high-point video unit performs global monitoring of the region: a holographic situational map is superimposed on the video image, focus points within the field of view are visually annotated, the annotations are rendered graphically in the video image, the background information in the video image is given a structured description, and the geographic location, scene information, and video stream are encoded together with a time axis to form a composite video stream.
The low-point video unit is associated with one or more of these focus points and views the monitored-area details from a low-point angle; a holographic situational map is superimposed on the video image, the marked objects within the field of view are visually annotated, the annotations are rendered graphically in the video image, and the background information in the video image is given a structured description. The geographic location, scene information, and video stream are encoded together, and a time axis is added to form a composite video stream. The composite video stream then undergoes spatial reconstruction along the time axis.
The low-point video unit constructs a holographic situational map, upgrading from a position-association network to a dynamic multi-dimensional scene.
The holographic situational map is detailed below:
A holographic situational map refers to a personalized, position-centric intelligent service platform that, in a ubiquitous-network environment, uses position as the tie to dynamically associate multi-time-scale, multi-theme, multi-level, multi-granularity information about things or events. Its aim is to be people-oriented: based on position, it integrates and associates ubiquitous information of suitable geographic range, content type, level of detail, and time point or interval according to the user's application needs, and provides information services to users through expression methods adapted to the specific user.
Here, the ubiquitous network covers network systems such as sensor networks, the Internet, communication networks, and industry networks; they are both the information sources of the holographic situational map and its operating environment. Ubiquitous information is the things or events themselves obtained in the ubiquitous-network environment and their related information (such as position, state, and environment), covering the fundamental geographic information of the earth's surface, the structural information of independent geographic entities (such as buildings), the association information between geographic entities, industry information, and information about people and their preferences. Ubiquitous information can be directly or indirectly associated with a spatial position, forming overall information describing a specific thing or event.
Position refers to the space occupied by a specific target in the real world or a virtual environment. In the real world, a position can be a direct position expressed in geographic coordinates, or a relative position expressed by place name, address, relative bearing, and distance relation, used to describe the location of a geographic entity or element, the site of a social event, the path of a moving target, and so on; in a virtual environment, forms such as IP address, URL, and social-network account describe the position from which a user logs in or publishes information.
Ubiquitous information is associated through position; according to the specific application and demand, a specific tense, theme, level, and granularity are selected to describe the features of the associated thing or event. "Tense" reflects how the thing or event changes over time; "theme" refers to describing the thing or event from different perspectives; "level" describes features at the corresponding level based on the division of levels or ranks of the thing or event itself; "granularity" refers to the level of detail of the description of the thing or event, determined by user demand.
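The selection by theme, level, and granularity described above can be sketched as a simple filter over position-associated records. The record fields and values below are illustrative assumptions, not a schema from the patent.

```python
# Position-associated "ubiquitous information" records, each tagged with
# a theme, a level, and a granularity (plus a time-axis value).
records = [
    {"theme": "traffic",  "level": 1, "granularity": "coarse", "time": 10},
    {"theme": "traffic",  "level": 2, "granularity": "fine",   "time": 20},
    {"theme": "security", "level": 1, "granularity": "fine",   "time": 20},
]

def select(records, theme, level, granularity):
    """Keep only the records matching the requested theme, level,
    and granularity — the dimensions chosen per application demand."""
    return [r for r in records
            if r["theme"] == theme
            and r["level"] == level
            and r["granularity"] == granularity]

hits = select(records, "traffic", 2, "fine")
```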
The holographic situational map emphasizes converging, associating, analyzing, transmitting, and expressing ubiquitous information on a multi-dimensional map with position as the core. Its core composition is shown in Fig. 2.
Ubiquitous information is the most important data source of the holographic situational map and provides its data support; semantic position, as the core element of ubiquitous information, provides an effective association method for the holographic situational map; the dynamic multi-dimensional scene must satisfy the demands of ubiquitous information and spatial information changing over time. Therefore, ubiquitous information, semantic position, and dynamic multi-dimensional scene expression constitute the three core components of the holographic situational map.
From an understanding of the concept of the holographic situational map, its five major features can be summarized: real-time dynamics, semantic position association, indoor-outdoor integration, multi-dimensional spatio-temporal expression, and adaptivity.
Real-time dynamics refers to real-time, dynamic acquisition (access) of ubiquitous (position-associated) information from the Internet, sensor networks, industry networks, and communication networks, providing fast and accurate data support for information services and applications for public and professional-domain users. Semantic position association: traditional location services that comprehensively use multi-source position data lack expressive capability for position, whereas semantic position is rich in connotation; based on semantic position, association relations among people, things, and objects are established, forming a position-association network that provides users with personalized, intelligent location-based services. Indoor-outdoor integration realizes comprehensive, multi-scale, multi-granularity integrated expression and visualization of indoor, outdoor, above-ground, and underground spaces, promoting applications such as integrated indoor-outdoor navigation. Multi-dimensional spatio-temporal expression: the holographic situational map covers multiple subjects and crosses domains, providing the public, government, society, and individual enterprises with multi-dimensional expression forms such as 2D, 3D, and 4D (three-dimensional space + time) maps and integrated fusion of 3D scenes and panoramic images. Adaptivity means being people-oriented and adaptively meeting user demands with intelligent interaction: for example, when a visitor enters the hall of a laboratory, the system synthesizes an analysis of the user's position, direction, and role, and automatically associates and pushes information of interest to the user — such as landmark statues, stair entrances, and office directions — with the associated and pushed content varying with position and user role.
As a novel map service platform, research on the holographic situational map is still in its infancy; its key technical framework is shown in Fig. 3. Semantic position association dynamically perceives the position information present in ubiquitous information based on a semantic-position model, and forms a deep position-association network through simple position associations based on metric, orientation, topology, and semantics together with spatio-temporal distribution and clustering models and trend prediction, realizing comprehensive semantic position association. The technical framework of the dynamic multi-dimensional scene is constructed from four aspects: scene model, modeling, expression, and visualization.
Since map data services derive mainly from professional surveying departments and map vendors, the information they provide is relatively limited; meanwhile, under the big-data environment the volume of information grows ever larger and its content ever richer, so a novel technology for converging and integrating ubiquitous information is needed. The multi-dimensional map is an important form of expression of the ubiquitous information of the holographic situational map; on the basis of traditional map models and modeling methods, it is necessary to research rapid acquisition of indoor and outdoor scene data and user-oriented adaptive construction of map expression models, to solve the semantic inconsistency problem of indoor-outdoor integration, and to realize real-time, fast integrated indoor-outdoor visualization. The technology for associating heterogeneous ubiquitous information — with its diverse sources, complex types, and heterogeneous spatio-temporal references — performs deep perceptual association through position at the semantic and knowledge levels based on the semantic-position model of the holographic situational map, realizing comprehensive discovery of target objects.
When retrieving video, one key press returns all high-point and low-point video units to the same video image time, achieving video synchronization.
The video neural network includes an input layer, an output layer, and one or more hidden layers, and is trained with the BP algorithm, with each video neuron node serving as input data for the input layer. The BP algorithm uses gradient descent.
The hidden layers may include many logical machines, such as structuring, the holographic situational map, and AI reasoning.
A single neuron is shown in Fig. 4, with three input values x1, x2, x3. The formula for the result in the figure is:

h(x) = f(Wx + b) = f(W1·x1 + W2·x2 + W3·x3 + b)

where x is the input signal, which may in general be a matrix or a vector; f is the activation function; W is called the weight; and b is a bias parameter. The most common activation functions apply a nonlinear change; specific activation functions are described later. By analogy with a biological neural network, x can be regarded as the stimulus signal of the neuron, the neuron's output signal is the stimulus signal passed to the next neuron, and W and b can be regarded as the neuron's own characteristics: the neuron in fact performs one stage of signal processing on the input signal.
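The single-neuron computation can be sketched as follows. The sigmoid activation and the specific weights are illustrative assumptions, not values from the patent.

```python
import math

def neuron(x, w, b):
    """Single artificial neuron: weighted sum of the inputs plus the
    bias, passed through a sigmoid activation f."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))  # f = sigmoid

# Three inputs x1, x2, x3 as in Fig. 4.
out = neuron([1.0, 0.5, -0.5], [0.2, 0.4, 0.1], b=0.0)
```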
A single neuron obviously cannot process a complicated input signal, so neurons are combined into the feedforward neural network structure shown in Fig. 5. As the figure shows, a complete feedforward neural network consists of one input layer, one output layer, and one or more hidden layers. Raw data are first fed in through the input layer, pass through the computation of each hidden layer, and are finally output as the result by the output layer. Adjacent layers are joined by full connections between nodes, so the output of one layer is exactly the input of the next.
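The layer-by-layer forward pass described above can be sketched as follows; the network size and weights are illustrative assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(x, weights, biases):
    """One fully connected layer: every output unit sees every input."""
    return [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

def feedforward(x, layers):
    """Pass the input through each layer in turn; the output of one
    layer is the input of the next."""
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# 3 inputs -> 2 hidden units -> 1 output (illustrative weights).
net = [([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]], [0.0, 0.0]),
       ([[0.7, 0.8]], [0.0])]
y = feedforward([1.0, 0.5, -0.5], net)
```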
By varying the number of layers in the network and the number of neurons in each layer, we can form different neural network structures; together with different parameters — the weights W and biases b — almost every neural network is distinct. Each such neural network is essentially a computational model and can, in theory, solve many different computational problems, as long as a suitable network structure is built for the problem to be solved and the corresponding parameters are learned.
The parameters of every neural network are learned and optimized through forward and backward propagation over labeled data. The parameters are generally initialized first, either randomly or using a Gaussian function. The most common training method is to train the neural network parameters with the BP algorithm, whose algorithmic core is gradient descent: the error value of the objective function is minimized, and once the learning iterations reach a minimum, the neural network parameters can be considered to have reached a very good result.
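The gradient-descent core of BP can be sketched on the simplest possible case — a single sigmoid neuron with a squared-error objective. The learning rate, training data, and iteration count are illustrative assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(x, target, w, b, lr=0.5):
    """One gradient-descent step minimizing E = 0.5*(y - target)^2
    for a single sigmoid neuron."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    y = sigmoid(z)
    # dE/dz: chain rule through the squared error and the sigmoid.
    delta = (y - target) * y * (1.0 - y)
    # Move each parameter opposite its gradient (descent).
    w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
    b = b - lr * delta
    return w, b

w, b = [0.0, 0.0], 0.0
for _ in range(2000):
    w, b = train_step([1.0, 1.0], 1.0, w, b)
```

After the loop, the neuron's output on the training input has been driven close to the target of 1.0, illustrating how iterating the descent steps minimizes the objective.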
Faced with complicated nonlinear classification tasks, a two-layer neural network (with one hidden layer) can classify well. In the example below, lines A and B represent the data, regions a and b represent the regions carved out by the neural network, and the line between the two is the decision boundary, as shown in Fig. 6.
It can be seen that the decision boundary of this two-layer neural network is a very smooth curve and the classification is good. A single-layer network can only perform linear classification tasks, whereas a combination of two linear classifications can perform a nonlinear classification task. The decision boundary of the output layer can be taken out on its own and examined, as shown in Fig. 7.
It can be seen that the decision boundary of the output layer is still a straight line. The key is that, from the input layer to the hidden layer, the data undergo a spatial transformation. That is, in a two-layer neural network, the hidden layer performs a spatial transformation on the original data so that they become linearly classifiable; the output layer's decision boundary then draws a linear dividing line to classify them.
From this it follows that the key to a two-layer neural network's ability to perform nonlinear classification is the hidden layer. From the derivation of the matrix formula, multiplying a matrix by a vector is essentially a transformation of the vector's coordinate space. Therefore, the effect of the hidden layer's parameter matrix is to transform the original coordinate space of the data from linearly inseparable to linearly separable.
A two-layer neural network simulates the true nonlinear function in the data with two layers of linear models combined through nonlinear activations. Therefore, the essence of a multilayer neural network is the fitting of complicated functions.
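The classic demonstration of the point above is XOR: no single straight line separates its classes, yet a two-layer network classifies it exactly because the hidden layer transforms the inputs into a linearly separable space. The hand-set weights below are an illustrative assumption.

```python
def step(z):
    """Threshold activation: fires when the weighted sum is positive."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)      # hidden unit: logical OR
    h2 = step(x1 + x2 - 1.5)      # hidden unit: logical AND
    # Output layer draws a single linear boundary in (h1, h2) space:
    # "OR and not AND" = XOR.
    return step(h1 - h2 - 0.5)

results = [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

In the original (x1, x2) space the positive points (0,1) and (1,0) sit diagonally opposite each other; after the hidden layer maps them to (h1, h2) they become linearly separable, which is exactly the spatial transformation the text describes.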
The present invention is not limited to the above optional embodiments; anyone may derive products in various other forms under the inspiration of the present invention. Any variation in shape or structure that falls within the scope defined by the claims of the present invention falls within the protection scope of the present invention.

Claims (10)

1. A spatial reconstruction method for video neuron nodes, characterized by comprising the following steps:
S1: establishing an artificial video neuron node for each video-aware unit and networking the nodes to form a video neural network;
S2: adding a time axis to the massive video image information perceived by each video neuron node, performing spatial reconstruction, and connecting to a back-end management platform for unified management.
2. The spatial reconstruction method for video neuron nodes according to claim 1, characterized in that: the video-aware units include high-point video units and low-point video units, and an artificial video neuron node is established for each high-point and low-point video unit and the nodes are networked.
3. The spatial reconstruction method for video neuron nodes according to claim 2, characterized in that: the high-point video unit performs global monitoring of the region; a holographic situational map is superimposed on the video image, focus points within the field of view are visually annotated, the annotations are rendered graphically in the video image, the background information in the video image is given a structured description, and the geographic location, scene information, and video stream are encoded together to form a composite video stream.
4. The spatial reconstruction method for video neuron nodes according to claim 3, characterized in that: the low-point video unit is associated with one or more of the focus points, views the monitored-area details from a low-point angle, and encodes the geographic location, scene information, and video stream together to form a composite video stream.
5. The spatial reconstruction method for video neuron nodes according to claim 4, characterized in that: the low-point video unit also superimposes a holographic situational map on the video image, visually annotates the marked objects within the field of view, renders the annotations graphically in the video image, and gives a structured description of the background information in the video image.
6. The spatial reconstruction method for video neuron nodes according to any one of claims 2-5, characterized in that: when retrieving video, one key press returns all high-point and low-point video units to the same video image time, achieving video synchronization.
7. The spatial reconstruction method for video neuron nodes according to any one of claims 3-5, characterized in that: a time axis is added to the composite video stream and spatial reconstruction is performed.
8. The spatial reconstruction method for video neuron nodes according to claim 1, characterized in that: the video neural network includes an input layer, an output layer, and one or more hidden layers.
9. The spatial reconstruction method for video neuron nodes according to claim 8, characterized in that: the video neural network is trained with the BP algorithm, with each video neuron node serving as input data for the input layer.
10. The spatial reconstruction method for video neuron nodes according to claim 9, characterized in that: the BP algorithm uses gradient descent.
CN201810924053.0A 2018-08-14 2018-08-14 A kind of Space Reconstruction method of video neuron node Pending CN109146062A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810924053.0A CN109146062A (en) 2018-08-14 2018-08-14 A kind of Space Reconstruction method of video neuron node

Publications (1)

Publication Number Publication Date
CN109146062A true CN109146062A (en) 2019-01-04

Family

ID=64793075

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810924053.0A Pending CN109146062A (en) 2018-08-14 2018-08-14 A kind of Space Reconstruction method of video neuron node

Country Status (1)

Country Link
CN (1) CN109146062A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115294411A (en) * 2022-10-08 2022-11-04 国网浙江省电力有限公司 Power grid power transmission and transformation image data processing method based on neural network
CN115294411B (en) * 2022-10-08 2022-12-30 国网浙江省电力有限公司 Power grid power transmission and transformation image data processing method based on neural network

Similar Documents

Publication Publication Date Title
CN103795976B (en) A kind of full-time empty 3 d visualization method
US11151890B2 (en) 5th-generation (5G) interactive distance dedicated teaching system based on holographic terminal and method for operating same
CN108965825A (en) Video interlink dispatching method based on holographic situational map
Hussain et al. Intelligent embedded vision for summarization of multiview videos in IIoT
CN105630897B (en) Content-aware geographic video multilevel correlation method
CN112449093A (en) Three-dimensional panoramic video fusion monitoring platform
CN110532340B (en) Spatial information space-time metadata construction method
Wang Research on sports training action recognition based on deep learning
CN109902681B (en) User group relation determining method, device, equipment and storage medium
CN110610444A (en) Background data management system based on live broadcast teaching cloud
CN108875555B (en) Video interest area and salient object extracting and positioning system based on neural network
AU2022215283B2 (en) A method of training a machine learning algorithm to identify objects or activities in video surveillance data
Pan et al. Multi‐source information art painting fusion interactive 3d dynamic scene virtual reality technology application research
CN111914938A (en) Image attribute classification and identification method based on full convolution two-branch network
CN109146062A (en) A kind of Space Reconstruction method of video neuron node
Li et al. SEEVis: A smart emergency evacuation plan visualization system with data‐driven shot designs
Chen [Retracted] Semantic Analysis of Multimodal Sports Video Based on the Support Vector Machine and Mobile Edge Computing
Zhong A convolutional neural network based online teaching method using edge-cloud computing platform
Huacón et al. SURV: A system for massive urban data visualization
Rimboux et al. Smart IoT cameras for crowd analysis based on augmentation for automatic pedestrian detection, simulation and annotation
Zhao et al. Research on human behavior recognition in video based on 3DCCA
Sanyal et al. Study of holoportation: using network errors for improving accuracy and efficiency
Ramesh et al. An evaluation framework for auto-conversion of 2D to 3D video streaming using depth profile and pipelining technique in handheld cellular devices
CN117440140B (en) Multi-person remote festival service system based on virtual reality technology
Sen et al. Information three-dimensional display design of video surveillance command management system based on GIS technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190104