CN103903242B - Adaptive targets compressed sensing fusion tracking method based on video sensor network - Google Patents


Info

Publication number
CN103903242B
CN103903242B (application CN201410148440.1A)
Authority
CN
China
Prior art keywords
video
image
frame
sampling
sensor network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410148440.1A
Other languages
Chinese (zh)
Other versions
CN103903242A (en)
Inventor
方武 (Fang Wu)
冯蓉珍 (Feng Rongzhen)
宋志强 (Song Zhiqiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huang Zhenqiang
Original Assignee
Suzhou Institute of Trade and Commerce
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Institute of Trade and Commerce filed Critical Suzhou Institute of Trade and Commerce
Priority to CN201410148440.1A
Publication of CN103903242A
Application granted
Publication of CN103903242B

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to an adaptive target compressed-sensing fusion tracking method based on a video sensor network. A video sensor network test system including several video nodes with different viewing angles is built. According to each video node's image sparsity at the initial time, the initial measurement count of each node is computed and its initial image sequence is compressed-sensing (CS) sampled. Using the expected image variance as a threshold, the measurement count of frame t+1 for each video node is obtained by iterating from the measurement count of frame t, and the frame-t image sequence is CS-sampled. The measurement count is regulated by the image sparsity, so the optimal sampling amount is obtained and the image can be compressed at a higher compression ratio, reducing the computation and communication data volume; combined with multi-view image fusion based on tracking confidence and detection results, better tracking accuracy and a longer network usage time are achieved.

Description

Adaptive targets compressed sensing fusion tracking method based on video sensor network
Technical field
The present invention relates to a target compressed-sensing fusion tracking method, and in particular to an adaptive target compressed-sensing fusion tracking method based on a video sensor network.
Background technology
Because the computation and data-transmission loads of video processing are large, and sensor nodes are limited in resources and energy, image data must be compressed to reduce computation and communication and thereby extend the network lifetime.
Compressive sensing theory states that, as long as a signal is compressible or sparse in some transform domain, a measurement matrix incoherent with the transform basis can project the high-dimensional signal onto a low-dimensional space, and the original signal can then be reconstructed with high probability from these few projections by solving an optimization problem; such projections provably contain enough information to reconstruct the signal. The number of measurements required for decoding is far smaller than the number of samples required by classical sampling theory.
Compressed sensing (CS, Compressive Sensing) measures the signal with an $m \times n$ measurement matrix, $m \ll n$, where $m$ is the measurement-matrix dimension and $n$ the signal dimension, as in equation (1):

$$y = \Phi x = \Phi \Psi \alpha = \Theta \alpha \qquad (1)$$

where $\alpha$ is the sparse representation of the signal, $\Psi$ the sparse representation matrix, and $\Phi$ the measurement matrix.
If the signal $x$ is sparse in some transform domain, as in equation (2), and $\Phi$ is properly constructed,

$$\alpha = \Psi^T x \qquad (2)$$

then the signal can be perfectly recovered by solving equation (3):

$$\min_{\alpha} \|\alpha\|_{\ell_0} \quad \text{s.t.} \quad y = \Phi \Psi \alpha \qquad (3)$$
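As a concrete illustration of equations (1)-(3), the following Python sketch CS-samples a synthetic sparse signal with a random Gaussian measurement matrix and recovers it greedily; orthogonal matching pursuit stands in for the $\ell_0$ minimization, and all sizes and parameters are illustrative assumptions rather than values from the patent.

```python
# Minimal CS sampling/recovery sketch for equations (1)-(3).
import numpy as np
from scipy.fftpack import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

n, m, k = 256, 80, 8                         # signal dim, measurements, sparsity

rng = np.random.default_rng(0)
alpha = np.zeros(n)                          # k-sparse coefficients alpha
alpha[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Psi = idct(np.eye(n), axis=0, norm='ortho')  # columns: inverse-DCT basis vectors
x = Psi @ alpha                              # x = Psi * alpha

Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix Phi
y = Phi @ x                                  # y = Phi x, equation (1)

Theta = Phi @ Psi                            # Theta = Phi * Psi
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k,
                                fit_intercept=False).fit(Theta, y)
x_hat = Psi @ omp.coef_                      # reconstructed signal
print('relative error:', np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```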
In recent years, many studies have introduced compressed-sensing algorithms into video sensor networks, but traditional compressed-sensing algorithms sample the image sequence with a fixed measurement count M; when the image sparsity varies widely, the minimal sampled data volume cannot be obtained. A single-view target tracking system, moreover, is easily occluded and suffers poor tracking accuracy. A method that solves the above problems is therefore necessary.
Summary of the invention
The object of the present invention is to provide an adaptive target compressed-sensing fusion tracking method based on a video sensor network, solving the problems that the optimal sampled data volume cannot be obtained when image sparsity varies widely, and that single-view target tracking systems suffer poor tracking accuracy.
To solve the above problems, the present invention provides an adaptive target compressed-sensing fusion tracking method based on a video sensor network, comprising the following steps:
S1: build a video sensor network test system including several video nodes with different viewing angles, and construct the frame-$t$ background $x_t^b$ with a Gaussian mixture model (GMM); the frame-$t$ video sequence $x_t$ obtained in testing is the sum of the frame-$t$ foreground image $x_t^f$ and the frame-$t$ background $x_t^b$, where $t = 0, 1, 2, \ldots$;

S2: according to the image sparsity $K_0$ of each of the several video nodes at the initial time, compute each node's initial measurement count $M_0$, apply compressed-sensing (CS) sampling to its initial image sequence, obtain each node's sampled initial image $y_0$, then perform S4, where $M_0 \approx K_0 \log(n/K_0)$ and $n$ is the signal dimension;

S3: using the expected image variance $E(\sigma_t)$ as a threshold, iterate from the frame-$t$ measurement count $M_t$ to obtain each node's frame-$(t+1)$ measurement count $M_{t+1}$, with $E(\sigma_t) = (N - M_t)\,\sigma_d^2$, where $\sigma_d^2$ is the variance of the target image $x_d$;

when $\|\hat{x}_d - x_d\|_2 \le \alpha E(\sigma_t)$, then $M_{t+1} = \beta_0 M_t$;

when $\|\hat{x}_d - x_d\|_2 > \alpha E(\sigma_t)$, then $M_{t+1} = \beta_1 M_t$;

where $\hat{x}_d$ is the recovered target image, $x_d$ the target image, $\|\hat{x}_d - x_d\|_2$ the norm-2 recovery error, $\beta_0 \le 1$ and $\beta_1 \ge 1$ the shrink and growth factors respectively, and $\alpha$ a regulation parameter;

then CS-sample the frame-$(t+1)$ image with the measurement count $M_{t+1}$, obtain each node's sampled frame-$t$ image $y_t$, then perform S4;

S4: recover the frame-$t$ video sequence from the sampled frame-$t$ image $y_t$, and recover the frame-$t$ target image.
Preferably, S2 and S3 further comprise the following steps:
S51: after each video node recovers the frame-$t$ target image, detect the target tracking information with an unscented Kalman filter (UKF);

S52: send the target tracking information detected by the several video nodes to the central node for data fusion; the mapping between the frame-$t$ ground-plane coordinates $[u_t, v_t]$ and the frame-$t$ image-plane coordinates $[x_t, y_t]$ is

$$\lambda \begin{bmatrix} u_t \\ v_t \\ 1 \end{bmatrix} = H_j \begin{bmatrix} x_t \\ y_t \\ 1 \end{bmatrix},$$

where $H_j$ is the mapping matrix of the $j$-th pixel,

$$H_j = \begin{bmatrix} H_{11} & H_{12} & H_{13} \\ H_{21} & H_{22} & H_{23} \\ H_{31} & H_{32} & H_{33} \end{bmatrix},$$

which yields the ground-plane coordinates $[u_t, v_t]$ of each pixel:

$$u_t = \frac{H_{11}x_t + H_{12}y_t + H_{13}}{H_{31}x_t + H_{32}y_t + H_{33}}, \qquad v_t = \frac{H_{21}x_t + H_{22}y_t + H_{23}}{H_{31}x_t + H_{32}y_t + H_{33}},$$

with $\lambda$ a scale parameter;

S53: for each video node, obtain the weight of each pixel in the fused target tracking information from $G(u,v) = \sum_j w_j G_j(u,v)$ and $w_j = \eta_j / \sum_j \eta_j$, where $G_j(u,v)$ is the target position of the $j$-th pixel, $G(u,v)$ the fused target position, $\eta_j$ the confidence of the $j$-th pixel, and $w_j$ the fused weight of the $j$-th pixel based on the confidence $\eta_j$.
Preferably, the sampling rate m/n is at least 25%, where m is the measurement-matrix dimension and n the signal dimension.
Preferably, the image sparsity is $K_t = (\lambda_0 \log S_t + \lambda_1) S_t$, where $(\lambda_0, \lambda_1) \in \mathbb{R}^2$ are function model parameters and $S_t$ is the number of target pixels.
Preferably, S4 further comprises recovering the frame-$t$ target image $\hat{x}_d$ from the sampled frame-$t$ image $y_t$ via $\hat{x}_d = \Delta(y_t - \phi_t x_t^b)$, where $\phi_t x_t^b$ is the sampled frame-$t$ background and $\Delta$ denotes the CS recovery process.
Preferably, the compressed-sensing CS sampling obtains the sampled image in the spatial domain by applying the random measurement matrix $\phi_t$ to the frame-$t$ video sequence $x_t$.
Owing to the above technical scheme, the present invention has the following advantages and positive effects compared with the prior art:
1) the present invention adjusts the measurement count according to the image sparsity, obtains the optimal sampling amount, and compresses the image at a higher compression ratio, thus achieving a longer network usage time;
2) the present invention uses multi-view image fusion to synthesize the image information of the same scene acquired by video sensor network nodes under different viewing angles, obtaining a more accurate, complete and reliable description of the target image and thereby improving tracking performance.
Brief description of the drawings
Fig. 1 is the flow chart of the adaptive target compressed-sensing fusion tracking method based on a video sensor network of the present invention;
Fig. 2 is the flow block diagram of the embodiment of the present invention;
Fig. 3 compares the measurement ratios of the traditional fixed-measurement compressed-sensing algorithm and of the present invention;
Fig. 4 compares the root-mean-square errors of the traditional fixed-measurement compressed-sensing algorithm and of the present invention;
Fig. 5 shows the target tracking trajectory results in the embodiment of the present invention.
Detailed description of the invention
The present invention is further illustrated below with reference to the accompanying drawings and a specific embodiment.
As shown in Figs. 1-5, the adaptive target compressed-sensing fusion tracking method based on a video sensor network provided by the present invention comprises the following steps:
S1: build a video sensor network test system including several video nodes with different viewing angles, and construct the frame-$t$ background $x_t^b$ with a Gaussian mixture model (GMM), where $t = 0, 1, 2, \ldots$

The frame-$t$ video sequence $x_t$ can be expressed as the sum of the frame-$t$ foreground image $x_t^f$ and the frame-$t$ background $x_t^b$, as in equation (4):

$$x_t = x_t^f + x_t^b \qquad (4)$$
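As a minimal sketch of the GMM background model of step S1, OpenCV's MOG2 background subtractor can stand in for the mixed Gaussian model; the video file name and parameters below are illustrative assumptions.

```python
# GMM background/foreground split of equation (4), sketched with OpenCV MOG2.
import cv2

cap = cv2.VideoCapture('node0.avi')            # hypothetical node video
mog = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = mog.apply(frame)                 # support of foreground x_t^f
    background = mog.getBackgroundImage()      # current background estimate x_t^b
    # x_t = x_t^f + x_t^b: each frame splits into foreground plus background
cap.release()
```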
S2: according to the image sparsity $K_0$ of each of the several video nodes at the initial time, compute each node's initial measurement count $M_0$, apply compressed-sensing (CS) sampling to its initial image sequence $x_0$, obtain the sampled initial image $y_0$, then perform S4.
For the time-varying frame-$t$ image sequence $x_t$, a higher compression ratio must be obtained while the quality of image recovery is still guaranteed; the measurement count $M$ is therefore adjusted automatically according to the image sparsity.
The image sparsity function $K_t$ is

$$K_t = (\lambda_0 \log S_t + \lambda_1) S_t \qquad (6)$$

where $(\lambda_0, \lambda_1) \in \mathbb{R}^2$ are function model parameters and $S_t$ is the number of target pixels.
At the initial time, the measurement initial value $M_0$ for image sampling is computed by equation (7):

$$M_0 \approx K_0 \log(n / K_0) \qquad (7)$$

where $n$ is the signal dimension.
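The sparsity-driven measurement budget of equations (6) and (7) can be sketched as follows; the model parameters $\lambda_0$, $\lambda_1$ and the target size are illustrative assumptions, since the patent does not disclose concrete values.

```python
# Sketch of the sparsity function K_t and initial measurement count M_0.
import numpy as np

def sparsity(S_t, lam0=0.5, lam1=2.0):
    """Image sparsity K_t = (lam0 * log(S_t) + lam1) * S_t, equation (6)."""
    return (lam0 * np.log(S_t) + lam1) * S_t

def initial_measurements(K0, n):
    """Initial measurement count M_0 ~ K_0 * log(n / K_0), equation (7)."""
    return int(np.ceil(K0 * np.log(n / K0)))

n = 640 * 480                  # signal dimension: pixels per frame
K0 = sparsity(S_t=1200)        # sparsity for a hypothetical 1200-pixel target
M0 = initial_measurements(K0, n)
print(K0, M0)
```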
The frame-$t$ video sequence $x_t$ is sampled in the spatial domain by the random measurement matrix $\phi_t$; this constitutes the compressed-sensing CS sampling. The initial measurement count $M_0$ of each video node is computed separately, its initial image sequence $x_0$ is CS-sampled, and the sampled initial image $y_0$ is obtained.
From compressive sensing theory, too small an $M$ degrades the recovered image quality, while too large an $M$ produces a large sampled data volume and a low compression ratio and incurs high computation and transmission energy cost. CS sampling experiments show that an image recovered at a sampling rate of m/n = 10% still clearly contains the contour and position information of the image, but segmenting the foreground target requires a sampling rate of at least m/n = 25%, where m is the measurement-matrix dimension and n the signal dimension.
S3: using the expected image variance $E(\sigma_t)$ as a threshold, iterate from the frame-$t$ measurement count $M_t$ to obtain each node's frame-$(t+1)$ measurement count $M_{t+1}$; CS-sample the frame-$(t+1)$ image with the measurement count $M_{t+1}$, obtain each node's sampled frame-$t$ image $y_t$, then perform S4.
The method uses the expected image variance of equation (8) as the threshold:

$$E(\sigma_t) = (N - M_t)\,\sigma_d^2 \qquad (8)$$

where $\sigma_d^2$ is the variance of the target image $x_d$ and $N$ is the image size in pixels; the threshold size can thus be controlled through the measurement count $M_t$. With $\|\hat{x}_d - x_d\|_2$ denoting the norm-2 recovery error:

if $\|\hat{x}_d - x_d\|_2 \le \alpha E(\sigma_t)$, then $M_{t+1} = \beta_0 M_t$ (9);

if $\|\hat{x}_d - x_d\|_2 > \alpha E(\sigma_t)$, then $M_{t+1} = \beta_1 M_t$ (10);

where $\hat{x}_d$ is the recovered target image, $x_d$ the target image, $\beta_0 \le 1$ and $\beta_1 \ge 1$ the shrink and growth factors respectively, and $\alpha$ a regulation parameter.
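A minimal sketch of the adaptive measurement update of equations (8)-(10) follows; placing the regulation parameter $\alpha$ on the threshold side of the test is our reading of the patent, and the $\alpha$, $\beta_0$, $\beta_1$ values are illustrative.

```python
# Adaptive measurement count update per equations (8)-(10).
import numpy as np

def update_measurements(M_t, x_d, x_d_hat, N,
                        alpha=1.0, beta0=0.9, beta1=1.1):
    """Shrink M when recovery is good enough, grow it otherwise."""
    sigma_d2 = np.var(x_d)                   # target-image variance sigma_d^2
    threshold = (N - M_t) * sigma_d2         # E(sigma_t), equation (8)
    err = np.linalg.norm(x_d_hat - x_d)      # norm-2 recovery error
    if err <= alpha * threshold:
        return max(1, int(beta0 * M_t))      # equation (9): fewer measurements
    return min(N, int(beta1 * M_t))          # equation (10): more measurements
```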
S4: recover the frame-$t$ video sequence $x_t$ from the sampled frame-$t$ image $y_t$.
The frame-$t$ target image $\hat{x}_d$ can be recovered from the sampled frame-$t$ image $y_t$ by equation (5):

$$\hat{x}_d = \Delta(y_t - \phi_t x_t^b) \qquad (5)$$

where $\phi_t x_t^b$ is the sampled frame-$t$ background: subtracting the sampled background from the sampled image $y_t$ yields the sampled frame-$t$ target image. Here $y_t$ is the sampled image, $\hat{x}_d$ the recovered target image, and $\Delta$ denotes the CS recovery process.
For multi-view target tracking based on the video sensor network, S2 and S3 preferably further comprise the following steps:
S51: after each video node recovers the frame-$t$ target image, detect the target tracking information with an unscented Kalman filter (UKF).
The concrete model is:

$$x_t = \begin{bmatrix} I_{2\times2} & \Delta t\, I_{2\times2} \\ 0 & I_{2\times2} \end{bmatrix} x_{t-1} + \begin{bmatrix} \frac{\Delta t^2}{2} I_{2\times2} \\ \Delta t\, I_{2\times2} \end{bmatrix} v_{t-1} \qquad (11)$$

$$z_t = H x_t + w_t \qquad (12)$$

where $x_t$ is the target state vector at time $t$, $I_{2\times2}$ the 2-dimensional identity matrix, $v_{t-1}$ Gaussian process noise, $\Delta t$ the time interval, and $z_t = [u_t, v_t]$ the ground-plane coordinate value. The camera is assumed calibrated, and $H$ is the homography from the image plane to the ground plane.
The ground-plane coordinate $[u_t, v_t]$ of the target is updated as follows:

$$\hat{x}_t^- = F \hat{x}_{t-1} \qquad (13)$$

$$P_t^- = F P_{t-1} F^T + Q \qquad (14)$$

$$K_t = P_t^- H^T (H P_t^- H^T + R)^{-1} \qquad (15)$$

$$\hat{x}_t = \hat{x}_t^- + K_t (z_t - H \hat{x}_t^-) \qquad (16)$$

$$P_t = (I - K_t H) P_t^- \qquad (17)$$

where $\hat{x}_t^-$ and $P_t^-$ are the predicted frame-$t$ target state and covariance matrix, and $F$ is the target state transition matrix.
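The predict/update cycle of equations (13)-(17) can be sketched compactly as a standard linear Kalman filter (the patent applies the unscented variant); the noise covariances below are illustrative assumptions, and the shapes follow the constant-velocity model of equation (11).

```python
# Predict/update cycle of equations (13)-(17) for the model of (11)-(12).
import numpy as np

dt = 0.04                                    # frame interval at 25 fps
I2 = np.eye(2)
F = np.block([[I2, dt * I2], [np.zeros((2, 2)), I2]])   # state transition
H = np.hstack([I2, np.zeros((2, 2))])        # observe ground-plane position
Q = 1e-2 * np.eye(4)                         # process noise covariance
R = 1e-1 * np.eye(2)                         # measurement noise covariance

def kf_step(x, P, z):
    x_pred = F @ x                           # equation (13)
    P_pred = F @ P @ F.T + Q                 # equation (14)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # equation (15)
    x_new = x_pred + K @ (z - H @ x_pred)    # equation (16)
    P_new = (np.eye(4) - K @ H) @ P_pred     # equation (17)
    return x_new, P_new
```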
The method uses the weight of equation (18) as the confidence measure $\eta_j$ to weigh the tracking performance. It combines two factors: (1) the target image size, i.e. the number of pixels, where a larger target indicates a better detection result; and (2) the tracking accuracy $P_t$, which represents the tracking uncertainty:

$$\eta_j = \frac{\rho S_t}{\operatorname{trace}(P_t)} \qquad (18)$$

where $\operatorname{trace}(P_t)$ is the trace of the covariance matrix, $\rho$ a normalization parameter, and $S_t$ the number of detected target pixels.
S52: the information detected and tracked by each video node is sent to the central node for data fusion. The mapping between the frame-$t$ ground-plane coordinates $[u_t, v_t]$ and the frame-$t$ image-plane coordinates $[x_t, y_t]$ is determined by $H_j$:

$$\lambda \begin{bmatrix} u_t \\ v_t \\ 1 \end{bmatrix} = H_j \begin{bmatrix} x_t \\ y_t \\ 1 \end{bmatrix} \qquad (19)$$

where $\lambda$ is a scale parameter and $H_j$ the mapping matrix of the $j$-th pixel:

$$H_j = \begin{bmatrix} H_{11} & H_{12} & H_{13} \\ H_{21} & H_{22} & H_{23} \\ H_{31} & H_{32} & H_{33} \end{bmatrix} \qquad (20)$$

For each video node, the ground-plane coordinate position $[u_t, v_t]$ of the target can be computed:

$$u_t = \frac{H_{11}x_t + H_{12}y_t + H_{13}}{H_{31}x_t + H_{32}y_t + H_{33}} \qquad (21)$$

$$v_t = \frac{H_{21}x_t + H_{22}y_t + H_{23}}{H_{31}x_t + H_{32}y_t + H_{33}} \qquad (22)$$
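A small sketch of the image-to-ground projection of equations (19)-(22) follows; in practice $H_j$ comes from camera calibration, and the matrix entries and image point below are illustrative assumptions.

```python
# Image-plane to ground-plane mapping per equations (19)-(22).
import numpy as np

def to_ground_plane(H, x_t, y_t):
    """Map an image-plane point to ground-plane coordinates [u_t, v_t]."""
    p = H @ np.array([x_t, y_t, 1.0])        # homogeneous mapping, eq. (19)
    return p[0] / p[2], p[1] / p[2]          # divide out the scale lambda

H = np.array([[0.02, 0.00, -3.0],            # hypothetical calibrated homography
              [0.00, 0.03, -5.0],
              [0.00, 0.001, 1.0]])
u, v = to_ground_plane(H, x_t=320.0, y_t=240.0)
```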
S53: for each video node, the weight of each pixel after fusion is obtained from (23) and (24):

$$G(u,v) = \sum_j w_j G_j(u,v) \qquad (23)$$

$$w_j = \frac{\eta_j}{\sum_j \eta_j} \qquad (24)$$

where $G_j(u,v)$ is the target position of the $j$-th pixel, $G(u,v)$ the fused target position, $\eta_j$ the confidence of the $j$-th pixel, and $w_j$ the fused weight of the $j$-th pixel based on the confidence $\eta_j$.
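The confidence-weighted fusion of equations (18), (23) and (24) can be sketched as follows; the per-view positions, target sizes, and covariance traces are illustrative assumptions.

```python
# Confidence-weighted multi-view fusion per equations (18), (23), (24).
import numpy as np

def fuse_views(positions, S, P_traces, rho=1.0):
    """positions: (J, 2) ground-plane estimates; S: target pixel counts;
    P_traces: traces of each view's tracking covariance matrix."""
    eta = rho * np.asarray(S) / np.asarray(P_traces)   # confidences, eq. (18)
    w = eta / eta.sum()                                # weights, eq. (24)
    return w @ np.asarray(positions)                   # fused position, eq. (23)

fused = fuse_views(positions=[[1.0, 2.1], [1.2, 1.9], [0.9, 2.0]],
                   S=[1300, 900, 1100], P_traces=[0.4, 0.9, 0.5])
```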
To verify the effect of the method, the present embodiment builds an indoor test system based on 8 video nodes. Each video node has a resolution of 640 × 480 pixels and a frame rate of 25 fps, with a 15 m field of view, a 60-degree viewing angle, and a track duration of 13 seconds. As shown in Fig. 3, the measurement ratio m/n of the conventional compressed-sensing algorithm is fixed at 15% and cannot be adjusted to the image sparsity, whereas the adaptive compressed-sensing algorithm automatically reduces the measurement count according to the image sparsity, especially over frames 150-220 of the image sequence, and thus obtains a higher compression ratio. Because adaptive compressed sensing reduces the number of image samples, the quality of image recovery, measured by the root-mean-square error (RMSE) shown in Fig. 4, is slightly lower than that of the traditional compressed-sensing algorithm.

Nevertheless, the images produced by the adaptive CS compression algorithm still capture the state information of the target well; the multi-view target information tracked by each video node is fused at the central node, which yields a more accurate tracking result.

As shown in Fig. 5, where the X and Y axes are the horizontal and vertical coordinates, the trajectory of the multi-view fusion tracking is closer to the real trajectory than the single-view measurements. This shows that the proposed method can track the target effectively while reducing the communication load.
In summary, the adaptive compressed-sensing algorithm of the present invention, driven by the detection results, adjusts the measurement count according to the image sparsity, thereby obtaining the optimal sampling amount and compressing the image at a higher compression ratio, reducing the computation and communication data volume; combined with the multi-view image fusion based on tracking confidence and detection results, it achieves better tracking accuracy and a longer network usage time.
The above discloses only a specific embodiment of the present invention; the embodiment serves merely to explain the invention more clearly and does not limit it, and any changes that a person skilled in the art can conceive of shall fall within the scope of protection.

Claims (6)

1. An adaptive target compressed-sensing fusion tracking method based on a video sensor network, characterized by comprising the following steps:
S1: build a video sensor network test system including several video nodes with different viewing angles, and construct the frame-$t$ background $x_t^b$ with a Gaussian mixture model (GMM); the frame-$t$ video sequence $x_t$ obtained in testing is the sum of the frame-$t$ foreground image $x_t^f$ and the frame-$t$ background $x_t^b$, where $t = 0, 1, 2, \ldots$;

S2: according to the image sparsity $K_0$ of each of the several video nodes at the initial time, compute each node's initial measurement count $M_0$, apply compressed-sensing (CS) sampling to its initial image sequence, obtain each node's sampled initial image $y_0$, then perform S4, where $M_0 \approx K_0 \log(n/K_0)$ and $n$ is the signal dimension;

S3: using the expected image variance $E(\sigma_t)$ as a threshold, iterate from the frame-$t$ measurement count $M_t$ to obtain each node's frame-$(t+1)$ measurement count $M_{t+1}$, with $E(\sigma_t) = (N - M_t)\,\sigma_d^2$, where $\sigma_d^2$ is the variance of the target image $x_d$;

when $\|\hat{x}_d - x_d\|_2 \le \alpha E(\sigma_t)$, then $M_{t+1} = \beta_0 M_t$;

when $\|\hat{x}_d - x_d\|_2 > \alpha E(\sigma_t)$, then $M_{t+1} = \beta_1 M_t$;

where $\hat{x}_d$ is the recovered target image, $x_d$ the target image, $\|\hat{x}_d - x_d\|_2$ the norm-2 recovery error, $\beta_0 \le 1$ and $\beta_1 \ge 1$ the shrink and growth factors respectively, $\alpha$ a regulation parameter, and $N$ the image size obtained by the video node, i.e. the number of pixels;

then CS-sample the frame-$(t+1)$ image with the measurement count $M_{t+1}$, obtain each node's sampled frame-$t$ image $y_t$, then perform S4;

S4: recover the frame-$t$ video sequence from the sampled frame-$t$ image $y_t$, and recover the frame-$t$ target image.
2. The adaptive target compressed-sensing fusion tracking method based on a video sensor network according to claim 1, characterized in that S2 and S3 further comprise the following steps:

S51: after each video node recovers the frame-$t$ target image, detect the target tracking information with an unscented Kalman filter (UKF);

S52: send the target tracking information detected by the several video nodes to the central node for data fusion; the mapping between the frame-$t$ ground-plane coordinates $[u_t, v_t]$ and the frame-$t$ image-plane coordinates $[x_t, y_t]$ is

$$\lambda \begin{bmatrix} u_t \\ v_t \\ 1 \end{bmatrix} = H_j \begin{bmatrix} x_t \\ y_t \\ 1 \end{bmatrix},$$

where $H_j$ is the mapping matrix of the $j$-th pixel,

$$H_j = \begin{bmatrix} H_{11} & H_{12} & H_{13} \\ H_{21} & H_{22} & H_{23} \\ H_{31} & H_{32} & H_{33} \end{bmatrix},$$

which yields the ground-plane coordinates $[u_t, v_t]$ of each pixel:

$$u_t = \frac{H_{11}x_t + H_{12}y_t + H_{13}}{H_{31}x_t + H_{32}y_t + H_{33}}, \qquad v_t = \frac{H_{21}x_t + H_{22}y_t + H_{23}}{H_{31}x_t + H_{32}y_t + H_{33}},$$

with $\lambda$ a scale parameter;

S53: for each video node, obtain the weight of each pixel in the fused target tracking information from $G(u,v) = \sum_j w_j G_j(u,v)$ and $w_j = \eta_j / \sum_j \eta_j$, where $G_j(u,v)$ is the target position of the $j$-th pixel, $G(u,v)$ the fused target position, $\eta_j$ the confidence of the $j$-th pixel, and $w_j$ the fused weight of the $j$-th pixel based on the confidence $\eta_j$.
3. The adaptive target compressed-sensing fusion tracking method based on a video sensor network according to claim 1, characterized in that the sampling rate m/n is at least 25%, where m is the measurement-matrix dimension and n the signal dimension.
4. The adaptive target compressed-sensing fusion tracking method based on a video sensor network according to claim 1, characterized in that the image sparsity is $K_t = (\lambda_0 \log S_t + \lambda_1) S_t$, where $(\lambda_0, \lambda_1) \in \mathbb{R}^2$ are function model parameters and $S_t$ is the number of target pixels.
5. The adaptive target compressed-sensing fusion tracking method based on a video sensor network according to claim 1, characterized in that S4 further comprises recovering the frame-$t$ target image $\hat{x}_d$ from the sampled frame-$t$ image $y_t$ via $\hat{x}_d = \Delta(y_t - \phi_t x_t^b)$, where $\phi_t x_t^b$ is the sampled frame-$t$ background, $\Delta$ denotes the CS recovery process, and $\phi_t$ is the random measurement matrix built at time $t$.
6. The adaptive target compressed-sensing fusion tracking method based on a video sensor network according to claim 1, characterized in that the compressed-sensing CS sampling obtains the sampled image in the spatial domain by applying the random measurement matrix $\phi_t$ to the frame-$t$ video sequence $x_t$.
CN201410148440.1A 2014-04-14 2014-04-14 Adaptive targets compressed sensing fusion tracking method based on video sensor network Expired - Fee Related CN103903242B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410148440.1A CN103903242B (en) 2014-04-14 2014-04-14 Adaptive targets compressed sensing fusion tracking method based on video sensor network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410148440.1A CN103903242B (en) 2014-04-14 2014-04-14 Adaptive targets compressed sensing fusion tracking method based on video sensor network

Publications (2)

Publication Number Publication Date
CN103903242A CN103903242A (en) 2014-07-02
CN103903242B true CN103903242B (en) 2016-08-31

Family

ID=50994549

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410148440.1A Expired - Fee Related CN103903242B (en) 2014-04-14 2014-04-14 Adaptive targets compressed sensing fusion tracking method based on video sensor network

Country Status (1)

Country Link
CN (1) CN103903242B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104599290B (en) * 2015-01-19 2017-05-10 苏州经贸职业技术学院 Video sensing node-oriented target detection method
CN108171727B (en) * 2017-12-05 2023-04-07 温州大学 Sub-region-based self-adaptive random projection visual tracking method
CN111209943B (en) * 2019-12-30 2020-08-25 广州高企云信息科技有限公司 Data fusion method and device and server


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102915562A (en) * 2012-09-27 2013-02-06 天津大学 Compressed sensing-based multi-view target tracking and 3D target reconstruction system and method
CN103514600A (en) * 2013-09-13 2014-01-15 西北工业大学 Method for fast robustness tracking of infrared target based on sparse representation
CN103593833A (en) * 2013-10-25 2014-02-19 西安电子科技大学 Multi-focus image fusion method based on compressed sensing and energy rule
CN103632382A (en) * 2013-12-19 2014-03-12 中国矿业大学(北京) Compressive sensing-based real-time multi-scale target tracking method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Compressed Sensing for Real-Time Energy-Efficient ECG Compression on Wireless Body Sensor Nodes; Hossein Mamaghanian et al.; IEEE Transactions on Biomedical Engineering; Sept. 2011; vol. 58, no. 9; pp. 2456-2466 *
Compressive Sensing: From Theory to Applications, a Survey; Saad Qaisar; Journal of Communications and Networks; Aug. 2013; vol. 15, no. 5; pp. 443-456 *
Real-time visual tracking using compressive sensing; H. Li et al.; Computer Vision and Pattern Recognition (CVPR); June 2011; pp. 1305-1312 *

Also Published As

Publication number Publication date
CN103903242A (en) 2014-07-02

Similar Documents

Publication Publication Date Title
US10187617B2 (en) Automatic detection of moving object by using stereo vision technique
JP2018148367A (en) Image processing device, image processing system, image processing method, and program
CN105279772B (en) A kind of trackability method of discrimination of infrared sequence image
CN112597864B (en) Monitoring video anomaly detection method and device
US10853949B2 (en) Image processing device
CN104376577A (en) Multi-camera multi-target tracking algorithm based on particle filtering
CN106127741A (en) Non-reference picture quality appraisement method based on improvement natural scene statistical model
CN103903242B (en) Adaptive targets compressed sensing fusion tracking method based on video sensor network
CN103093243B (en) The panchromatic remote sensing image clouds of high-resolution sentences method
CN109657717A (en) A kind of heterologous image matching method based on multiple dimensioned close packed structure feature extraction
WO2022199360A1 (en) Moving object positioning method and apparatus, electronic device, and storage medium
CN116403294B (en) Transformer-based multi-view width learning living body detection method, medium and equipment
CN115375581A (en) Dynamic visual event stream noise reduction effect evaluation method based on event time-space synchronization
CN103065320A (en) Synthetic aperture radar (SAR) image change detection method based on constant false alarm threshold value
CN103886287A (en) Perspective-crossing gait recognition method based on 3D projection
CN114594476A (en) Pi/4 simple-polarization synthetic aperture radar building area extraction method
CN110276379A (en) A kind of the condition of a disaster information rapid extracting method based on video image analysis
CN103134476A (en) Sea and land boundary detection method based on level set algorithm
CN110365966B (en) Video quality evaluation method and device based on window
JP5230354B2 (en) POSITIONING DEVICE AND CHANGED BUILDING DETECTION DEVICE
CN115719464A (en) Water meter durability device water leakage monitoring method based on machine vision
Yang et al. Image analyses for video-based remote structure vibration monitoring system
US20170169576A1 (en) Crowd intelligence on flow velocity measurement
CN113591714A (en) Flood detection method based on satellite remote sensing image
CN114429515A (en) Point cloud map construction method, device and equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Huang Zhenqiang

Inventor before: Fang Wu

Inventor before: Feng Rongzhen

Inventor before: Song Zhiqiang

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20170613

Address after: 2nd floor, Ruichi Science and Technology Center, north gate of Gulin Park, No. 4 Dinghuaimen, Gulou District, Nanjing, Jiangsu Province, 210000

Patentee after: NANJING RUICHI DINGXIN TECHNOLOGY CO.,LTD.

Address before: No. 287 University Road, Suzhou City, Jiangsu Province, 215009

Patentee before: Suzhou Institute of Trade & Commerce

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20211201

Address after: 210000 Room 501, unit 3, No. 5, Dinghuaimen, Gulou District, Nanjing, Jiangsu Province

Patentee after: Huang Zhenqiang

Address before: 2nd floor, Ruichi Science and Technology Center, north gate of Gulin Park, No. 4 Dinghuaimen, Gulou District, Nanjing, Jiangsu Province, 210000

Patentee before: NANJING RUICHI DINGXIN TECHNOLOGY CO.,LTD.

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160831