CN110136175A - An indoor typical-scene matching and positioning method based on a neural network - Google Patents

An indoor typical-scene matching and positioning method based on a neural network

Info

Publication number
CN110136175A
CN110136175A (application number CN201910422946.XA)
Authority
CN
China
Prior art keywords
neural network
typical scene
deep neural
similarity
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910422946.XA
Other languages
Chinese (zh)
Inventor
郭春生
容培盛
应娜
陈华华
杨萌
章建武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Hangzhou Electronic Science and Technology University
Original Assignee
Hangzhou Electronic Science and Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Electronic Science and Technology University
Priority to CN201910422946.XA
Publication of CN110136175A
Legal status: Pending

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention proposes an indoor typical-scene matching and positioning method based on a neural network, comprising: Step 1: building a standard typical-scene positioning image library on the server side; Step 2: using a Siamese deep neural network model, trained on a large amount of data, so that the neural network learns a similarity metric from the data; Step 3: the deep neural network outputs feature vectors, which are used to compute the similarity to the standard typical-scene image library; the magnitude of the similarity indicates the degree of typical-scene matching and is used to assess model quality; Step 4: deploying the trained model on the server, feeding video data into the server, and using the trained deep neural network to compute similarities and determine the current position. The method offers high training efficiency, strong convergence, high modeling accuracy, and good matching performance, copes with complex environments, and can accurately and efficiently perform online indoor typical-scene matching and positioning.

Description

An indoor typical-scene matching and positioning method based on a neural network
Technical field
The present invention relates to the field of computer vision, and in particular to a neural-network-based indoor typical-scene matching and positioning method.
Background technique
With advances in science and technology and rising living standards, location-based services are receiving more and more attention. Outdoor positioning is now very mature, but in indoor environments, because of occlusion by interior walls, moving crowds, and similar factors, outdoor positioning systems such as GPS cannot position effectively. Existing indoor positioning methods based on iBeacon Bluetooth modules, Wi-Fi positioning techniques, and the like suffer from low intrinsic positioning accuracy and are easily affected by factors such as building occlusion, so they cannot accurately locate the user's current position.
Vision-based positioning, which has emerged recently, requires only simple equipment and is relatively insensitive to environmental factors, and has therefore attracted wide attention. Since cameras have become standard equipment on mobile phones, vision-based positioning needs no additional hardware; and because buildings change little once completed, vision-based positioning is affected by fewer disturbances. In the present system, an image captured by the user is matched against the images in a standard typical-scene database, yielding the location information of the camera. Clearly, the speed, accuracy, and robustness of image matching directly determine the speed, accuracy, and robustness of positioning.
In vision-based indoor positioning, image matching is the most important technical step. Traditional image matching techniques (such as histogram matching and the SIFT algorithm) can no longer meet today's requirements of large data volumes and complex environments. A large body of experiments has shown, however, that deep learning with deep neural network models achieves good results in image matching. A deep neural network model is trained on a large amount of data, with its parameters updated by the back-propagation algorithm; the trained network can take real-time camera images, compute the similarity between the image features and the standard typical-scene images, and return the position information associated with the matched typical-scene image. On this basis, the invention proposes a neural-network-based indoor typical-scene matching and positioning method.
Summary of the invention
The invention proposes a neural-network-based indoor typical-scene matching and positioning method, with advantages including high training efficiency, strong convergence, high modeling accuracy, good matching performance, and suitability for complex environments.
To achieve the above technical purpose, the invention adopts the following technical scheme:
First, a standard typical-scene positioning image library is built on the server side; every image in the library is annotated with position information. Second, a deep convolutional neural network model is trained whose function is to extract image features and compute the similarity between images. The network is trained on a large amount of data; after the loss function is computed, it is back-propagated and the network parameters are updated, improving detection accuracy. Third, the trained network is deployed on the server. A mobile camera captures real-time video data, which, after preprocessing, is fed into the network. The network outputs a feature vector, from which the similarities s1, …, sn between the input video frame and the n typical-scene images in the standard typical-scene image library are computed separately. When a similarity exceeds the set threshold, the input video frame is deemed to have matched a typical scene, and the position of the typical scene corresponding to the maximum of s1, …, sn is returned. Finally, by computing similarities in this way, indoor typical-scene matching and positioning is achieved.
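The matching step sketched above (compute s1, …, sn against the n library images, accept only when the best similarity exceeds the threshold, and return that scene's stored position) could look roughly like this in code. This is a minimal illustration, not the patent's implementation: the name `match_scene`, the 1/(1 + distance) similarity mapping, and the dictionary layout of the library are all assumptions.

```python
import math

def euclidean(u, v):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def match_scene(query_vec, library, threshold=0.5):
    """Compare a query feature vector against the n library images and
    return (scene_id, position, similarity) for the best match, or None
    when no similarity exceeds the threshold."""
    best = None
    for scene_id, (feat, position) in library.items():
        # Map distance to a similarity: closer vectors score higher.
        sim = 1.0 / (1.0 + euclidean(query_vec, feat))
        if best is None or sim > best[2]:
            best = (scene_id, position, sim)
    if best is not None and best[2] > threshold:
        return best
    return None
```

For example, a query vector near the library entry for a lobby scene would return that scene's stored position, while a vector far from every entry would return None.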
Compared with existing indoor typical-scene matching and positioning methods, the beneficial effects of the present invention are:
1) Deep learning is used: a deep neural network model is built and trained on a large amount of data, improving detection accuracy and detection efficiency.
2) A mobile camera is used in combination with the deep neural network, enabling real-time scene matching; this effectively remedies the shortcomings and gaps of existing indoor positioning and realizes indoor typical-scene matching.
3) Deep learning meets the demands of complex environments better than traditional image matching techniques (such as histogram matching and the SIFT algorithm). Once the equipment is deployed, it can work efficiently for a long time, completing indoor typical-scene matching and returning the current position, providing a novel solution for indoor positioning.
Detailed description of the invention
Fig. 1 is a schematic diagram of the device structure used by the method of the invention.
Fig. 2 is a schematic diagram of the deep neural network model of the invention.
Fig. 3 is a flow diagram of video data acquisition in the invention.
Fig. 4 is a flow diagram of the typical-scene arbiter in the invention.
Specific embodiment
The present invention is described in further detail below with reference to the accompanying drawings and a specific embodiment:
As shown in Fig. 1, the method of this embodiment uses a video data acquisition device, a computer/server, and a typical-scene arbiter; the server is connected to the video acquisition device. As shown in Fig. 3, the video data acquisition device comprises a mobile camera, a video frame formatting module, and an image preprocessing module. The mobile camera captures video data; the video frame formatting module converts the real-time video data into formatted video frames f(x, t), where t denotes time and f(·) denotes the video-data formatting function; the image preprocessing module decides, from the video images collected by the acquisition device, whether the captured images need preprocessing. Other functions are the same as in existing video capture techniques and are not repeated here.
The neural-network-based indoor typical-scene matching and positioning method of the present invention comprises the following steps:
Step 1: build a standard typical-scene positioning image library on the server side, every image in the library being annotated with position information, and produce a large typical-scene data set for training the deep neural network;
To obtain high detection accuracy, the deep neural network must be trained on a large amount of data; such training yields feature vectors for the data and their similarities to the standard images. Therefore, before building the deep neural network, a positioning image library and a complete training data set must be prepared, used respectively for scene matching and for training the deep neural network.
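The training data set prepared in Step 1 must supply the network with labeled picture pairs. A minimal sketch of building (X1, X2, Y) pairs under the document's label convention (Y = 0 for a same-scene pair, Y = 1 otherwise); the function name `make_training_pairs` and the list-of-tuples layout are illustrative assumptions, not from the patent:

```python
from itertools import combinations

def make_training_pairs(labeled_images):
    """Build (X1, X2, Y) training pairs from a labeled image set, using
    the document's convention: Y = 0 when both images show the same
    typical scene, Y = 1 otherwise.
    `labeled_images` is a list of (image, scene_label) tuples."""
    pairs = []
    for (img_a, scene_a), (img_b, scene_b) in combinations(labeled_images, 2):
        y = 0 if scene_a == scene_b else 1
        pairs.append((img_a, img_b, y))
    return pairs
```

In practice the "images" would be pixel arrays rather than identifiers; the pairing logic is the same either way.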
Step 2: use a Siamese deep neural network model, trained on a large amount of data, so that the neural network learns a similarity metric from the data;
The present invention adopts a Siamese deep neural network model, which is widely used in computer vision and performs excellently; its function is to measure the similarity between input data.
Its network structure is shown in Fig. 2:
The characteristic of a Siamese network is that the two branches are identical network structures sharing the same weights W. The input is a pair of pictures (X1, X2, Y), where Y = 0 indicates that X1 and X2 belong to the same class and Y = 1 indicates that they do not. The network outputs low-dimensional results GW(X1) and GW(X2), obtained by mapping X1 and X2 through the network; the two outputs are then compared with the function EW(X1, X2).
The loss function of the network is defined as:
$$L\big(W, (Y, X_1, X_2)^i\big) = (1 - Y)\,\tfrac{1}{2}\,D_W^2 + Y\,\tfrac{1}{2}\,\big\{\max(0,\, m - D_W)\big\}^2$$
where (Y, X1, X2)^i is the i-th sample, consisting of a pair of pictures and a label Y; W is the weights of the network; m is the set threshold (margin); and D_W is the Euclidean distance between the feature vectors output by the network in the low-dimensional space. The contrastive loss function drives similar samples closer together and dissimilar samples apart. With the Euclidean distance, the similarity of two pictures can be judged: the smaller the Euclidean distance, the more similar the samples; the larger the Euclidean distance, the less similar. During training, a pair of images and a label are fed into the neural network, which maps the images into a new space, forming feature vectors. Pairs of training data are input into the network, the loss between the output and the training labels is computed, and the network parameters are updated by the back-propagation algorithm. In this way the neural network learns a similarity metric from the data and uses the learned metric to compare and match new samples of unknown classes.
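The per-pair contrastive loss described above can be sketched as a scalar function. This uses the document's label convention (Y = 0 for a same-class pair) and the margin m; the function name and scalar interface are illustrative, and a real training loop would apply this over batches of embedding pairs:

```python
def contrastive_loss(d_w, y, margin=1.0):
    """Per-pair contrastive loss (Hadsell et al. formulation), with this
    document's labels: y == 0 for a same-class pair, y == 1 otherwise.
    d_w is the Euclidean distance between the two embeddings
    G_W(X1) and G_W(X2); margin is the threshold m."""
    if y == 0:
        # Similar pair: penalize any separation, pulling embeddings together.
        return 0.5 * d_w ** 2
    # Dissimilar pair: penalize only distances that fall inside the margin.
    return 0.5 * max(0.0, margin - d_w) ** 2
```

Note that a dissimilar pair already separated by more than the margin contributes zero loss, which is what lets the network stop pushing well-separated classes further apart.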
Step 3: the deep neural network outputs feature vectors, which are used to compute the similarity to the standard typical-scene image library; the magnitude of the similarity indicates the degree of typical-scene matching and is used to assess model quality;
From the feature vectors output by the trained neural network, the similarities s1, …, sn between the camera's input video frame and the n images in the standard typical-scene image library are computed. When a similarity exceeds the set threshold, the camera's input video frame is deemed to have matched a typical scene; the position of the typical scene corresponding to the maximum of s1, …, sn is returned as the current position and compared with the true position information to verify the quality of the model. If the model is poor, the network parameters are modified and the network is retrained.
Step 4: deploy the trained model that has reached the required accuracy on the server; acquire video data with the video data acquisition device and feed it into the trained deep neural network on the server for computation; from the feature vectors extracted by the network, compute the similarities to the n standard typical-scene images and determine the current position.
The trained deep neural network model is deployed on the server, and real-time video data of the current location is obtained by the video data acquisition device. As shown in Fig. 4, the preprocessed video data is input into the deep neural network set up on the server; from the feature vectors extracted by the network, the similarities s to the n standard typical-scene images are computed. After normalization, s ranges from 0 to 1; the typical-scene arbiter then determines the current position. By the above method, indoor typical-scene matching and positioning is realized.
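The final normalization and arbitration step could be sketched as follows. The exponential mapping from Euclidean distance to a similarity in [0, 1] is only one plausible choice (the patent states the range of s but not the formula), and `scene_arbiter` is an illustrative name for the typical-scene arbiter:

```python
import math

def normalize_similarity(distance):
    # One plausible normalization into [0, 1] (an assumption; the patent
    # states the range but not the formula): exponential decay of the
    # Euclidean distance between feature vectors.
    return math.exp(-distance)

def scene_arbiter(similarities, positions, threshold):
    """Return the position of the most similar typical scene, or None
    when no similarity clears the threshold."""
    s_max = max(similarities)
    if s_max <= threshold:
        return None
    return positions[similarities.index(s_max)]
```

The arbiter simply picks the argmax of s1, …, sn and reports its stored position, refusing to answer when even the best match is below the threshold.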

Claims (7)

1. A neural-network-based indoor typical-scene matching and positioning method, characterized by comprising the following steps:
Step 1: build a standard typical-scene positioning image library on the server side, the images in the positioning image library being annotated with the corresponding position information, and produce a typical-scene training data set for training the deep neural network;
Step 2: use a Siamese deep neural network model, trained on a large amount of data, so that the deep neural network learns a similarity metric from the data;
Step 3: the deep neural network outputs feature vectors, which are used to compute the similarity to the standard typical-scene image library; the magnitude of the similarity indicates the degree of typical-scene matching and is used to assess model quality;
Step 4: deploy the trained Siamese deep neural network model that has reached the required accuracy on the server, acquire video data with a video data acquisition device, feed it into the trained deep neural network on the server for computation, compute the similarities to the n standard typical-scene images from the feature vectors extracted by the deep neural network, and determine the current position.
2. The neural-network-based indoor typical-scene matching and positioning method as claimed in claim 1, characterized in that, in step 2, during training on a large amount of data, a pair of images and a label are fed into the neural network, and the deep neural network maps the images into a new space, forming feature vectors; pairs of training data are input into the neural network, the loss between the output and the training-data labels is computed, and the network parameters are updated by the back-propagation algorithm, so that the deep neural network learns a similarity metric from the data.
3. The neural-network-based indoor typical-scene matching and positioning method as claimed in claim 1, characterized in that the video data acquisition device comprises a mobile camera, a video frame formatting module, and an image preprocessing module.
4. The neural-network-based indoor typical-scene matching and positioning method as claimed in claim 3, characterized in that step 3 is specifically:
from the feature vectors output by the trained deep neural network, the similarities s1, …, sn between the video frame input by the mobile camera and the n images in the standard typical-scene image library are computed; when a similarity exceeds the set threshold, the video frame input by the mobile camera is deemed to have matched a typical scene; the position of the typical scene corresponding to the maximum of s1, …, sn is returned as the current position and compared with the true position information to verify the quality of the model; if the model is poor, the parameters of the deep neural network are modified and it is retrained.
5. The neural-network-based indoor typical-scene matching and positioning method as claimed in claim 4, characterized in that the video frame input by the mobile camera is obtained by the video frame formatting module, which converts the real-time video data into formatted video frames f(x, t), where t denotes time and f(·) denotes the video-data formatting function.
6. The neural-network-based indoor typical-scene matching and positioning method as claimed in claim 5, characterized in that, in step 4, the server is connected to the video acquisition device, and the video acquisition device transmits the formatted video frames f(x, t) to the server for processing:
the video frames f(x, t) are input into the deep neural network on the server, which computes the similarity s between the video frame and the standard typical-scene images; after normalization, s ranges from 0 to 1; the typical-scene arbiter returns the current position, realizing indoor typical-scene matching and positioning.
7. The neural-network-based indoor typical-scene matching and positioning method as claimed in claim 3, characterized in that the image preprocessing module decides, from the video images collected by the video capture device, whether the captured images need preprocessing.
CN201910422946.XA 2019-05-21 2019-05-21 An indoor typical-scene matching and positioning method based on a neural network Pending CN110136175A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910422946.XA CN110136175A (en) 2019-05-21 2019-05-21 An indoor typical-scene matching and positioning method based on a neural network

Publications (1)

Publication Number Publication Date
CN110136175A true CN110136175A (en) 2019-08-16

Family

ID=67571826

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910422946.XA Pending CN110136175A (en) 2019-05-21 2019-05-21 An indoor typical-scene matching and positioning method based on a neural network

Country Status (1)

Country Link
CN (1) CN110136175A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110704712A (en) * 2019-09-20 2020-01-17 武汉大学 Scene picture shooting position range identification method and system based on image retrieval
CN111050294A (en) * 2020-02-24 2020-04-21 张早 Indoor positioning system and method based on deep neural network
CN111127532A (en) * 2019-12-31 2020-05-08 成都信息工程大学 Medical image deformation registration method and system based on deep learning characteristic optical flow
CN111563564A (en) * 2020-07-20 2020-08-21 南京理工大学智能计算成像研究院有限公司 Speckle image pixel-by-pixel matching method based on deep learning
CN111738993A (en) * 2020-06-05 2020-10-02 吉林大学 G-W distance-based ant colony graph matching method
CN112446799A (en) * 2019-09-03 2021-03-05 全球能源互联网研究院有限公司 Power grid scheduling method and system based on AR device virtual interaction
CN112985419A (en) * 2021-05-12 2021-06-18 中航信移动科技有限公司 Indoor navigation method and device, computer equipment and storage medium
CN113256541A (en) * 2021-07-16 2021-08-13 四川泓宝润业工程技术有限公司 Method for removing water mist from drilling platform monitoring picture by machine learning
CN113554754A (en) * 2021-07-30 2021-10-26 中国电子科技集团公司第五十四研究所 Indoor positioning method based on computer vision
CN113984055A (en) * 2021-09-24 2022-01-28 北京奕斯伟计算技术有限公司 Indoor navigation positioning method and related device
CN114510044A (en) * 2022-01-25 2022-05-17 北京圣威特科技有限公司 AGV navigation ship navigation method and device, electronic equipment and storage medium
CN115201833A (en) * 2021-04-08 2022-10-18 中强光电股份有限公司 Object positioning method and object positioning system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106683195A (en) * 2016-12-30 2017-05-17 上海网罗电子科技有限公司 AR scene rendering method based on indoor location
CN107131883A (en) * 2017-04-26 2017-09-05 中山大学 Fully automatic vision-based indoor positioning system for mobile terminals
US9854206B1 (en) * 2016-12-22 2017-12-26 TCL Research America Inc. Privacy-aware indoor drone exploration and communication framework
CN108805149A (en) * 2017-05-05 2018-11-13 中兴通讯股份有限公司 Loop-closure detection method and device for visual simultaneous localization and mapping
CN109506658A (en) * 2018-12-26 2019-03-22 广州市申迪计算机系统有限公司 Robot autonomous localization method and system
CN109671119A (en) * 2018-11-07 2019-04-23 中国科学院光电研究院 Indoor positioning method and device based on SLAM

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MENGYUN LIU ET AL.: "Unsupervised Visual Representation Learning for Indoor Scenes with a Siamese ConvNet and Graph Constraints", Preprints.org *
R. HADSELL ET AL.: "Dimensionality Reduction by Learning an Invariant Mapping", 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112446799A (en) * 2019-09-03 2021-03-05 全球能源互联网研究院有限公司 Power grid scheduling method and system based on AR device virtual interaction
CN112446799B (en) * 2019-09-03 2024-03-19 全球能源互联网研究院有限公司 Power grid dispatching method and system based on AR equipment virtual interaction
CN110704712A (en) * 2019-09-20 2020-01-17 武汉大学 Scene picture shooting position range identification method and system based on image retrieval
CN111127532A (en) * 2019-12-31 2020-05-08 成都信息工程大学 Medical image deformation registration method and system based on deep learning characteristic optical flow
CN111050294A (en) * 2020-02-24 2020-04-21 张早 Indoor positioning system and method based on deep neural network
CN111738993A (en) * 2020-06-05 2020-10-02 吉林大学 G-W distance-based ant colony graph matching method
CN111563564A (en) * 2020-07-20 2020-08-21 南京理工大学智能计算成像研究院有限公司 Speckle image pixel-by-pixel matching method based on deep learning
CN115201833A (en) * 2021-04-08 2022-10-18 中强光电股份有限公司 Object positioning method and object positioning system
CN112985419A (en) * 2021-05-12 2021-06-18 中航信移动科技有限公司 Indoor navigation method and device, computer equipment and storage medium
CN112985419B (en) * 2021-05-12 2021-10-01 中航信移动科技有限公司 Indoor navigation method and device, computer equipment and storage medium
CN113256541A (en) * 2021-07-16 2021-08-13 四川泓宝润业工程技术有限公司 Method for removing water mist from drilling platform monitoring picture by machine learning
CN113554754A (en) * 2021-07-30 2021-10-26 中国电子科技集团公司第五十四研究所 Indoor positioning method based on computer vision
CN113984055A (en) * 2021-09-24 2022-01-28 北京奕斯伟计算技术有限公司 Indoor navigation positioning method and related device
CN114510044A (en) * 2022-01-25 2022-05-17 北京圣威特科技有限公司 AGV navigation ship navigation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190816)