CN106709409A - Novel robot indoor positioning technology - Google Patents
Info
- Publication number
- CN106709409A (application CN201510788890.1A)
- Authority
- CN
- China
- Prior art keywords
- feature
- scene
- gist
- indoor positioning
- saliency
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/35—Categorising the entire scene, e.g. birthday party or wedding scene
- G06V20/36—Indoor scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
Abstract
The invention relates to a vision-based positioning technique for indoor robots. The technique uses a scene recognition algorithm for mobile robots based on the visual environmental context. The system captures the gist (Gist) feature of a scene image and reduces its dimensionality, obtaining early visual features at different scales that distinguish different indoor scenes. Specifically, the algorithm extracts the gist feature on the basis of visual saliency; after dimensionality reduction, a three-layer neural network trained with the back-propagation algorithm identifies the scene and obtains good results. Compared with prior work, the technique has higher biological plausibility, achieves the same positioning accuracy at lower computational cost, and can run on the robot in real time.
Description
Technical field:
The present invention relates to robot indoor positioning technologies, and more particularly to an indoor scene recognition technique based on the principle of visual saliency.
Background technology:
A new generation of mobile robots must solve problems such as self-localization, map building, and autonomous navigation, and the key to solving them is obtaining the robot's current position accurately and robustly. Recognizing the current scene by analyzing and processing the rich scene information in visual data is an important method for robot indoor positioning. Current vision-based scene recognition methods fall mainly into the following categories:
Object-based scene recognition uses certain landmark objects as the basis for recognition. The drawback of this approach is its susceptibility to scene noise and illumination changes, and how to select permanently reliable landmark objects within a scene remains a recognized open problem.
Region-based scene recognition first divides the visual scene into regions and uses the structural relations between regions as landmarks to recognize different scenes. As with the previous method, how to segment regions and features that are both robust and well associated is a problem this approach urgently needs to solve.
Context-based scene recognition treats the input image as a whole, extracts statistical and semantic image information, and condenses it into a low-dimensional feature. This method is robust to image noise and small objects, but sometimes confuses scenes whose semantic information is close.
Summary of the invention:
In view of the above problems, and inspired by the biological fact that humans can quickly locate attractive regions in a scene, the present invention provides an indoor scene recognition method based on biological visual saliency, which finds the salient regions in an image and combines them with the gist information of the scene. A salient region is a region significantly different from its surroundings, while the scene gist contains the statistical information of the whole scene image. By associating these two key features, both of which are processed by the human visual cortex, the invention obtains a scene feature that is biologically plausible and very cheap to compute, and achieves high recognition accuracy.
The main implementation steps of the invention are as follows:
1. Low-level saliency feature extraction
A saliency model extracts three channels from the input image: texture, color, and intensity. The texture channel contains four orientation subchannels, and the color channel contains two subchannels, RG and BY. For each subchannel a nine-level Gaussian pyramid is built at scales from 1:1 to 1:256, and differences between pyramid levels are computed to extract features at different scales.
2. Gist feature extraction
After the low-level features of the image are extracted, each feature map at every scale is divided into a 4*4 grid of 16 cells, and the 16 cell averages are taken as the gist feature vector.
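The 4*4 grid averaging can be sketched as follows (numpy sketch; the function names are our own):

```python
import numpy as np

def gist_vector(feature_map, grid=4):
    # Split the map into a grid x grid set of cells and average each cell,
    # yielding a 16-dimensional gist descriptor per feature map.
    h, w = feature_map.shape
    ys = np.linspace(0, h, grid + 1).astype(int)
    xs = np.linspace(0, w, grid + 1).astype(int)
    vec = [feature_map[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
           for i in range(grid) for j in range(grid)]
    return np.array(vec)

def scene_gist(feature_maps):
    # Concatenate the 16-value gist of every feature map; with the 34 maps
    # described in the text this gives a 544-dimensional scene descriptor.
    return np.concatenate([gist_vector(m) for m in feature_maps])
```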
3. PCA/ICA dimensionality reduction
The gist feature extracted from each image is a 544-dimensional vector, coming from 34 feature maps with 16 regions each. Using principal component analysis (PCA) and independent component analysis (ICA), we reduce its dimensionality to 80 while retaining 97% of the information.
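A minimal PCA sketch of this reduction via SVD (the patent also mentions ICA, which is omitted here; "retained information" is computed below as explained variance, our assumption about what the 97% figure means):

```python
import numpy as np

def pca_reduce(X, n_components=80):
    # X: (n_samples, 544) matrix of gist vectors. Center the data, take the
    # top right-singular vectors as principal components, and project.
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]                       # (n_components, 544)
    explained = (S[:n_components] ** 2).sum() / (S ** 2).sum()
    return Xc @ components.T, components, mean, explained
```

A new image's gist vector `g` is then projected with `(g - mean) @ components.T` before classification.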
4. Scene classification
Using the scene features after dimensionality reduction, we build a three-layer neural network with 200 and 100 nodes respectively and train it with the back-propagation algorithm. The final classification accuracy over nine scenes is 84.21%~86.45%, and the average recognition time is below 10 ms.
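A sketch of the classifier and its back-propagation training, assuming "200 and 100 nodes" means two hidden layers of those sizes; the ReLU activations, softmax cross-entropy loss, and learning rate are our choices for illustration, not stated in the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes=(80, 200, 100, 9)):
    # He-initialized weight matrices and zero biases, one pair per layer.
    return [(rng.normal(0, np.sqrt(2 / a), (a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, X):
    # Returns all layer activations; last entry is the softmax output.
    acts = [X]
    for i, (W, b) in enumerate(params):
        z = acts[-1] @ W + b
        if i < len(params) - 1:
            acts.append(np.maximum(z, 0))                       # ReLU hidden
        else:
            e = np.exp(z - z.max(axis=1, keepdims=True))
            acts.append(e / e.sum(axis=1, keepdims=True))       # softmax out
    return acts

def backprop_step(params, X, y, lr=0.1):
    # One full-batch gradient step on cross-entropy loss; y is one-hot.
    acts = forward(params, X)
    delta = (acts[-1] - y) / len(X)
    for i in reversed(range(len(params))):
        W, b = params[i]
        gW = acts[i].T @ delta
        gb = delta.sum(axis=0)
        if i > 0:
            delta = (delta @ W.T) * (acts[i] > 0)   # ReLU derivative
        params[i] = (W - lr * gW, b - lr * gb)
    return params
```

At inference time, the predicted scene is `argmax` over the nine softmax outputs, which is cheap enough to run well under the 10 ms budget reported above.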
Claims (3)
1. A novel indoor positioning technique, characterized in that scene gist (Gist) features are extracted using a visual saliency (Saliency) model, wherein
the visual saliency model is used to identify the objects most likely to attract visual attention;
the scene gist feature is a vector extracted to describe the global characteristics of the scene.
2. The indoor positioning technique according to claim 1, wherein the visual saliency (Saliency) model specifically comprises:
a channel separation unit for separating the RGB, intensity, and orientation channels of the image;
a scale transform unit for capturing representations of the channel signals at different scales;
a saliency feature extraction unit for detecting regions clearly different from their surroundings;
a feature fusion unit for linearly fusing the features extracted from the different channels in the previous step.
3. The indoor positioning technique according to claim 1, wherein the scene gist (Gist) features specifically comprise:
an intensity gist feature for capturing the global representation of the image in gray space;
an orientation gist feature for capturing the global representation of the image in texture space;
a color gist feature for capturing the global representation of the image in specific color spaces.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510788890.1A CN106709409A (en) | 2015-11-17 | 2015-11-17 | Novel robot indoor positioning technology |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510788890.1A CN106709409A (en) | 2015-11-17 | 2015-11-17 | Novel robot indoor positioning technology |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106709409A (en) | 2017-05-24 |
Family
ID=58932098
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510788890.1A Pending CN106709409A (en) | 2015-11-17 | 2015-11-17 | Novel robot indoor positioning technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106709409A (en) |
- 2015-11-17: CN application CN201510788890.1A filed (publication CN106709409A), status Pending
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107609565A (en) * | 2017-09-21 | 2018-01-19 | 哈尔滨工业大学 | A kind of indoor vision positioning method based on image overall feature principal component linear regression |
CN107609565B (en) * | 2017-09-21 | 2020-08-11 | 哈尔滨工业大学 | Indoor visual positioning method based on image global feature principal component linear regression |
CN109484330A (en) * | 2018-11-27 | 2019-03-19 | 合肥工业大学 | New hand driver's driving efficiency secondary lift system based on Logistic model |
CN109839111A (en) * | 2019-01-10 | 2019-06-04 | 王昕� | A kind of indoor multi-robot formation system of view-based access control model positioning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
WD01 | Invention patent application deemed withdrawn after publication | | Application publication date: 20170524 |
Application publication date: 20170524 |