CN108921892A - Indoor scene recognition method based on lidar ranging information - Google Patents

Indoor scene recognition method based on lidar ranging information

Info

Publication number
CN108921892A
CN108921892A
Authority
CN
China
Prior art keywords
sample
training
test sample
training sample
radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810725992.2A
Other languages
Chinese (zh)
Inventor
黄学艺
刘华平
宋彦
袁胜
赵江海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Sino-Science Automation System Co Ltd
Original Assignee
Hefei Sino-Science Automation System Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Sino-Science Automation System Co Ltd
Priority to CN201810725992.2A
Publication of CN108921892A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 - Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06 - Systems determining position data of a target
    • G01S 17/08 - Systems determining position data of a target for measuring distance only
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/001 - Texturing; Colouring; Generation of texture or colour

Abstract

The invention discloses an indoor scene recognition method based on lidar ranging information. A lidar is installed on a mobile robot; lidar ranging information is collected in real time while the robot travels through an indoor environment, and the type of scene the robot currently occupies is determined from the collected data. The method can solve the indoor-environment recognition problem of most mobile robots in common living scenes, and because lidar performs high-precision, high-density range scans, it improves the robustness and accuracy of indoor scene recognition.

Description

Indoor scene recognition method based on lidar ranging information
Technical field
The present invention relates to an indoor scene recognition method, and in particular to an indoor scene recognition method based on lidar ranging information.
Background technique
The ability to recognize indoor scenes strongly influences activities such as localization, navigation, and path planning carried out by indoor mobile robots during daily operation. A lidar can perform high-precision, 360° omnidirectional range scans of an indoor space, so it reflects the geometric features of the surrounding indoor environment accurately; it is also easy to operate, which has made it a current research focus.
Among existing technical literature, the invention patent "Indoor and outdoor scene recognition method and system" (publication No. CN104457751A) uses data about the local environment collected by multiple sensors carried on a mobile terminal: a relevance index is assigned to each sensor's data, the probability that the terminal is currently indoors or outdoors is determined from the indices and the data, and a weighted sum of the probabilities for the acquired indices completes indoor/outdoor recognition. That method can only identify whether the environment is indoors or outdoors; it cannot further judge the type of indoor environment, it is affected by many factors and has a large error rate, and it is not suitable for robot systems.
Summary of the invention
The purpose of the present invention is to overcome the shortcomings of the traditional techniques and to propose a fast and effective indoor scene localization method that exploits the high precision, high scan density, and ease of operation of lidar, realizing mobile-robot indoor scene recognition on the basis of lidar ranging information.
To achieve the goals above, present invention employs following technical solutions:
The indoor scene recognition method based on lidar ranging information proposed by the present invention specifically includes the following steps:
(1) A lidar with a 360° scanning range is installed on the mobile robot;
(2) In different types of indoor scene, an operator manually controls the mobile robot so that it travels without collisions while radar data are collected as training samples. If the number of training samples is N, the training sample data set Str is obtained, with the expression:
Str = {Str1, Str2, …, StrN}
where Str1, Str2, …, StrN denote the first, second, …, N-th training sample in the training sample data set Str;
(3) Following the method of step (2), radar data are collected as test samples. If the number of test samples is M, the test sample data set Ste is obtained, with the expression:
Ste = {Ste1, Ste2, …, SteM}
where Ste1, Ste2, …, SteM denote the first, second, …, M-th test sample in the test sample data set Ste; N and M are respectively the numbers of training and test samples, with M ≤ N;
(4) Feature extraction is performed on the samples of the radar ranging training sample data set Str, yielding a new training sample data set Str';
(5) Different labels are assigned to the samples in the new training sample data set Str' according to the type of room they come from, and a training label matrix T corresponding to the training data matrix is generated;
(6) Feature extraction is performed on the samples of the radar ranging test sample data set Ste, yielding a new test sample data set Ste';
(7) Referring to step (5), labels are assigned to the samples in the new test sample data set Ste', generating a test label matrix T' corresponding to the test data matrix;
(8) The training sample data matrix Str and the corresponding training label matrix T are used to train a model with an extreme learning machine based on local receptive fields, and the trained model is then applied to the test sample data matrix Ste to obtain the classification results.
Preferably, the concrete processing procedure of step (4) is as follows:
(4-1) Let any training sample in the training sample data set Str be SI, 1 ≤ I ≤ N. SI is a one-dimensional feature vector composed of the radar data obtained in one full scan, i.e. SI = [SI.1, SI.2, …, SI.l], where SI.1, SI.2, …, SI.l denote the radar data of the l sampled points in a single scan; this group of radar data is converted into a polar coordinate image;
(4-2) The polar coordinate image obtained in step (4-1) is drawn in a rectangular coordinate system, with the pole of the polar image as the centre of the rectangular image, and the region inside the contour is colour-filled;
(4-3) The image obtained in step (4-2) is converted to grayscale, yielding a single-channel grayscale image;
(4-4) The grayscale images obtained in step (4-3) are used as the new input samples, finally yielding the new training sample data set Str':
Str' = {Str1', Str2', …, Strk', …, StrN'}
where Str1', Str2', …, Strk', …, StrN' denote the first, second, …, k-th, …, N-th training sample in the training set Str', and N is the number of training samples.
Preferably, the concrete processing procedure of step (6) is as follows:
(6-1) Let any test sample in the test sample data set Ste be SJ, 1 ≤ J ≤ M. SJ is a one-dimensional feature vector composed of the radar data obtained in one full scan, i.e. SJ = [SJ.1, SJ.2, …, SJ.l], where SJ.1, SJ.2, …, SJ.l denote the radar data of the l sampled points in a single scan; this group of radar data is converted into a polar coordinate image;
(6-2) The polar coordinate image obtained in step (6-1) is drawn in a rectangular coordinate system, with the pole of the polar image as the centre of the rectangular image, and the region inside the contour is colour-filled;
(6-3) The image obtained in step (6-2) is converted to grayscale, yielding a single-channel grayscale image;
(6-4) The grayscale images obtained in step (6-3) are used as the new input samples, finally yielding the new test sample data set Ste':
Ste' = {Ste1', Ste2', …, Stek', …, SteM'}
where Ste1', Ste2', …, Stek', …, SteM' denote the first, second, …, k-th, …, M-th test sample in the new test sample data set Ste', and M is the number of test samples.
Beneficial effects of the present invention:
The mobile-robot indoor scene recognition method based on lidar ranging information proposed by the present invention has the following advantages:
1. The method can be used in most common indoor living scenes, such as home and work environments; it has wide applicability and strong practicality;
2. The method performs range scanning with a lidar, giving high precision, good real-time performance, and easy operation;
3. Through a series of feature-extraction steps the method converts the raw lidar ranging information into a ring projection vector, achieving invariance to deformation and scaling and improving the accuracy of scene recognition;
4. The method uses an extreme learning machine based on local receptive fields as the classifier, which greatly simplifies computation and offers good robustness.
Brief description of the drawings
Fig. 1 is a flow chart of the mobile-robot indoor scene recognition method based on lidar ranging information of the present invention.
Fig. 2 shows the sampling setup of the mobile-robot indoor scene recognition method based on lidar ranging information of the present invention.
Specific embodiment
To deepen understanding of the present invention, it is described in further detail below with reference to the drawings and an embodiment. The embodiment is for explanation only and does not limit the protection scope of the present invention.
In the indoor scene recognition method based on lidar ranging information proposed by the present invention, the mobile robot body is connected to a computer through a USB-to-serial adapter, and the collected lidar data are stored on the computer in real time; the computer then uses the collected lidar ranging information to perform lidar-based indoor scene recognition. The specific embodiment is described in detail as follows.
(1) An RPLIDAR-A2 lidar is installed on the mobile robot platform. The RPLIDAR-A2 performs 360° omnidirectional scanning and ranging of the surrounding environment, with a measurement radius of 8 m and a sampling frequency of up to 4000 samples per second; each full scan yields the range data of 400 sampled points;
(2) In different types of indoor scene, an operator manually controls the mobile robot so that it travels without collisions while ranging data are collected as training samples. If the number of training samples is N, the training sample data set Str is obtained, with the expressions:
Str = {Str1, Str2, …, StrN}
Strk = {strk.1, strk.2, …, strk.400}
where Str1, Str2, …, StrN denote the first, second, …, N-th training sample in the training sample data set Str; Strk denotes the k-th training sample in the training sample data set; and strk.1, strk.2, …, strk.400 denote the range data of the first, second, …, 400th sampled point contained in the k-th training sample;
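As an illustration of how the training set of step (2) and the label matrix T of step (5) might be organized in code, the sketch below stacks the scans into an N × 400 sample matrix and one-hot encodes the scene labels. The function name, the toy data, and the one-hot convention are assumptions for illustration, not specified in the patent:

```python
import numpy as np

def build_dataset(scans, labels, num_classes):
    """Stack lidar scans into a sample matrix and build a one-hot label matrix.

    scans  : list of N scans, each a length-400 array of ranges (one full revolution)
    labels : list of N integer scene labels (e.g. 0 = kitchen, 1 = corridor, ...)
    """
    S = np.asarray(scans, dtype=np.float64)           # shape (N, 400)
    T = np.zeros((len(labels), num_classes))          # one label row per sample
    T[np.arange(len(labels)), labels] = 1.0           # one-hot encoding
    return S, T

# toy example: 3 scans of 400 points each, 2 scene classes
rng = np.random.default_rng(0)
scans = [rng.uniform(0.1, 8.0, 400) for _ in range(3)]   # ranges within the 8 m radius
S_tr, T = build_dataset(scans, [0, 1, 0], num_classes=2)
print(S_tr.shape, T.shape)   # (3, 400) (3, 2)
```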
(3) with reference to the method for (2), the radar information as test sample is acquired, if the number of test sample is M, then To ultrasonic tesint sample data set SteExpression formula be:
Ste={ Ste1,Ste2,Λ,SteM}
Stek={ stek.1,stek.2,L,stek.400}
Wherein Ste1,Ste2,Λ,SteMRespectively indicate test sample data set SteIn first test sample, second test Sample ... m-th test sample, StekIndicate k-th of test sample s in test sample data settek.1,stek.2,L,stek.400Point The range data of first sampled point included in k-th of test sample, the distance number of second sampled point are not represented According to ..., the range data of the 400th sampled point, N and M are respectively the number of training sample and the number of test sample, and N= 4M;
(4) Feature extraction is performed on the samples of the radar ranging training sample data set Str; the concrete processing procedure is as follows:
(4-1) Let any training sample in the training sample set Str be SI, 1 ≤ I ≤ N. SI is a one-dimensional feature vector composed of the radar data obtained in one full scan, i.e. SI = [SI.1, SI.2, …, SI.400], where SI.1, SI.2, …, SI.400 denote the range data of the 400 sampled points in a single scan. This group of radar data is converted into a polar coordinate image: a fixed point is taken as the centre, the distance from each sampled point to the centre is the range collected at that sampled point, and connecting the sampled points gives a visual representation of the shape and size of the scanned environment;
(4-2) The polar coordinate image obtained in (4-1) is drawn in a rectangular coordinate system, with the pole of the polar image as the centre of the rectangular image, and the region inside the contour is colour-filled;
(4-3) The image obtained in (4-2) is converted to grayscale, yielding a single-channel grayscale image of size 43 × 43;
(4-4) The grayscale images obtained in (4-3) are used as the new training samples, finally yielding the new training sample data set Str':
Str' = {Str1', Str2', …, Strk', …, StrN'}
where Strk' denotes the pixel matrix of the grayscale image of the k-th training sample in the training set Str';
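Steps (4-1) to (4-3) draw the scan as a filled polar contour and convert it to a 43 × 43 grayscale image. A minimal numpy sketch is given below; instead of drawing and then rasterizing, it fills the contour directly by comparing each pixel's polar radius against the range interpolated at the pixel's angle (valid because a single lidar scan is star-shaped around the sensor). The image scale and the binary fill values are assumptions, not taken from the patent:

```python
import numpy as np

def scan_to_grayscale(ranges, size=43, max_range=8.0):
    """Convert one 360-degree lidar scan into a filled grayscale polar image.

    ranges : array of l range readings, assumed equally spaced over 360 degrees
    Returns a (size, size) float image: 1.0 inside the scanned contour, 0.0 outside.
    """
    ranges = np.asarray(ranges, dtype=np.float64)
    l = len(ranges)
    angles = 2.0 * np.pi * np.arange(l) / l          # beam angle of each sample

    # pixel grid in metres, with the sensor at the image centre
    xs = np.linspace(-max_range, max_range, size)
    X, Y = np.meshgrid(xs, xs)
    rho = np.hypot(X, Y)                             # pixel distance from the sensor
    phi = np.mod(np.arctan2(Y, X), 2.0 * np.pi)      # pixel angle in [0, 2*pi)

    # interpolate the measured range at each pixel's angle (wrapping around 2*pi)
    r_at_phi = np.interp(phi, angles, ranges, period=2.0 * np.pi)

    # a pixel is inside the scanned contour if it is closer than the beam range
    return (rho <= r_at_phi).astype(np.float64)

img = scan_to_grayscale(np.full(400, 4.0))   # a circular room of radius 4 m
print(img.shape)                             # (43, 43)
print(img[21, 21])                           # centre pixel lies inside -> 1.0
```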
(5) Different labels are assigned to the samples in the new training sample data set Str' according to the type of room they come from, and a training label matrix T corresponding to the training sample data set is generated;
(6) Feature extraction is performed on the samples of the radar ranging test sample data set Ste; the concrete processing procedure is as follows:
(6-1) Referring to (4-1), the test samples in the test sample data set Ste are converted into polar coordinate images;
(6-2) The polar coordinate image obtained in (6-1) is drawn in a rectangular coordinate system, with the pole of the polar image as the centre of the rectangular image, and the region inside the contour is colour-filled;
(6-3) The image obtained in (6-2) is converted to grayscale, yielding a single-channel grayscale image;
(6-4) The grayscale images obtained in (6-3) are used as the new test samples, finally yielding the new test sample data set Ste':
Ste' = {Ste1', Ste2', …, Stek', …, SteM'}
where Stek' denotes the pixel matrix of the grayscale image of the k-th test sample in the new test sample data set Ste';
(7) Referring to (5), labels are assigned to the samples in the new test sample data set Ste', generating a test label matrix T' corresponding to the test data matrix;
(8) The training sample data matrix Str and the corresponding training label matrix T are used to train a model with an extreme learning machine based on local receptive fields, and the trained model is then applied to the test sample data matrix Ste to obtain the classification results.
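Step (8) names an extreme learning machine based on local receptive fields as the classifier. As a hedged illustration of the ELM family only, the sketch below implements a plain fully connected ELM (random fixed hidden weights, output weights solved by regularized least squares) on flattened 43 × 43 images; the local-receptive-field, convolution-style variant the patent names is not reproduced here, and all names, hyperparameters, and the toy data are assumptions:

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden layer + least-squares output."""

    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)          # random nonlinear feature map

    def fit(self, X, T):
        """X: (N, d) flattened images, T: (N, c) one-hot label matrix."""
        d = X.shape[1]
        self.W = self.rng.normal(size=(d, self.n_hidden))   # hidden weights stay random
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # output weights by ridge-regularized least squares:
        # beta = (H^T H + eps I)^-1 H^T T
        eps = 1e-3
        self.beta = np.linalg.solve(H.T @ H + eps * np.eye(self.n_hidden), H.T @ T)
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# toy run: two well-separated synthetic "scene" clusters in 43*43 = 1849 dimensions
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, size=(20, 1849)),
               rng.normal(1.0, 0.1, size=(20, 1849))])
T = np.zeros((40, 2))
T[:20, 0] = 1.0
T[20:, 1] = 1.0
model = ELM().fit(X, T)
acc = np.mean(model.predict(X) == np.r_[np.zeros(20), np.ones(20)])
print(acc)
```

The key ELM property shown here is that only the output layer is trained (one linear solve); the hidden layer is never updated, which is what makes the training in step (8) computationally light.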
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any equivalent substitution or change made by a person skilled in the art within the technical scope disclosed by the present invention, according to the technical solution and inventive concept of the present invention, shall be covered by the protection scope of the present invention.

Claims (3)

1. An indoor scene recognition method based on lidar ranging information, characterized in that the method specifically includes the following steps:
(1) A lidar with a 360° scanning range is installed on the mobile robot;
(2) In different types of indoor scene, an operator manually controls the mobile robot so that it travels without collisions while radar data are collected as training samples; if the number of training samples is N, the training sample data set Str is obtained, with the expression:
Str = {Str1, Str2, …, StrN}
where Str1, Str2, …, StrN denote the first, second, …, N-th training sample in the training sample data set Str;
(3) Following the method of step (2), radar data are collected as test samples; if the number of test samples is M, the test sample data set Ste is obtained, with the expression:
Ste = {Ste1, Ste2, …, SteM}
where Ste1, Ste2, …, SteM denote the first, second, …, M-th test sample in the test sample data set Ste; N and M are respectively the numbers of training and test samples, with M ≤ N;
(4) Feature extraction is performed on the samples of the radar ranging training sample data set Str, yielding a new training sample data set Str';
(5) Different labels are assigned to the samples in the new training sample data set Str' according to the type of room they come from, and a training label matrix T corresponding to the training data matrix is generated;
(6) Feature extraction is performed on the samples of the radar ranging test sample data set Ste, yielding a new test sample data set Ste';
(7) Referring to step (5), labels are assigned to the samples in the new test sample data set Ste', generating a test label matrix T' corresponding to the test data matrix;
(8) The training sample data matrix Str and the corresponding training label matrix T are used to train a model with an extreme learning machine based on local receptive fields, and the trained model is then applied to the test sample data matrix Ste to obtain the classification results.
2. The method according to claim 1, characterized in that the concrete processing procedure of step (4) is as follows:
(4-1) Let any training sample in the training sample data set Str be SI, 1 ≤ I ≤ N; SI is a one-dimensional feature vector composed of the radar data obtained in one full scan, i.e. SI = [SI.1, SI.2, …, SI.l], where SI.1, SI.2, …, SI.l denote the radar data of the l sampled points in a single scan; this group of radar data is converted into a polar coordinate image;
(4-2) The polar coordinate image obtained in step (4-1) is drawn in a rectangular coordinate system, with the pole of the polar image as the centre of the rectangular image, and the region inside the contour is colour-filled;
(4-3) The image obtained in step (4-2) is converted to grayscale, yielding a single-channel grayscale image;
(4-4) The grayscale images obtained in step (4-3) are used as the new input samples, finally yielding the new training sample data set Str':
Str' = {Str1', Str2', …, Strk', …, StrN'}
where Str1', Str2', …, Strk', …, StrN' denote the first, second, …, k-th, …, N-th training sample in the training set Str', and N is the number of training samples.
3. The method according to claim 1, characterized in that the concrete processing procedure of step (6) is as follows:
(6-1) Let any test sample in the test sample data set Ste be SJ, 1 ≤ J ≤ M; SJ is a one-dimensional feature vector composed of the radar data obtained in one full scan, i.e. SJ = [SJ.1, SJ.2, …, SJ.l], where SJ.1, SJ.2, …, SJ.l denote the radar data of the l sampled points in a single scan; this group of radar data is converted into a polar coordinate image;
(6-2) The polar coordinate image obtained in step (6-1) is drawn in a rectangular coordinate system, with the pole of the polar image as the centre of the rectangular image, and the region inside the contour is colour-filled;
(6-3) The image obtained in step (6-2) is converted to grayscale, yielding a single-channel grayscale image;
(6-4) The grayscale images obtained in step (6-3) are used as the new input samples, finally yielding the new test sample data set Ste':
Ste' = {Ste1', Ste2', …, Stek', …, SteM'}
where Ste1', Ste2', …, Stek', …, SteM' denote the first, second, …, k-th, …, M-th test sample in the new test sample data set Ste', and M is the number of test samples.
CN201810725992.2A 2018-07-04 2018-07-04 Indoor scene recognition method based on lidar ranging information Pending CN108921892A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810725992.2A CN108921892A (en) 2018-07-04 2018-07-04 Indoor scene recognition method based on lidar ranging information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810725992.2A CN108921892A (en) 2018-07-04 2018-07-04 Indoor scene recognition method based on lidar ranging information

Publications (1)

Publication Number Publication Date
CN108921892A true CN108921892A (en) 2018-11-30

Family

ID=64423826

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810725992.2A Pending CN108921892A (en) 2018-07-04 2018-07-04 Indoor scene recognition method based on lidar ranging information

Country Status (1)

Country Link
CN (1) CN108921892A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113233270A (en) * 2021-06-15 2021-08-10 上海有个机器人有限公司 Elevator internal and external judgment method based on robot running safety and related equipment
CN113324549A (en) * 2021-05-28 2021-08-31 广州科语机器人有限公司 Method, device, equipment and storage medium for positioning mobile robot charging seat

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104457751A (en) * 2014-11-19 2015-03-25 中国科学院计算技术研究所 Method and system for recognizing indoor and outdoor scenes
CN106874961A (en) * 2017-03-03 2017-06-20 北京奥开信息科技有限公司 A kind of indoor scene recognition methods using the very fast learning machine based on local receptor field


Similar Documents

Publication Publication Date Title
CN106951900B (en) A kind of automatic identifying method of arrester meter reading
CN109977813B (en) Inspection robot target positioning method based on deep learning framework
CN109685762A (en) A kind of Downtilt measurement method based on multiple dimensioned deep semantic segmentation network
CN110021033B (en) Target tracking method based on pyramid twin network
KR100776215B1 (en) Apparatus and method for estimating location and generating map of mobile body, using upper image, computer-readable recording media storing computer program controlling the apparatus
CN109635875A (en) A kind of end-to-end network interface detection method based on deep learning
CN108711172B (en) Unmanned aerial vehicle identification and positioning method based on fine-grained classification
CN109029429B (en) WiFi and geomagnetic fingerprint based multi-classifier global dynamic fusion positioning method
CN110648307A (en) Method for checking state of transformer substation pressure plate by using image comparison technology
CN109182081B (en) Single cell sorting system based on image processing model
CN109086763B (en) Pointer instrument reading method and device
CN114973002A (en) Improved YOLOv 5-based ear detection method
CN111598098A (en) Water gauge water line detection and effectiveness identification method based on full convolution neural network
CN112613397B (en) Method for constructing target recognition training sample set of multi-view optical satellite remote sensing image
CN109829476A (en) End-to-end three-dimension object detection method based on YOLO
CN108921892A (en) A kind of indoor scene recognition methods based on laser radar range information
CN106874961A (en) A kind of indoor scene recognition methods using the very fast learning machine based on local receptor field
CN109255279A (en) A kind of method and system of road traffic sign detection identification
CN112381190B (en) Cable force testing method based on mobile phone image recognition
CN109636856A (en) Object 6 DOF degree posture information union measuring method based on HOG Fusion Features operator
CN105043544B (en) A kind of image spectrum tracking and system
CN108376238B (en) Multi-target unmarked aquatic organism identification tracking method and system
CN111104523A (en) Audio-visual cooperative learning robot based on voice assistance and learning method
CN115767424A (en) Video positioning method based on RSS and CSI fusion
CN115830474A (en) Method and system for identifying wild Tibetan medicine lamiophlomis rotata and distribution thereof and calculating yield thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181130