CN106874961A - Indoor scene recognition method using a local receptive field based extreme learning machine - Google Patents

Indoor scene recognition method using a local receptive field based extreme learning machine

Info

Publication number
CN106874961A
CN106874961A (application CN201710123021.6A)
Authority
CN
China
Prior art keywords
sample
training
test
radar
training sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710123021.6A
Other languages
Chinese (zh)
Inventor
王裕基
刘华平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Olympic Mdt Infotech Ltd
Original Assignee
Beijing Olympic Mdt Infotech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Olympic Mdt Infotech Ltd filed Critical Beijing Olympic Mdt Infotech Ltd
Priority to CN201710123021.6A priority Critical patent/CN106874961A/en
Publication of CN106874961A publication Critical patent/CN106874961A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a method that uses a local receptive field based extreme learning machine (LRF-ELM) to handle the indoor scene classification problem, and belongs to indoor scene recognition methods for mobile robots. The method includes: 1) collecting 2-D radar information as training samples; 2) collecting 2-D radar information as test samples; 3) extracting binary image information from the radar information; 4) training the LRF-ELM network to obtain the optimal output weights; 5) verifying the scene recognition accuracy of the method on the test set. On the basis of neural networks, the present invention judges the correctness of indoor scene recognition based on 2-D radar information, shortens computation time, and substantially improves the efficiency of scene recognition.

Description

Indoor scene recognition method using a local receptive field based extreme learning machine
Technical field
The present invention relates to a method that uses a local receptive field based extreme learning machine (LRF-ELM) to handle the indoor scene classification problem, and belongs to the field of indoor scene recognition for mobile robots.
Background technology
Indoor scene recognition, a current research hotspot, has a far-reaching influence on daily life. Its application value lies mainly in video surveillance, smart homes, everyday robot operation, and rescue in hazardous environments.
With the development of modern sensing, control, and artificial intelligence technology, researchers have carried out extensive studies on scene recognition methods in two broad areas: computer vision and depth perception. The Zhejiang University invention patent "Indoor scene positioning method based on hybrid camera" uses the depth map and color image of the current frame captured by a hybrid camera, together with a trained regression forest, to compute the world coordinates of the current camera and complete indoor scene positioning; however, its performance is strongly affected by lighting.
The content of the invention
The purpose of the present invention is to overcome the shortcomings of the conventional art by proposing a fast and effective indoor scene localization method. It realizes indoor scene localization based on 2-D radar information on the basis of a local receptive field based extreme learning machine, improving the efficiency and accuracy of indoor scene recognition.
The indoor scene localization method using a local receptive field based extreme learning machine proposed by the present invention comprises the following steps:
(1) Collect radar information of the scenes as training samples. If the number of training samples is N, the training sample data set Str is expressed as:
Str={Str1,Str2,…,StrN}
where Str1, Str2, …, StrN denote the first, second, …, N-th training sample in the training sample data set Str. Roughly the same number of training samples is collected in each scene;
(2) Collect radar information for the test sample scenes to be identified. If the number of test samples is M, the test sample data set Ste is expressed as:
Ste={Ste1,Ste2,…,SteM}
where Ste1, Ste2, …, SteM denote the first, second, …, M-th test sample in the test sample data set Ste. Roughly the same number of test samples is collected in each scene. N and M are the numbers of training and test samples respectively; generally M ≤ N;
(3) Perform feature extraction on the samples of the radar ranging training set Str as follows:
(3-1) Denote any training sample in the training sample set Str as SI, 1 ≤ I ≤ N. SI is a one-dimensional feature vector composed of the radar data obtained from one full radar scan, i.e. SI=[SI.1, SI.2, …, SI.l], where SI.1, SI.2, …, SI.l denote the l radar readings sampled in a single scan. This group of radar data is converted into a polar coordinate image;
(3-2) Render the polar coordinate image in a rectangular coordinate system, taking the circle centre of the polar image as the centre of the rectangular image, and fill the interior of the contour with red;
(3-3) Binarize the rectangular red-filled image obtained in (3-2), whose background is white, turning it into a black-and-white image; the resulting image data is the new training sample SI', which completes the feature extraction for this training set sample;
The new training sample set Str' is thus obtained:
Str'={Str1',Str2',…,Strk',…,StrN'};
where Str1', Str2', …, Strk', …, StrN' denote the first, second, …, k-th, …, N-th training sample in the training set Str' obtained after binarization, and N is the number of training samples;
(3-4) Assign different labels to the samples in the training set Str' according to the type of room they come from, and form the label matrix T; a code sketch of steps (3-1) to (3-3) follows.
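For illustration, a minimal Python sketch of steps (3-1) to (3-3) is given below (the function name and image size are assumptions, not from the patent). Because the scanned contour is radial around the sensor, a pixel lies inside the contour exactly when its distance from the image centre does not exceed the range measured along its bearing, so the red-fill and binarization steps can be collapsed into a direct binary rasterization:

```python
import numpy as np

def scan_to_binary_image(scan, d=32):
    """Rasterize one radar scan (l range readings over a full 360-degree
    sweep) into a d x d binary image: 1 inside the scanned contour,
    0 outside (the white background of the patent's image)."""
    scan = np.asarray(scan, dtype=np.float64)
    l = scan.size
    r_max = scan.max()                        # scale so the scan fits the image
    ys, xs = np.mgrid[0:d, 0:d]
    c = (d - 1) / 2.0                         # circle centre = image centre
    dx, dy = xs - c, ys - c
    radius = np.hypot(dx, dy) / c * r_max     # pixel radius in range units
    angle = np.mod(np.arctan2(dy, dx), 2 * np.pi)
    beam = np.round(angle / (2 * np.pi) * l).astype(int) % l  # nearest beam
    return (radius <= scan[beam]).astype(np.float32)

# Example with a synthetic 360-point scan of a roughly rectangular room.
scan = 3.0 + 0.8 * np.abs(np.cos(2 * np.linspace(0, 2 * np.pi, 360, endpoint=False)))
img = scan_to_binary_image(scan, d=32)
print(img.shape, img.mean())                  # (32, 32), fraction of interior pixels
```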
(4) Perform feature extraction on the samples of the radar ranging test set Ste as follows:
(4-1) Denote any test sample in the test sample set Ste as SJ, 1 ≤ J ≤ M. SJ is a one-dimensional feature vector composed of the radar data obtained from one full radar scan, i.e. SJ=[SJ.1, SJ.2, …, SJ.l], where SJ.1, SJ.2, …, SJ.l denote the l radar readings sampled in a single scan. This group of radar data is converted into a polar coordinate image;
(4-2) Render the polar coordinate image in a rectangular coordinate system, taking the circle centre of the polar image as the centre of the rectangular image, and fill the interior of the contour with red;
(4-3) Binarize the rectangular red-filled image obtained in (4-2), whose background is white, turning it into a black-and-white image; the resulting image data is the new test sample SJ', which completes the feature extraction for this test set sample;
The new test sample set Ste' is thus obtained:
Ste'={Ste1',Ste2',…,Stek',…,SteM'};
where Ste1', Ste2', …, Stek', …, SteM' denote the first, second, …, k-th, …, M-th test sample in the test set Ste' obtained after binarization, and M is the number of test samples;
(4-4) In the same way as (3-4), assign different labels to the samples in the test set Ste' according to the type of room they come from, and form the label matrix T';
(5) Use the training set Str' with its label matrix T and the test set Ste' with its label matrix T' as the input of the local receptive field based extreme learning machine, and set the relevant parameters such as the convolution layer and pooling layer;
(5-1) Randomly generate the input weights Wi=[wi1,…,win]T and the hidden-unit biases bi=[bi1,…,bin]T, and orthogonalize the initial weights to obtain the new input weights Â. Let the training input size be (d × d) and the receptive field size (r × r); the value ci,j,k of convolution node (i, j) in the k-th feature map is computed as:
c_{i,j,k} = \sum_{m=1}^{r}\sum_{n=1}^{r} x_{i+m-1,\,j+n-1}\cdot a_{m,n,k}, \quad i,j = 1,\ldots,(d-r+1)
(5-2) Apply square-root pooling to the feature maps. The pooling size e denotes the distance from the pooling centre to the window edge; hp,q,k denotes combination node (p, q) in the k-th pooling map and is computed as:
h_{p,q,k} = \sqrt{\sum_{i=p-e}^{p+e}\sum_{j=q-e}^{q+e} c_{i,j,k}^{2}}, \quad p,q = 1,\ldots,(d-r+1)
(5-3) Simply concatenate the values of all combination nodes into one row vector, stack the row vectors of the N training input samples, and obtain the combination layer matrix H. The output weight β is computed as follows:
When N > K(d-r+1)^2: \beta = \left(\frac{I}{C} + H^{T}H\right)^{-1}H^{T}T
When N ≤ K(d-r+1)^2: \beta = H^{T}\left(\frac{I}{C} + HH^{T}\right)^{-1}T
(5-4) Keeping the input weights Â unchanged, apply to the samples of the test set Ste' the same convolution and pooling as in (5-1) and (5-2) to obtain the combination layer H'. Denoting the label prediction for the test samples by Y, it is computed as:
Y = H'β
The prediction Y is compared with the given test labels T' to obtain the scene recognition accuracy.
The indoor scene recognition method proposed by the present invention, which is based on 2-D radar information and uses a local receptive field based extreme learning machine, greatly reduces computation time and improves the efficiency of indoor scene recognition. The method is also simple, reliable, and highly practical.
Brief description of the drawings
Fig. 1 is the flow chart of the algorithm used by the present invention.
Fig. 2 is a schematic diagram of the local receptive field based extreme learning machine (LRF-ELM) used by the algorithm of the present invention.
Fig. 3 shows the binary image information extraction process used during feature extraction of the training and test sets.
Specific embodiment
A specific embodiment of the indoor scene recognition method proposed by the present invention, based on 2-D radar information and using a local receptive field based extreme learning machine, is further described as follows.
(1) Install a radar sensor on the mobile robot and collect the radar information of the scenes as training samples. If the number of training samples is N, the training sample data set Str is expressed as:
Str={Str1,Str2,…,StrN}
where Str1, Str2, …, StrN denote the first, second, …, N-th training sample in the training sample data set Str. Roughly the same number of training samples is collected in each scene.
(2) Collect radar information for the test sample scenes to be identified. If the number of test samples is M, the test sample data set Ste is expressed as:
Ste={Ste1,Ste2,…,SteM}
where Ste1, Ste2, …, SteM denote the first, second, …, M-th test sample in the test sample data set Ste. Roughly the same number of test samples is collected in each scene. N and M are the numbers of training and test samples respectively; generally M ≤ N.
(3) Perform feature extraction on the radar ranging sample information as follows:
(3-1) Denote any training sample in the training sample set Str as SI, 1 ≤ I ≤ N. SI is a one-dimensional feature vector composed of the radar data obtained from one full radar scan, i.e. SI=[SI.1, SI.2, …, SI.l], where SI.1, SI.2, …, SI.l denote the l radar readings sampled in a single scan. This group of radar data is converted into a polar coordinate image;
(3-2) Render the polar coordinate image in a rectangular coordinate system, taking the circle centre of the polar image as the centre of the rectangular image, and fill the interior of the contour with red;
(3-3) Binarize the rectangular red-filled image obtained in (3-2), whose background is white, turning it into a black-and-white image; the resulting image data is the new training sample SI', which completes the feature extraction for this training set sample.
The new training sample set Str' is thus obtained:
Str'={Str1',Str2',…,Strk',…,StrN'}
where Str1', Str2', …, Strk', …, StrN' denote the first, second, …, k-th, …, N-th training sample in the training set Str' obtained after binarization, and N is the number of training samples;
(3-4) Assign different labels to the samples in the training set Str' according to room type, e.g. corridor = 1, bathroom = 2, bedroom = 3, and form the label matrix T, as sketched below.
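As a concrete illustration of step (3-4), a minimal sketch follows, assuming the usual ELM convention that T is a one-hot matrix whose row I is the target vector of training sample I (the patent does not fix the encoding, and the helper name is hypothetical):

```python
import numpy as np

ROOM_CODES = {"corridor": 1, "bathroom": 2, "bedroom": 3}   # coding from (3-4)

def make_label_matrix(room_codes, num_classes=len(ROOM_CODES)):
    """Build the one-hot label matrix T (N x num_classes) from 1-based
    room-type codes, one row per training sample."""
    codes = np.asarray(room_codes)
    T = np.zeros((codes.size, num_classes), dtype=np.float32)
    T[np.arange(codes.size), codes - 1] = 1.0
    return T

T = make_label_matrix([1, 2, 3, 1])   # corridor, bathroom, bedroom, corridor
```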
(4-1) Denote any test sample in the test sample set Ste as SJ, 1 ≤ J ≤ M. SJ is a one-dimensional feature vector composed of the radar data obtained from one full radar scan, i.e. SJ=[SJ.1, SJ.2, …, SJ.l], where SJ.1, SJ.2, …, SJ.l denote the l radar readings sampled in a single scan. This group of radar data is converted into a polar coordinate image;
(4-2) Render the polar coordinate image in a rectangular coordinate system, taking the circle centre of the polar image as the centre of the rectangular image, and fill the interior of the contour with red;
(4-3) Binarize the rectangular red-filled image obtained in (4-2), whose background is white, turning it into a black-and-white image; the resulting image data is the new test sample SJ', which completes the feature extraction for this test set sample.
The new test sample set Ste' is thus obtained:
Ste'={Ste1',Ste2',…,Stek',…,SteM'}
where Ste1', Ste2', …, Stek', …, SteM' denote the first, second, …, k-th, …, M-th test sample in the test set Ste' obtained after binarization, and M is the number of test samples;
(4-4) In the same way as (3-4), assign different labels to the samples in the test set Ste' according to the type of room they come from, and form the label matrix T'.
(5) Use the training set Str' with its label matrix T and the test set Ste' with its label matrix T' as the input of the local receptive field based extreme learning machine, and set the relevant parameters such as the convolution layer and pooling layer; one illustrative parameter choice is sketched below.
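For concreteness, one plausible parameter setting is shown below; these values are illustrative assumptions, since the patent does not fix them:

```python
# Illustrative LRF-ELM hyperparameters (assumed, not specified by the patent).
params = dict(
    d=32,    # side length of the square binary input image
    r=4,     # receptive field size (r x r)
    K=8,     # number of feature maps
    e=3,     # pooling size: distance from pooling centre to window edge
    C=1.0,   # regularization constant in the output-weight solution
)
```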
(5-1) Randomly generate the input weights Wi=[wi1,…,win]T and the hidden-unit biases bi=[bi1,…,bin]T, and orthogonalize the initial weights to obtain the new input weights Â. Let the training input size be (d × d) and the receptive field size (r × r); the value ci,j,k of convolution node (i, j) in the k-th feature map is computed as:
c_{i,j,k} = \sum_{m=1}^{r}\sum_{n=1}^{r} x_{i+m-1,\,j+n-1}\cdot a_{m,n,k}, \quad i,j = 1,\ldots,(d-r+1)
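A minimal sketch of step (5-1) follows, assuming K feature maps and the SVD-based orthogonalization commonly used with LRF-ELM (the patent does not spell out the orthogonalization procedure); the nested loops implement the convolution formula above directly:

```python
import numpy as np

def random_orthogonal_filters(r, K, seed=0):
    """Draw K random r x r receptive-field filters and orthogonalize them:
    the flattened filters are replaced by the nearest set of orthonormal
    columns (requires K <= r * r)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((r * r, K))
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    return (U @ Vt).reshape(r, r, K)

def convolve(x, filters):
    """c[i, j, k] = sum_{m,n} x[i+m-1, j+n-1] * a[m, n, k] over the valid
    range, i.e. one (d-r+1) x (d-r+1) feature map per filter."""
    d = x.shape[0]
    r, _, K = filters.shape
    n = d - r + 1
    c = np.empty((n, n, K))
    for k in range(K):
        for i in range(n):
            for j in range(n):
                c[i, j, k] = np.sum(x[i:i + r, j:j + r] * filters[:, :, k])
    return c
```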
(5-2) Apply square-root pooling to the feature maps. The pooling size e denotes the distance from the pooling centre to the window edge; hp,q,k denotes combination node (p, q) in the k-th pooling map and is computed as:
h_{p,q,k} = \sqrt{\sum_{i=p-e}^{p+e}\sum_{j=q-e}^{q+e} c_{i,j,k}^{2}}, \quad p,q = 1,\ldots,(d-r+1)
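A sketch of the square-root pooling in (5-2), assuming (as in the LRF-ELM literature) that nodes outside the feature map count as zero, so the pooling map keeps the (d-r+1) x (d-r+1) size:

```python
import numpy as np

def sqrt_pool(c, e):
    """h[p, q, k] = sqrt(sum of c[i, j, k]^2 over the (2e+1) x (2e+1)
    window centred at (p, q)); zero-padding preserves the map size."""
    n, _, K = c.shape
    sq = np.zeros((n + 2 * e, n + 2 * e, K))
    sq[e:e + n, e:e + n, :] = c ** 2
    h = np.empty_like(c)
    for p in range(n):
        for q in range(n):
            window = sq[p:p + 2 * e + 1, q:q + 2 * e + 1, :]
            h[p, q, :] = np.sqrt(window.sum(axis=(0, 1)))
    return h
```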
(5-3) Simply concatenate the values of all combination nodes into one row vector, stack the row vectors of the N training input samples, and obtain the combination layer matrix H. The output weight β is computed as follows:
When N > K(d-r+1)^2: \beta = \left(\frac{I}{C} + H^{T}H\right)^{-1}H^{T}T
When N ≤ K(d-r+1)^2: \beta = H^{T}\left(\frac{I}{C} + HH^{T}\right)^{-1}T
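The two closed forms in (5-3) are the standard regularized least-squares solutions; below is a sketch that picks whichever form inverts the smaller matrix (C is the regularization constant):

```python
import numpy as np

def output_weights(H, T, C=1.0):
    """Solve for beta: (I/C + H^T H)^{-1} H^T T when N > D, otherwise
    H^T (I/C + H H^T)^{-1} T, where D = K * (d - r + 1)**2 is the
    number of combination-layer features."""
    N, D = H.shape
    if N > D:
        return np.linalg.solve(np.eye(D) / C + H.T @ H, H.T @ T)
    return H.T @ np.linalg.solve(np.eye(N) / C + H @ H.T, T)
```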
(5-4) Keeping the input weights Â unchanged, apply to the samples of the test set Ste' the same convolution and pooling as in (5-1) and (5-2) to obtain the combination layer H'. Denoting the label prediction for the test samples by Y, it is computed as:
Y = H'β
Finally, the prediction Y is compared with the given test labels T' to obtain the accuracy of this scene recognition, as sketched below.
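A sketch of step (5-4) and the accuracy check, assuming one-hot targets so that the predicted room type is the column of Y with the largest response:

```python
import numpy as np

def predict_accuracy(H_test, beta, true_codes):
    """Y = H' @ beta; compare argmax room codes against the given labels."""
    Y = H_test @ beta
    pred = Y.argmax(axis=1) + 1          # back to 1-based room codes
    return float((pred == np.asarray(true_codes)).mean())
```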

Claims (1)

1. An indoor scene recognition method using a local receptive field based extreme learning machine (LRF-ELM), characterized in that the method comprises the following steps:
(1) Collect radar information of the scenes as training samples; if the number of training samples is N, the training sample data set Str is expressed as:
Str={Str1,Str2,…,StrN}
where Str1, Str2, …, StrN denote the first, second, …, N-th training sample in the training sample data set Str; roughly the same number of training samples is collected in each scene;
(2) Collect radar information for the test sample scenes to be identified; if the number of test samples is M, the test sample data set Ste is expressed as:
Ste={Ste1,Ste2,…,SteM}
where Ste1, Ste2, …, SteM denote the first, second, …, M-th test sample in the test sample data set Ste; roughly the same number of test samples is collected in each scene; N and M are the numbers of training and test samples respectively, and generally M ≤ N;
(3) Perform feature extraction on the samples of the radar ranging training set Str as follows:
(3-1) Denote any training sample in the training sample set Str as SI, 1 ≤ I ≤ N; SI is a one-dimensional feature vector composed of the radar data obtained from one full radar scan, i.e. SI=[SI.1, SI.2, …, SI.l], where SI.1, SI.2, …, SI.l denote the l radar readings sampled in a single scan; this group of radar data is converted into a polar coordinate image;
(3-2) Render the polar coordinate image in a rectangular coordinate system, taking the circle centre of the polar image as the centre of the rectangular image, and fill the interior of the contour with red;
(3-3) Binarize the rectangular red-filled image obtained in (3-2), whose background is white, turning it into a black-and-white image; the resulting image data is the new training sample SI', which completes the feature extraction for this training set sample;
The new training sample set Str' is thus obtained:
Str'={Str1',Str2',…,Strk',…,StrN'};
where Str1', Str2', …, Strk', …, StrN' denote the first, second, …, k-th, …, N-th training sample in the training set Str' obtained after binarization, and N is the number of training samples;
(3-4) Assign different labels to the samples in the training set Str' according to the type of room they come from, and form the label matrix T;
(4) Perform feature extraction on the samples of the radar ranging test set Ste as follows:
(4-1) Denote any test sample in the test sample set Ste as SJ, 1 ≤ J ≤ M; SJ is a one-dimensional feature vector composed of the radar data obtained from one full radar scan, i.e. SJ=[SJ.1, SJ.2, …, SJ.l], where SJ.1, SJ.2, …, SJ.l denote the l radar readings sampled in a single scan; this group of radar data is converted into a polar coordinate image;
(4-2) Render the polar coordinate image in a rectangular coordinate system, taking the circle centre of the polar image as the centre of the rectangular image, and fill the interior of the contour with red;
(4-3) Binarize the rectangular red-filled image obtained in (4-2), whose background is white, turning it into a black-and-white image; the resulting image data is the new test sample SJ', which completes the feature extraction for this test set sample;
The new test sample set Ste' is thus obtained:
Ste'={Ste1',Ste2',…,Stek',…,SteM'};
where Ste1', Ste2', …, Stek', …, SteM' denote the first, second, …, k-th, …, M-th test sample in the test set Ste' obtained after binarization, and M is the number of test samples;
(4-4) In the same way as (3-4), assign different labels to the samples in the test set Ste' according to the type of room they come from, and form the label matrix T';
(5) Use the training set Str' with its label matrix T and the test set Ste' with its label matrix T' as the input of the local receptive field based extreme learning machine, and set the relevant parameters such as the convolution layer and pooling layer;
(5-1) Randomly generate the input weights Wi=[wi1,…,win]T and the hidden-unit biases bi=[bi1,…,bin]T, and orthogonalize the initial weights to obtain the new input weights Â; let the training input size be (d × d) and the receptive field size (r × r); the value ci,j,k of convolution node (i, j) in the k-th feature map is computed as:
c_{i,j,k} = \sum_{m=1}^{r}\sum_{n=1}^{r}\left(x_{i+m-1,\,j+n-1}\cdot a_{m,n,k}\right), \quad i,j = 1,\ldots,(d-r+1)
(5-2) Apply square-root pooling to the feature maps; the pooling size e denotes the distance from the pooling centre to the window edge; hp,q,k denotes combination node (p, q) in the k-th pooling map and is computed as:
h_{p,q,k} = \sqrt{\sum_{i=p-e}^{p+e}\sum_{j=q-e}^{q+e} c_{i,j,k}^{2}}, \quad p,q = 1,\ldots,(d-r+1);
(5-3) Simply concatenate the values of all combination nodes into one row vector, stack the row vectors of the N training input samples, and obtain the combination layer matrix H; the output weight β is computed as follows:
When N > K(d-r+1)^2:
\beta = \left(\frac{I}{C} + H^{T}H\right)^{-1}H^{T}T
When N ≤ K(d-r+1)^2:
\beta = H^{T}\left(\frac{I}{C} + HH^{T}\right)^{-1}T
(5-4) Keeping the input weights Â unchanged, apply to the samples of the test set Ste' the same convolution and pooling as in (5-1) and (5-2) to obtain the combination layer H'; denoting the label prediction for the test samples by Y, it is computed as:
Y = H'β
Compare the prediction Y with the given test labels T' to obtain the scene recognition accuracy.
CN201710123021.6A 2017-03-03 2017-03-03 Indoor scene recognition method using a local receptive field based extreme learning machine Pending CN106874961A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710123021.6A CN106874961A (en) 2017-03-03 2017-03-03 Indoor scene recognition method using a local receptive field based extreme learning machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710123021.6A CN106874961A (en) 2017-03-03 2017-03-03 Indoor scene recognition method using a local receptive field based extreme learning machine

Publications (1)

Publication Number Publication Date
CN106874961A true CN106874961A (en) 2017-06-20

Family

ID=59169922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710123021.6A Pending CN106874961A (en) 2017-03-03 2017-03-03 Indoor scene recognition method using a local receptive field based extreme learning machine

Country Status (1)

Country Link
CN (1) CN106874961A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104517289A (en) * 2014-12-12 2015-04-15 浙江大学 Indoor scene positioning method based on hybrid camera
CN104598920A (en) * 2014-12-30 2015-05-06 中国人民解放军国防科学技术大学 Scene classification method based on Gist characteristics and extreme learning machine
CN104700078A (en) * 2015-02-13 2015-06-10 武汉工程大学 Scale-invariant feature extreme learning machine-based robot scene recognition method
CN105891780A (en) * 2016-04-01 2016-08-24 清华大学 Indoor scene positioning method and indoor scene positioning device based on ultrasonic array information
US9530042B1 (en) * 2016-06-13 2016-12-27 King Saud University Method for fingerprint classification

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HUANG G B et al., "Local Receptive Fields Based Extreme Learning Machine", IEEE Computational Intelligence Magazine *
HUAPING LIU et al., "Robotic Room-level Localization Using Multiple Sets of Sonar Measurements", IEEE Transactions on Instrumentation and Measurement *
LI Guizhi et al., "Research on Mobile Robot Localization Method Based on Scene Recognition", Robot *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107463952A (en) * 2017-07-21 2017-12-12 清华大学 Object material classification method based on multi-modal fusion deep learning
CN107463952B (en) * 2017-07-21 2020-04-03 清华大学 Object material classification method based on multi-modal fusion deep learning
CN107945227A (en) * 2017-11-21 2018-04-20 北京工业大学 Generation method for radar-absorbing coating regions in infrared panoramic monitoring
CN108509768A (en) * 2018-03-31 2018-09-07 中南大学 Key protein identification method and system based on protein spatio-temporal sub-networks
CN108921892A (en) * 2018-07-04 2018-11-30 合肥中科自动控制系统有限公司 Indoor scene recognition method based on lidar ranging information
CN109190638A (en) * 2018-08-09 2019-01-11 太原理工大学 Classification method based on a multi-scale local receptive field online sequential extreme learning machine
CN111007496A (en) * 2019-11-28 2020-04-14 成都微址通信技术有限公司 Through-wall perspective method based on neural-network-associated radar

Similar Documents

Publication Publication Date Title
CN106874961A (en) Indoor scene recognition method using a local receptive field based extreme learning machine
CN105787439B (en) Depth image human joint localization method based on convolutional neural networks
CN109949316A (en) Weakly supervised instance segmentation method for power grid equipment images based on RGB-T fusion
CN107688856B (en) Indoor robot scene active recognition method based on deep reinforcement learning
CN108803617A (en) Trajectory prediction method and device
CN109635875A (en) End-to-end network interface detection method based on deep learning
CN109271888A (en) Gait-based identity recognition method and device, and electronic equipment
CN108549876A (en) Sitting posture detection method based on object detection and human pose estimation
CN108648274A (en) Cognitive point cloud map creation system for visual SLAM
CN111582234B (en) Intelligent detection and counting method for oil tea fruit in large-scale forests based on UAV and deep learning
CN106709462A (en) Indoor positioning method and device
CN105197252A (en) Small unmanned aerial vehicle landing method and system
CN107967474A (en) Sea-surface target saliency detection method based on convolutional neural networks
CN106991147A (en) Plant identification and recognition method
CN114612769B (en) Integrated-sensing infrared imaging ship detection method fused with local structure information
CN107194338A (en) Pedestrian detection method for traffic environments based on a human body tree graph model
CN111881802B (en) Traffic police gesture recognition method based on a dual-branch spatio-temporal graph convolutional network
CN109886155A (en) Single rice plant detection and localization method, system, equipment and medium based on deep learning
CN110135277A (en) Human behavior recognition method based on convolutional neural networks
CN109117717A (en) Urban pedestrian detection method
CN110221290A (en) UAV target search construction method based on ant colony algorithm optimization
CN101286236B (en) Infrared target tracking method based on multi-feature images and mean shift
CN109766790A (en) Pedestrian detection method based on adaptive feature channels
CN114581307A (en) Multi-image stitching method, system, device and medium for target tracking and recognition
CN113627326B (en) Behavior recognition method based on wearable devices and human skeleton

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170620