CN110427953A - Implementation method for enabling a robot to perform visual place recognition in a changing environment based on convolutional neural networks and sequence matching - Google Patents

Implementation method for enabling a robot to perform visual place recognition in a changing environment based on convolutional neural networks and sequence matching

Info

Publication number
CN110427953A
CN110427953A (application CN201910544572.9A)
Authority
CN
China
Prior art keywords
picture
sequence
pictures
feature
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910544572.9A
Other languages
Chinese (zh)
Other versions
CN110427953B (en)
Inventor
王勇
薛韬略
刘金鑫
常祥锋
李雯雯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
China Mobile Communications Co Ltd
Original Assignee
Central South University
China Mobile Communications Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University, China Mobile Communications Co Ltd filed Critical Central South University
Priority to CN201910544572.9A priority Critical patent/CN110427953B/en
Publication of CN110427953A publication Critical patent/CN110427953A/en
Application granted granted Critical
Publication of CN110427953B publication Critical patent/CN110427953B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Library & Information Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an implementation method, based on convolutional neural networks and sequence matching, that enables a robot to perform visual place recognition in a changing environment. The features of the non-overlapping region between two pictures are estimated on their convolutional feature maps, and these features are removed before the similarity distance between the pictures is computed. Compared with existing methods based on convolutional neural networks, the invention achieves real-time performance and stronger robustness to changes in the robot's viewing angle. Compared with existing methods based on sequence-matching techniques, the matching-sequence detection operator proposed in the invention makes fuller use of the information in a picture sequence, so the robot can achieve higher place-recognition accuracy and recall even under extreme environmental change.

Description

Implementation method for enabling a robot to perform visual place recognition in a changing environment based on convolutional neural networks and sequence matching
Technical field
The invention belongs to the fields of robotics and computer vision, and in particular relates to an implementation method, based on convolutional neural networks and sequence matching, that enables a robot to perform visual place recognition in a changing environment.
Background art
In robotic visual place recognition, the robot captures a picture of its current location, compares it with the pictures in its history picture library, and decides whether the current location is the same place as one in the library. Enabling a robot to recognize places it has visited before has long been a difficult problem in robot vision. Performing visual place recognition in a changing environment involves two main challenges. First, environmental factors such as illumination, weather, and season alter the appearance of the environment, so pictures of the same place captured by the robot at different times can differ greatly in appearance. Second, the robot's pose when capturing pictures of the same place varies, so the captured pictures differ in viewing angle. Both effects lower the similarity between pictures of the same place and hinder place recognition. Currently, bag-of-words methods based on key-point features such as SIFT and SURF are the typical approach to robotic visual place recognition. Their drawback, however, is obvious: their validity rests on the assumption that environmental factors such as illumination and weather remain constant, so their scope of application is narrow and their robustness to environmental change is weak. When a robot operates in a real environment over a long period and conditions such as illumination and weather change, these methods are often helpless.
Recently, deep learning has achieved outstanding results in a variety of computer vision tasks, and many deep-learning-based methods have been proposed for visual place recognition. Methods based on convolutional neural networks fall roughly into two classes. One class divides a picture into many local patches and uses a neural network to extract a feature from each patch. The other class uses a neural network to extract a feature from the whole picture and then computes similarities between these features to match places. Deep convolutional features are robust to environmental change, but problems remain in these methods. Methods that extract deep neural network features from local patches must run the network many times for feature extraction and feature fusion, so their time complexity is high and real-time operation is difficult. Methods that feed the whole picture into the network to extract a global feature produce very different features when the robot's viewing angle changes and the non-overlapping region between pictures of the same place is large, so their robustness to viewpoint change cannot be guaranteed.
Summary of the invention
The purpose of the invention is to solve the problems of poor real-time performance and lack of robustness to robot viewpoint change in existing methods based on convolutional neural networks. A new visual place recognition method based on convolutional neural networks and sequence matching is proposed, giving the robot stronger resistance to changes in environmental factors and to changes in its own viewpoint during place recognition, together with real-time performance and generalization to different scenes.
An implementation method, based on convolutional neural networks and sequence matching, that enables a robot to perform visual place recognition in a changing environment, comprising the following steps:
Step 1: shoot several pictures of the position where place recognition is required to form a query picture sequence, and feed each picture of the sequence into a convolutional neural network to extract its convolutional feature. The original pictures previously shot at the corresponding locations form the comparison picture sequence, and each of its pictures is fed into the same convolutional neural network to extract its convolutional feature. Then crop the central region of the query picture's convolutional feature and compare this region with the convolutional feature of every picture in the comparison sequence to find the corresponding region with maximum similarity. Aligning the central region with the corresponding region yields the position of the non-overlapping region between the two pictures. Remove the features of the non-overlapping region and compute the similarity distance between the pictures using the features of the remaining overlapping region.
Step 2: after computing the similarity distance between every query picture and every comparison picture, build a matrix whose dimensions correspond to the numbers of pictures in the two sequences, with each pairwise similarity distance as the element at the corresponding position, to construct the similarity distance matrix of the picture sequences. Then normalize the similarity distance matrix.
Step 3: design a matching-sequence detection operator and slide it over the similarity distance matrix as a convolution to compute a matching score between each pair of picture sequences. When the matching score satisfies a threshold requirement, the corresponding picture sequences are judged to belong to the same place; otherwise they are judged to belong to different places.
In the method, the convolutional-feature extraction described in Step 1 feeds the picture into a pre-trained convolutional neural network. When the fifth layer of the network used is a pooling layer, that layer's output is extracted as the picture feature; otherwise the output of the last convolutional layer is extracted as the picture feature.
In the method, the pre-trained convolutional neural network is trained on place-scene pictures, where a place-scene picture takes a place scene as its subject, and the training set includes pictures of the same place taken at different times and under different environmental conditions.
In the method, the steps of estimating the position of the non-overlapping region between a query picture and a picture in the library in Step 1 are:
a) Denote the query picture as picture A and the picture in the library as picture B, and crop the central region Fc of picture A's convolutional feature;
b) Slide Fc over the convolutional feature of picture B, compute the similarity between Fc and each local region of picture B's convolutional feature, and find the local region Rm with maximum similarity to Fc;
c) Align the central region Fc of picture A's convolutional feature with the local region Rm of picture B's convolutional feature to determine the non-overlapping region between the two features;
d) Discard the features of the non-overlapping region and compute the similarity distance between the two pictures using the features of the overlapping region.
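The sliding search of steps a) and b) can be sketched in NumPy. This is an illustrative sketch, not the patent's implementation: the function name `find_best_region`, the (height, width, channels) feature layout, and the brute-force double loop are all assumptions.

```python
import numpy as np

def find_best_region(fc, feat_b):
    """Slide the central region fc (h, w, c) over feat_b (H, W, c) and
    return the top-left offset of the local region with maximum cosine
    similarity, together with that similarity value."""
    h, w, _ = fc.shape
    H, W, _ = feat_b.shape
    fc_vec = fc.ravel()
    fc_norm = np.linalg.norm(fc_vec)
    best_sim, best_off = -1.0, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            r = feat_b[y:y + h, x:x + w, :].ravel()
            sim = fc_vec @ r / (fc_norm * np.linalg.norm(r) + 1e-12)
            if sim > best_sim:
                best_sim, best_off = sim, (y, x)
    return best_off, best_sim
```

On the (6, 6, 256) pool5 features described later, the search space is tiny (a few dozen positions), which is why this exhaustive scan remains real-time.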
In the method, the mathematical expression of the similarity between the central region of the query picture's convolutional feature and a local region of the convolutional feature of a picture in the library in step b) is:

Sc = cos(Fc, Rl)

where Fc is the central region of the query picture's convolutional feature, Rl is the l-th local region of the convolutional feature of the picture in the library, and Sc is the similarity between the two.
In the method, the mathematical expression of the similarity distance between the query picture and a picture in the library in step d) is:

D = 1 - cos(Fa, Fb)

where Fa is the query picture's convolutional feature after removal of the non-overlapping-region features, Fb is the library picture's convolutional feature after removal of the non-overlapping-region features, and D is the similarity distance between the two.
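Once the central region has been aligned, step d) reduces to a cosine distance over the overlapping portions of the two feature maps. A minimal NumPy sketch, assuming the alignment is expressed as a row/column offset (dy, dx) of picture B relative to picture A; the function name and the offset convention are assumptions:

```python
import numpy as np

def overlap_distance(feat_a, feat_b, dy, dx):
    """Similarity distance D = 1 - cos(Fa, Fb), keeping only the region
    where the two same-shape feature maps overlap after shifting B by
    (dy, dx); the non-overlapping margins are discarded."""
    H, W, _ = feat_a.shape
    ay0, ay1 = max(0, dy), min(H, H + dy)
    ax0, ax1 = max(0, dx), min(W, W + dx)
    fa = feat_a[ay0:ay1, ax0:ax1, :].ravel()
    fb = feat_b[ay0 - dy:ay1 - dy, ax0 - dx:ax1 - dx, :].ravel()
    cos = fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb) + 1e-12)
    return 1.0 - cos
```

For a zero offset the whole maps are compared, so the distance degenerates to the plain global cosine distance.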
In the method, the similarity distance matrix of the picture sequences in Step 2 is built according to the following rule:
Let the history picture library be the first picture sequence, containing n pictures, and let the query picture sequence be the second picture sequence, containing 10 pictures; the similarity distance matrix then has dimensions (n, 10). The similarity distance between the i-th picture of the first sequence and the j-th picture of the second sequence is the element in row i, column j of the matrix, so each element of the similarity distance matrix represents the similarity distance between one query picture and one picture in the library.
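The matrix construction is a straightforward double loop over the two sequences. In the hedged sketch below, `dist_fn` stands in for the overlap-aware distance of Step 1 and is passed in as a parameter; the function name is an assumption:

```python
import numpy as np

def distance_matrix(query_feats, library_feats, dist_fn):
    """Build the (n, m) similarity distance matrix: n library pictures
    (rows) by m query pictures (columns), one distance per pair."""
    n, m = len(library_feats), len(query_feats)
    M = np.empty((n, m))
    for i, lib in enumerate(library_feats):
        for j, q in enumerate(query_feats):
            M[i, j] = dist_fn(lib, q)
    return M
```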
In the method, the specific steps of the non-minimum-suppression normalization in Step 2 are:
a. Sort the elements of each column of the similarity distance matrix M in ascending order, retain the values of the smallest σ × n elements, and assign the value 1 to the remaining elements; σ is the first hyperparameter, with values between 0 and 1, and n is the size of the history picture dataset;
b. Normalize the smallest σ × n elements of each column of M. With n pictures in the history picture library and a query picture sequence of length m, M has dimensions (n, m), and the mathematical expression of the non-minimum-suppression normalization is:

s_j = min(M_j)
M'(i, j) = (M(i, j) - s_j) / (l_j - s_j)

where M(i, j) is the element in row i, column j of the similarity matrix M, M'(i, j) is the element in row i, column j of the normalized matrix M' (1 ≤ i ≤ n, 1 ≤ j ≤ m), M_j denotes the j-th column of M, s_j is the minimum of column j of the similarity distance matrix M, and l_j is the maximum among the smallest σ × n elements of M_j.
In the method, the mathematical expression of the matching-sequence detection operator C in Step 3 is as follows, where the number of pictures in the query picture sequence is 10:

C = θB + (1 - θ)H

where H is the summation operator, B is the difference operator, and θ is the second hyperparameter, with a value between 0 and 1.
The technical effects of the invention are as follows: by estimating the features of the non-overlapping region between pictures on their convolutional feature maps and removing those features before computing the similarity distance between the pictures, the method achieves real-time performance and stronger robustness to changes in the robot's viewing angle than existing methods based on convolutional neural networks. Compared with existing methods based on sequence-matching techniques, the matching-sequence detection operator proposed in the invention makes fuller use of the information in a picture sequence, enabling the robot to achieve higher place-recognition accuracy and recall even under extreme environmental change.
Brief description of the drawings
Fig. 1 is a flow chart of the visual place recognition implementation method based on convolutional neural networks and sequence matching.
Fig. 2 illustrates the method of estimating the non-overlapping feature region between a query picture and a picture in the library.
Fig. 3 shows the similarity distance matrix between picture sequences.
Fig. 4 is a schematic diagram of computing the similarity distance between picture sequences with the matching-sequence detection operator.
Specific embodiments
Embodiments of the present invention are further described below with reference to the accompanying drawings.
The invention proposes a visual place recognition implementation method based on convolutional neural networks and sequence matching; the overall framework is shown in Fig. 1 and comprises the following steps:
Step 1: for visual place recognition, the input raw data are images, so the problem resembles image retrieval: the picture of the current scene is compared with the pictures in the history picture library, the history picture most similar to the current-scene picture is selected, and a threshold then decides whether the two belong to the same place. First, the picture is fed into a pre-trained convolutional neural network to extract convolutional-layer features. The pre-trained network used here is Places205-AlexNet, obtained by training the deep convolutional neural network AlexNet on the place dataset Places205. The model parameters are shown in the following table:
Feature dimension parameters of each layer of the AlexNet convolutional neural network (table content not reproduced in this text).
In this embodiment, the output of the fifth layer of the deep convolutional neural network, a pooling layer, is used as the image feature; its dimensions are (6, 6, 256). Note that the proposed method is not limited to Places205-AlexNet: any pre-trained convolutional neural network can be used to extract picture features. When the fifth layer of the network used is a pooling layer, that layer is extracted as the picture feature; otherwise the last convolutional layer is extracted as the picture feature. When pre-training the convolutional neural network, a large picture dataset is used as training data, in which the photos should be centered on place scenes, that is, they should characterize the information of a place rather than being centered on people, cars, or other objects. In addition, each place in the dataset should include several pictures acquired under different environmental conditions, for example pictures of the same place acquired at noon, at dusk, and at night.
After the convolutional features of the pictures are extracted, the non-overlapping region between two pictures is estimated from the similarity of their local features. As shown in Fig. 2, the specific operations are as follows:
a) Crop the central region Fc of picture A's feature map. In this embodiment the size of the cropped region is (2, 2, 256); in general, a region whose height and width are one third of those of the global feature can serve as the central region, positioned at the center of the feature map.
b) Slide the local feature Fc over the feature map of picture B, compute the similarity between Fc and each local region of picture B's feature map, and find the local region Rm with maximum similarity to Fc. The similarity formula between local features is:

Sc = cos(Fc, Rl)

where Rl is the l-th local region on the feature map of picture B.
c) Align Fc on picture A's feature map with Rm on picture B's feature map to find the non-overlapping region between the two pictures.
After the non-overlapping-region features of the two pictures have been found, those features are discarded and the similarity distance between the pictures is computed from the remaining features:

D = 1 - cos(Fa, Fb)

where Fa is the convolutional feature of query picture A after removal of the non-overlapping-region features, Fb is the convolutional feature of library picture B after removal of the non-overlapping-region features, and D is the similarity distance between the two.
Step 2: construct the similarity distance matrix of the picture sequences using the similarity-distance computation between single pictures. As shown in Fig. 3, let the history picture library be the first picture sequence, containing n pictures, and let the query picture sequence be the second picture sequence, containing 10 pictures; the similarity distance matrix then has dimensions (n, 10). The similarity distance between the i-th picture of the first sequence and the j-th picture of the second sequence is the element in row i, column j of the matrix, so each element represents the similarity distance between one query picture and one picture in the library; that is, the element in row i, column j of the similarity distance matrix is the similarity distance between the i-th picture in the history picture library and the j-th picture of the current query sequence. Each time the robot acquires 10 pictures of the current scene, these and the n pictures in the history picture library form a similarity distance matrix of dimensions (n, 10). This embodiment then applies a normalization method, non-minimum suppression, to enhance the contrast between elements of the similarity matrix. Its specific steps are: a) sort the elements of each column of the similarity distance matrix M, retain only the values of the smallest σ × n elements, and assign 1 to the remaining elements, where σ is a hyperparameter with values between 0 and 1; b) apply max-min normalization to the smallest σ × n elements of each column of M. The mathematical expression of non-minimum-suppression normalization is:

s_j = min(M_j)
M'(i, j) = (M(i, j) - s_j) / (l_j - s_j)

where M(i, j) is the element in row i, column j of the similarity matrix M, M'(i, j) is the element in row i, column j of the normalized matrix M' (1 ≤ i ≤ n, 1 ≤ j ≤ 10), M_j denotes the j-th column of M, s_j is the minimum of column j of the similarity distance matrix M, and l_j is the maximum among the smallest σ × n elements of M_j.
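The column-wise non-minimum-suppression normalization can be sketched in NumPy as follows. The rounding of σ × n to an integer, the tie-breaking via `argsort`, and the small epsilon guarding division by zero are illustrative assumptions:

```python
import numpy as np

def non_min_suppression_normalize(M, sigma):
    """Keep only the smallest ceil(sigma * n) entries of each column of
    the (n, m) distance matrix M, min-max normalize them within the
    column, and set every other entry to 1."""
    n, m = M.shape
    k = max(1, int(np.ceil(sigma * n)))
    Mp = np.ones_like(M, dtype=float)
    for j in range(m):
        idx = np.argsort(M[:, j])[:k]          # indices of the k smallest
        s_j = M[idx, j].min()                   # column minimum
        l_j = M[idx, j].max()                   # max among retained entries
        Mp[idx, j] = (M[idx, j] - s_j) / (l_j - s_j + 1e-12)
    return Mp
```

Suppressed entries and the worst retained entry both end up near 1, while the best match in each column is driven to 0, which is the contrast enhancement the embodiment describes.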
Step 3: in this step, this embodiment convolves a convolution operator, the matching-sequence detection operator, with the contrast-enhanced similarity distance matrix M' to compute the similarity distance between each picture sequence in the history database and the picture sequence of the current scene. The matching-sequence detection operator consists of two parts, a summation operator and a difference operator. The summation operator H is as follows:
The difference operator B is as follows:
The matching-sequence detection operator C is the weighted sum of the two:

C = θB + (1 - θ)H
As shown in Fig. 4, the matching-sequence detection operator C slides over M' as a convolution, computing the similarity distance s_i between each corresponding picture sequence in the history picture library and the current-scene picture sequence. Let s = [s1, s2, s3, s4, ..., sn]; the picture sequence in the history picture library corresponding to the smallest similarity distance s_r in s is predicted to belong to the same place as the current-scene picture sequence. That is:

r = argmin(s)
The r-th picture sequence in the history picture library is predicted to belong to the same place as the current-scene picture sequence, with confidence s_r. When s_r satisfies the threshold requirement, the method outputs the place of the r-th picture sequence in the history picture library; otherwise it outputs that the current location is a new place (no matching place can be found in the history picture library).
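Because the matrices of the summation operator H and the difference operator B are not reproduced in this text, the sketch below treats the combined operator C as an arbitrary (k, m) kernel. The vertical sliding convolution over M' and the argmin decision follow the description above; the kernel contents, the function names, and the score-below-threshold convention are assumptions:

```python
import numpy as np

def sequence_scores(M_norm, C):
    """Slide the matching-sequence detection operator C (k, m) down the
    rows of the normalized distance matrix M' (n, m); s[i] is the
    response at row offset i."""
    n, _ = M_norm.shape
    k = C.shape[0]
    return np.array([np.sum(C * M_norm[i:i + k, :]) for i in range(n - k + 1)])

def predict_place(s, threshold):
    """Return the index r = argmin(s) of the best-matching history
    sequence, or -1 (new place) when its score fails the threshold."""
    r = int(np.argmin(s))
    return r if s[r] <= threshold else -1
```

With an all-ones averaging kernel this reduces to the summation part alone; the difference part would additionally reward the diagonal alignment of matching sequences.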

Claims (9)

1. An implementation method, based on convolutional neural networks and sequence matching, that enables a robot to perform visual place recognition in a changing environment, characterized by comprising the following steps:
Step 1: shoot several pictures of the position where place recognition is required to form a query picture sequence, and feed each picture of the sequence into a convolutional neural network to extract its convolutional feature; form a comparison picture sequence from the original pictures shot at the corresponding locations and feed each of its pictures into the same convolutional neural network to extract its convolutional feature; then crop the central region of the convolutional feature and compare this region with the convolutional feature of every picture in the comparison sequence to find the corresponding region with maximum similarity; align the central region with the corresponding region to obtain the position of the non-overlapping region between the pictures; remove the features of the non-overlapping region and compute the similarity distance between the pictures using the features of the remaining overlapping region;
Step 2: after computing the similarity distance between every query picture and every comparison picture, build a matrix whose dimensions correspond to the numbers of pictures in the two sequences, with each pairwise similarity distance as the element at the corresponding position, to construct the similarity distance matrix of the picture sequences, and then normalize the similarity distance matrix;
Step 3: design a matching-sequence detection operator and slide it over the similarity distance matrix as a convolution to compute a matching score between each pair of picture sequences; when the matching score satisfies a threshold requirement, judge that the corresponding picture sequences belong to the same place, otherwise judge that they belong to different places.
2. The method according to claim 1, characterized in that the convolutional-feature extraction described in Step 1 feeds the picture into a pre-trained convolutional neural network; when the fifth layer of the convolutional neural network used is a pooling layer, that layer is extracted as the picture feature, otherwise the last convolutional layer is extracted as the picture feature.
3. The method according to claim 2, characterized in that the pre-trained convolutional neural network is trained on place-scene pictures, where a place-scene picture takes a place scene as its subject, and the training set includes pictures of the same place taken at different times and under different environmental conditions.
4. The method according to claim 1, characterized in that the steps of estimating the position of the non-overlapping region between the query picture and a picture in the library in Step 1 are:
a) denote the query picture as picture A and the picture in the library as picture B, and crop the central region Fc of picture A's convolutional feature;
b) slide Fc over the convolutional feature of picture B, compute the similarity between Fc and each local region of picture B's convolutional feature, and find the local region Rm with maximum similarity to Fc;
c) align the central region Fc of picture A's convolutional feature with the local region Rm of picture B's convolutional feature to determine the non-overlapping region between the two features;
d) discard the features of the non-overlapping region and compute the similarity distance between the two pictures using the features of the overlapping region.
5. The method according to claim 4, characterized in that the mathematical expression of the similarity between the central region of the query picture's convolutional feature and a local region of the convolutional feature of a picture in the library in step b) is:

Sc = cos(Fc, Rl)

where Fc is the central region of the query picture's convolutional feature, Rl is the l-th local region of the convolutional feature of the picture in the library, and Sc is the similarity between the two.
6. The method according to claim 4, characterized in that the mathematical expression of the similarity distance between the query picture and a picture in the library in step d) is:

D = 1 - cos(Fa, Fb)

where Fa is the query picture's convolutional feature after removal of the non-overlapping-region features, Fb is the library picture's convolutional feature after removal of the non-overlapping-region features, and D is the similarity distance between the two.
7. The method according to claim 1, characterized in that the similarity distance matrix of the picture sequences in Step 2 is built according to the following rule:
let the history picture library be the first picture sequence, containing n pictures, and the query picture sequence be the second picture sequence, containing 10 pictures; the similarity distance matrix then has dimensions (n, 10); the similarity distance between the i-th picture of the first sequence and the j-th picture of the second sequence is the element in row i, column j of the matrix, so each element of the similarity distance matrix represents the similarity distance between one query picture and one picture in the library.
8. The method according to claim 1, characterized in that the specific steps of the non-minimum-suppression normalization in Step 2 are:
a. sort the elements of each column of the similarity distance matrix M in ascending order, retain the values of the smallest σ × n elements, and assign 1 to the remaining elements; σ is the first hyperparameter, with values between 0 and 1, and n is the size of the history picture dataset;
b. normalize the smallest σ × n elements of each column of M; with n pictures in the history picture library and a query picture sequence of length m, M has dimensions (n, m), and the mathematical expression of the non-minimum-suppression normalization is:

s_j = min(M_j)
M'(i, j) = (M(i, j) - s_j) / (l_j - s_j)

where M(i, j) is the element in row i, column j of the similarity matrix M, M'(i, j) is the element in row i, column j of the normalized matrix M' (1 ≤ i ≤ n, 1 ≤ j ≤ m), M_j denotes the j-th column of M, s_j is the minimum of column j of the similarity distance matrix M, and l_j is the maximum among the smallest σ × n elements of M_j.
9. The method according to claim 1, wherein the mathematical expression of the matching sequence detection operator C in step 3 is as follows, where the query picture sequence contains 10 pictures:
θ is the second hyper-parameter, taking a value between 0 and 1.
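The expression for the operator C itself is not reproduced in this excerpt. As a hedged sketch only, the code below shows one common form of sequence detection over a normalized (n, m) distance matrix: a SeqSLAM-style score summed along a unit-slope diagonal, with θ used as an acceptance threshold on the best mean score. The function name, the diagonal alignment model, and the use of θ as a threshold are all assumptions, not the patent's definition.

```python
import numpy as np

def best_sequence_match(M_norm, theta):
    """Score every candidate alignment of the m-picture query sequence
    against the n-picture library by averaging the normalised distances
    along a unit-slope diagonal of M_norm, and accept the best candidate
    only if its mean score is below theta (0 < theta < 1)."""
    n, m = M_norm.shape
    best_i, best_score = -1, np.inf
    for i in range(n - m + 1):           # candidate starting index in the library
        score = sum(M_norm[i + j, j] for j in range(m)) / m
        if score < best_score:
            best_i, best_score = i, score
    if best_score < theta:
        return best_i, best_score
    return None, best_score              # no match confident enough
```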
CN201910544572.9A 2019-06-21 2019-06-21 Implementation method for enabling robot to perform visual place recognition in variable environment based on convolutional neural network and sequence matching Active CN110427953B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910544572.9A CN110427953B (en) 2019-06-21 2019-06-21 Implementation method for enabling robot to perform visual place recognition in variable environment based on convolutional neural network and sequence matching

Publications (2)

Publication Number Publication Date
CN110427953A true CN110427953A (en) 2019-11-08
CN110427953B CN110427953B (en) 2022-11-29

Family

ID=68409340

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910544572.9A Active CN110427953B (en) 2019-06-21 2019-06-21 Implementation method for enabling robot to perform visual place recognition in variable environment based on convolutional neural network and sequence matching

Country Status (1)

Country Link
CN (1) CN110427953B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778604A * 2015-12-15 2017-05-31 Xidian University Pedestrian re-identification method based on matching convolutional neural networks
WO2018076212A1 * 2016-10-26 2018-05-03 Institute of Automation, Chinese Academy of Sciences Scene semantic segmentation method based on deconvolutional neural networks
CN108960331A * 2018-07-10 2018-12-07 Chongqing University of Posts and Telecommunications Pedestrian re-identification method based on pedestrian image feature clustering

Also Published As

Publication number Publication date
CN110427953B (en) 2022-11-29

Similar Documents

Publication Publication Date Title
CN106127204B Multi-directional meter-reading region detection algorithm based on fully convolutional neural networks
CN105139015B Water body extraction method for remote sensing images
CN110008913A Pedestrian re-identification method based on fusion of pose estimation and a viewpoint mechanism
CN109685013B Method and device for detecting head key points in human body posture recognition
CN108596010B Implementation method of a pedestrian re-identification system
CN104599286B Feature tracking method and device based on optical flow
CN107424161B Coarse-to-fine indoor scene image layout estimation method
CN109635695B Pedestrian re-identification method based on a triplet convolutional neural network
CN111507296A Intelligent illegal-building extraction method based on unmanned aerial vehicle remote sensing and deep learning
CN104850857B Cross-camera pedestrian target matching method based on visual-space saliency constraints
CN106030610A Real-time 3D gesture recognition and tracking system for mobile devices
CN105279769A Hierarchical particle filter tracking method combining multiple features
CN114187665A Multi-person gait recognition method based on human skeleton heat maps
CN107230219A Method for finding and following a target person with a monocular robot
CN109389156A Training method and device for an image positioning model, and image positioning method
CN110533661A Adaptive real-time loop-closure detection method based on cascaded image features
CN108089695A Method and apparatus for controlling a movable device
CN111695460B Pedestrian re-identification method based on a local graph convolution network
CN112989889A Gait recognition method based on pose guidance
CN113989333A Pedestrian tracking method based on face and head-and-shoulder information
CN113378691B Smart home management system and method based on real-time user behavior analysis
Liang et al. Egocentric hand pose estimation and distance recovery in a single RGB image
CN117422963A Cross-modal place recognition method based on high-dimensional feature mapping and feature aggregation
CN113486751A Pedestrian feature extraction method based on graph convolution and edge-weight attention
CN116912763A Multi-pedestrian re-identification method fusing gait and face modalities

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant