CN109785387A - Loop closure detection method and apparatus for a robot, and robot - Google Patents
- Publication number: CN109785387A
- Application number: CN201811543605.XA
- Authority: CN (China)
- Prior art keywords: view image, image sequence, robot
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Classification landscape: Image Analysis (AREA)
Abstract
The present application is applicable to the field of image processing technology, and provides a loop closure detection method and apparatus for a robot, a robot, and a computer-readable storage medium, comprising: obtaining a view image sequence of the robot; extracting an image feature sequence of the view image sequence through a trained stacked denoising sparse autoencoder (SDA) network; searching a pre-created tree structure for the N nearest neighbors of each view image in the view image sequence and computing the distance from each view image to its N nearest neighbors, wherein the tree structure pre-stores the image features of a map and N is a positive integer; storing the distances from each view image to its N nearest neighbors in a sparse difference matrix; searching the N positions of the sparse difference matrix for the best matching position of each view image; and, if the distance between the best matching position found and the position of a true loop closure is within a preset offset distance, determining that the view image sequence is a loop closure. The above method improves the robustness of loop closure detection to large viewpoint changes and strong environmental changes.
Description
Technical field
The present application belongs to the field of image processing technology, and in particular relates to a loop closure detection method and apparatus for a robot, a robot, and a computer-readable storage medium.
Background
Loop closure detection aims to allow a robot to recognize places it has visited before. If a loop closure is correctly detected, the robot can relocalize itself, helping subsequent mapping and registration algorithms obtain more accurate and consistent results. Loop closure detection is similar to scene recognition in that both require extracting suitable features from scene images; the difference is that the data processed by loop closure detection are continuous video frames without class labels.
Existing loop closure detection methods fall broadly into three categories:
(1) Many state-of-the-art appearance-based loop closure detection algorithms use the bag-of-words (BoW) model, in which visual feature descriptors are clustered into a set called a "dictionary". When a new observation arrives, its visual features are extracted. Other methods using handcrafted local features include FV, VLAD, the global descriptor GIST, and BoVW, which are also commonly used in SLAM and have achieved good results. However, these are hand-designed features; they are very sensitive to illumination changes in the environment, and under complex illumination their loop closure detection success rate is not high.
(2) Deep learning methods attempt to learn data representations directly from raw sensor data through multi-layer neural networks, learning these features during training. A large number of studies show that methods based on convolutional neural networks (CNNs) outperform methods using handcrafted features; however, CNNs rely on supervised learning and require a large amount of labeled data for training.
(3) Methods based on image sequences can perform place recognition under extreme perceptual changes of the environment, such as day versus night, sunny versus rainy, or summer versus winter. However, such methods depend heavily on detailed sequence matching, which is computationally expensive and unsuitable for handling large-scale maps.
In practice, loop closure detection encounters various situations such as viewpoint changes, seasonal changes, and day-night differences, but existing loop closure detection techniques struggle to satisfy the requirements of viewpoint invariance and environment invariance at the same time.
Therefore, a new method is desired to solve the above technical problems.
Summary of the invention
In view of this, embodiments of the present application provide a loop closure detection method and apparatus for a robot, a robot, and a computer-readable storage medium, to solve the problem that loop closure detection techniques in the prior art struggle to satisfy the requirements of viewpoint invariance and environment invariance simultaneously.
A first aspect of the embodiments of the present application provides a loop closure detection method for a robot, comprising:
obtaining a view image sequence of the robot;
extracting an image feature sequence of the view image sequence through a trained stacked denoising sparse autoencoder (SDA) network;
searching a pre-created tree structure for the N nearest neighbors of each view image in the view image sequence, and computing the distance from each view image to its N nearest neighbors, wherein the tree structure pre-stores the image features of a map and N is a positive integer;
storing the distances from each view image to its N nearest neighbors in a sparse difference matrix;
searching the N positions of the sparse difference matrix for the best matching position of each view image; and
if the distance between the best matching position found and the position of a true loop closure is within a preset offset distance, determining that the view image sequence is a loop closure.
A second aspect of the embodiments of the present application provides a loop closure detection apparatus for a robot, comprising:
a view image acquisition unit, configured to obtain the view image sequence of the robot;
an image feature sequence extraction unit, configured to extract the image feature sequence of the view image sequence through the trained stacked denoising sparse autoencoder (SDA) network;
a nearest neighbor search unit, configured to search the created tree structure for the N nearest neighbors of each view image in the view image sequence and to compute the distance from each view image to its N nearest neighbors, wherein the tree structure pre-stores the image features of the map and N is a positive integer;
a data storage unit, configured to store the distances from each view image to its N nearest neighbors in the sparse difference matrix;
a best matching position search unit, configured to search the N positions of the sparse difference matrix for the best matching position of each view image; and
a loop closure determination unit, configured to determine that the view image sequence is a loop closure if the distance between the best matching position found and the position of a true loop closure is within the preset offset distance.
A third aspect of the embodiments of the present application provides a robot, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the loop closure detection method for a robot described above.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the loop closure detection method for a robot described above.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: because the SDA network, an unsupervised network structure, is used to extract the image feature sequence, and an unsupervised network structure can be trained without a labeled dataset, the training process is easier; in addition, the best matching position is determined by a sequence matching method, which significantly improves detection efficiency and is suitable for large-scale, complex outdoor environments. In summary, because the present application combines an unsupervised deep network with sequence matching when detecting loop closures, the robustness of loop closure detection to large viewpoint changes and strong environmental changes is improved.
Description of the drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of a loop closure detection method for a robot provided by an embodiment of the present application;
Fig. 2 is a schematic structural diagram of a loop closure detection apparatus for a robot provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of a robot provided by an embodiment of the present application.
Specific embodiment
In the following description, for purposes of illustration rather than limitation, specific details such as particular system structures and techniques are set forth in order to provide a thorough understanding of the embodiments of the present application. However, it will be clear to those skilled in the art that the present application may also be implemented in other embodiments without these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to illustrate the technical solutions described in the present application, specific embodiments are described below.
It should be understood that, when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, integers, steps, operations, elements, and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or sets thereof.
It should also be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will be further understood that the term "and/or" used in this specification and the appended claims refers to and encompasses any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrases "if it is determined" and "if [the described condition or event] is detected" may be interpreted, depending on the context, to mean "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
Embodiment one:
Fig. 1 shows a flowchart of a loop closure detection method for a robot provided by an embodiment of the present application, detailed as follows:
Step S11: obtain the view image sequence of the robot.
In this step, the view image shot by the robot at the current time and the view images shot within a period of time before the current time are obtained; together, these images constitute the robot's view image sequence. Since the embodiment of the present application detects loop closure positions, the length of the view image sequence is greater than 0 and less than a preset length threshold.
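The bounded, time-windowed image sequence described in this step can be sketched as follows. This is a minimal illustration; the maximum length and the string stand-ins for camera frames are assumptions, not values from the patent.

```python
from collections import deque

MAX_LEN = 10  # assumed preset length threshold for the view image sequence

class ViewImageBuffer:
    """Keeps the view images shot over a recent time window, oldest first."""

    def __init__(self, max_len=MAX_LEN):
        # a deque with maxlen drops the oldest frame automatically
        self.frames = deque(maxlen=max_len)

    def add(self, frame):
        self.frames.append(frame)

    def sequence(self):
        # The sequence handed to loop closure detection: non-empty and
        # never longer than the preset length threshold.
        return list(self.frames)

buf = ViewImageBuffer()
for t in range(25):  # simulate 25 captured frames
    buf.add(f"frame-{t}")
seq = buf.sequence()  # the 10 most recent frames, oldest first
```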
Step S12: extract the image feature sequence of the view image sequence through the trained stacked denoising sparse autoencoder (SDA) network.
In some embodiments, in order to improve the extraction speed of the image feature sequence, the method comprises, before step S12:
A1: dividing each view image in the view image sequence into small blocks of a preset size;
A2: performing the following steps for each view image in the view image sequence: detecting, among the small blocks, the top N key points with the largest feature response through a sparse key point detection algorithm, and processing the N key points into N image block vectors.
Accordingly, step S12 comprises:
performing the following step for each view image in the view image sequence: extracting the image features of the N image block vectors through the trained SDA network, the image features of each view image forming the image feature sequence of the view image sequence.
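Steps A1 and A2 can be sketched as below. The patent does not name a concrete sparse key point detection algorithm, so local variance is used here as a stand-in feature response; the block size g and the value of N are illustrative.

```python
import numpy as np

def top_n_block_vectors(gray, g=8, n=4):
    """Split a grayscale image into g x g blocks, score each block with a
    stand-in feature response (local variance; the patent's sparse key
    point detector is unspecified), and return the n highest-response
    blocks flattened into vectors."""
    h, w = gray.shape
    blocks = []
    for r in range(0, h - g + 1, g):
        for c in range(0, w - g + 1, g):
            blocks.append(gray[r:r + g, c:c + g])
    responses = np.array([b.var() for b in blocks])
    order = np.argsort(responses)[::-1][:n]  # descending feature response
    return np.stack([blocks[i].ravel().astype(np.float64) for i in order])

rng = np.random.default_rng(0)
img = rng.random((32, 32))               # stand-in grayscale view image
X = top_n_block_vectors(img, g=8, n=4)   # input matrix of shape N x g^2
```

Each row of X is one flattened g-by-g block, so X has the N x g² shape expected as SDA input.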
In this embodiment, the SDA is an unsupervised neural network that can learn a compressed representation of the input data. It consists of several end-to-end layers, each of which is a denoising autoencoder (DA). In the SDA, the output of each layer (each DA) serves as the input of the next layer (the next DA). A single DA consists of three layers: (1) an input layer x; (2) a hidden layer h; (3) a reconstruction layer y. Each layer contains many fully connected nodes, which are the basic elements of the network. Each node computes a simple nonlinear function (usually the sigmoid function) of its connected inputs. Let x be the input and y the output; the function can then be written as the following formula (1), where w and b are the weight and bias parameters contained in a single node. These w and b are the targets of SDA training; the parameters (w, b) of a trained SDA can extract useful information from the input data.
y = f(w1·x1 + w2·x2 + … + wn·xn + b), with f the sigmoid function (1)
where n refers to the number of inputs of the node.
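A toy single denoising autoencoder layer in the spirit of the DA described above: the input is corrupted, encoded into the hidden layer h, reconstructed as y, and (w, b) are learned by gradient descent on the reconstruction error. Layer sizes, tied weights, the noise level, and the learning rate are illustrative assumptions, not values from the patent.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DenoisingAutoencoder:
    """One DA layer: input x -> hidden h -> reconstruction y. Every node
    applies formula (1): a sigmoid of the weighted sum of its inputs
    plus a bias."""

    def __init__(self, n_in, n_hidden, rng):
        self.W = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b_h = np.zeros(n_hidden)
        self.b_y = np.zeros(n_in)

    def forward(self, x):
        h = sigmoid(x @ self.W + self.b_h)     # hidden layer h
        y = sigmoid(h @ self.W.T + self.b_y)   # reconstruction layer y
        return h, y

    def train_step(self, x, rng, noise=0.3, lr=0.1):
        x_c = x * (rng.random(x.shape) > noise)  # corrupt ("destroy") input
        h, y = self.forward(x_c)
        d_y = (y - x) * y * (1.0 - y)            # squared-error backprop
        d_h = (d_y @ self.W) * h * (1.0 - h)
        self.W -= lr * (x_c.T @ d_h + d_y.T @ h)  # tied-weight gradient
        self.b_y -= lr * d_y.sum(axis=0)
        self.b_h -= lr * d_h.sum(axis=0)
        return float(((y - x) ** 2).mean())

rng = np.random.default_rng(1)
da = DenoisingAutoencoder(n_in=64, n_hidden=16, rng=rng)
x = rng.random((8, 64))                          # 8 image-block vectors
losses = [da.train_step(x, rng) for _ in range(300)]
h, _ = da.forward(x)                             # hidden layer as features
```

Stacking several such layers, each trained on the hidden output of the previous one, yields the SDA; the final hidden layer then serves as the feature output.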
The original grayscale image (if the obtained view image is not a grayscale image, it is first converted to a grayscale image before further processing) is then divided into small blocks of size g × g. Combining the useful information obtained from the input data, the key points of the divided blocks are detected by the sparse key point detection algorithm and then filtered so as to cover the whole image. The detected key points are sorted in descending order of feature response, the top N key points are selected, and they are adjusted into image blocks. The image blocks are vectorized and then fed into the SDA neural network. An input image thus yields N blocks, forming an input matrix X of size N × g². The SDA then corrupts the input and is trained to reconstruct it. The final hidden layer of the SDA serves as the feature output layer, with NF dimensions. Therefore, whenever there is a new image, features Z of size N × NF can be obtained from the SDA.
Step S13: search the created tree structure for the N nearest neighbors of each view image in the view image sequence, and compute the distance from each view image to its N nearest neighbors, the tree structure pre-storing the image features of the map, N being a positive integer.
In this step, because the image features of the map are stored in a tree data structure, the speed of searching for the N nearest neighbors of a view image can be improved.
In this step, in order to improve the search speed, the Fast Library for Approximate Nearest Neighbors (FLANN) is used to search for the N nearest neighbors of a view image. FLANN can control the accuracy of its results through its precision parameter: for a given dataset and target precision, it automatically selects approximate nearest neighbor (ANN) parameters, using either randomized k-d trees (a k-d tree is a binary tree in which each node is a k-dimensional point) or a hierarchical k-means tree algorithm. When traversing n images, each image can find its approximate nearest neighbors in the map in O(log(n)) time, and the distance values returned by the algorithm are used in place of the initial values when computing them.
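As a sketch of this tree-based neighbor lookup, SciPy's cKDTree is used below as a stand-in for FLANN (FLANN would additionally auto-select its index type and parameters for a target precision); the feature dimensionality and map size are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
map_features = rng.random((500, 16))  # stand-in pre-stored SDA map features
tree = cKDTree(map_features)          # built once, before detection

# a query view image feature that lies very close to map entry 42
query = map_features[42] + 0.001
dists, idxs = tree.query(query, k=5)  # N = 5 nearest neighbors, sorted
```

The tree is built once over the pre-stored map features; each query then costs roughly O(log n) rather than a linear scan.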
In some embodiments, in order to improve the consistency of the subsequent view image sequence matching, the image features of the map pre-stored in the tree structure are also extracted by the SDA network.
Because the image features of the map and the image feature sequence of the obtained view image sequence are both extracted by the SDA network, this helps improve the speed of the subsequent matching between the two and reduces its difficulty. In this embodiment, the map refers to the view image sequence obtained by the robot before the current time; preferably, it is the view image sequence composed of all view images obtained from the start-up of the robot up to the current time.
Step S14: store the distances from each view image to its N nearest neighbors in the sparse difference matrix.
In this step, the constructed difference matrix is sparse; along each column (or each row) it contains the distances to the (approximate) N nearest neighbors. The distance information from the current view image to its N neighbors is stored in the sparse difference matrix D.
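One way to realize the sparse difference matrix D is to fill in only the N nearest neighbor distances per query column and leave every other entry at a sentinel value. The dense array with infinity sentinels below is an illustrative layout choice, not the patent's data structure.

```python
import numpy as np

def build_sparse_difference_matrix(neighbor_ids, neighbor_dists, n_map):
    """For each query view image t, store only the distances to its N
    nearest map neighbors; all other entries stay at infinity, keeping
    the matrix D effectively sparse (rows: map images, columns: queries)."""
    n_query = len(neighbor_ids)
    D = np.full((n_map, n_query), np.inf)
    for t, (ids, dists) in enumerate(zip(neighbor_ids, neighbor_dists)):
        D[ids, t] = dists
    return D

# toy example: 3 query images, a map of 6 images, N = 2 neighbors each
ids = [[0, 1], [1, 2], [2, 3]]
dd  = [[0.1, 0.4], [0.2, 0.5], [0.1, 0.3]]
D = build_sparse_difference_matrix(ids, dd, n_map=6)
```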
Step S15: search the N positions of the sparse difference matrix for the best matching position of each view image.
In some embodiments, step S15 comprises:
computing the cumulative difference values of the traversed image paths according to the sequence length of the view image sequence and the most recent acquisition time of the view images in the view image sequence, and taking the map image sequence corresponding to the image path with the smallest cumulative difference value as the best matching position searched from the N positions of the sparse difference matrix.
In this embodiment, the columns of the sparse difference matrix D are difference vectors D^(T). A series of map view images is searched along the rows of D, where ds is the length of the obtained view image sequence and T is the current time. In order to identify the sequence in the map that matches the view image sequence currently obtained by the robot, a search is performed in a space M composed of image difference vectors. The cumulative difference value S of the traversed image path (or trajectory) is computed by the following formulas (3) and (4):
S = Σ_{t = T−ds+1 .. T} D_j^t (3)
j = s + V(t − T) (4)
where D_j^t is the difference value between view image t of the robot and view image j of the map, V is the path velocity, and s is the serial number of the map view image at which the computed sequence difference value starts.
In order to efficiently find loop closure nodes that are consistent continuously or in time, an exhaustive search of the sparse difference matrix is not performed here; instead, a greedy motion description technique is used. For the view image sequence of the current robot, the best matching image sequence is found in the map. For each possible loop closure node, the difference values of different trajectories are computed, where each trajectory corresponds to a velocity or motion model. If a node is indeed a loop closure node, the robot should move along the trajectory defined by the velocities of the preceding nodes in the view image sequence. That is, the best-fit trajectory is the trajectory with the smallest difference value. In some embodiments, the current position of the robot is updated using a motion model and a velocity model, the motion model or velocity model corresponding to the best-fit trajectory.
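The velocity-swept trajectory search using formulas (3) and (4) can be sketched as follows: for every candidate map position s and path velocity V, the difference values along the implied track through D are accumulated, and the track with the smallest cumulative difference wins. The candidate velocity set is an illustrative assumption.

```python
import numpy as np

def best_trajectory(D, ds, velocities=(0.8, 1.0, 1.2)):
    """Sweep map start positions s and velocities V, accumulating
    S = sum_t D[j, t] with j = s + V * (t - T) (formulas (3) and (4));
    return the smallest cumulative difference and its (s, V)."""
    n_map, n_query = D.shape
    T = n_query - 1                       # current time = last column
    best = (np.inf, None, None)
    for s in range(n_map):
        for V in velocities:
            S = 0.0
            for t in range(T - ds + 1, T + 1):
                j = int(round(s + V * (t - T)))
                if 0 <= j < n_map:
                    S += D[j, t]
                else:                     # track leaves the map: reject
                    S = np.inf
                    break
            if S < best[0]:
                best = (S, s, V)
    return best

# toy difference matrix with a zero-cost diagonal track ending at map row 9
rng = np.random.default_rng(3)
D = rng.random((20, 5)) + 1.0
for t in range(5):
    D[5 + t, t] = 0.0                     # velocity-1 track: rows 5..9
score, s_best, v_best = best_trajectory(D, ds=5)
```

A loop closure would then be declared only if the winning position also lies within the preset offset distance of the true loop closure position, as in step S16.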
Step S16: if the distance between the best matching position found and the position of a true loop closure is within the preset offset distance, determine that the view image sequence is a loop closure.
Conversely, if the distance between the best matching position found and the position of a true loop closure is outside the preset offset distance, it is determined that the view image sequence is not a loop closure.
In the embodiment of the present application, the image feature sequence of the view image sequence is extracted through the trained stacked denoising sparse autoencoder (SDA) network; the N nearest neighbors of each view image in the view image sequence are searched in the created tree structure, and the distance from each view image to its N nearest neighbors is computed; the distances from each view image to its N nearest neighbors are then stored in the sparse difference matrix; the best matching position of each view image is searched from the N positions of the sparse difference matrix; and if the distance between the best matching position found and the position of a true loop closure is within the preset offset distance, the view image sequence is determined to be a loop closure. Because the SDA network, an unsupervised network structure, is used to extract the image feature sequence, and an unsupervised network structure can be trained without a labeled dataset, the training process is easier; in addition, the best matching position is determined by the sequence matching method, which significantly improves detection efficiency and is suitable for large-scale, complex outdoor environments. In summary, because the present application combines an unsupervised deep network with sequence matching when detecting loop closures, the robustness of loop closure detection to large viewpoint changes and strong environmental changes is improved.
Embodiment two:
Fig. 2 shows a schematic structural diagram of a loop closure detection apparatus for a robot provided by an embodiment of the present application. For convenience of explanation, only the parts relevant to the embodiment of the present application are shown.
The loop closure detection apparatus of the robot comprises: a view image acquisition unit 21, an image feature sequence extraction unit 22, a nearest neighbor search unit 23, a data storage unit 24, a best matching position search unit 25, and a loop closure determination unit 26. Wherein:
the view image acquisition unit 21 is configured to obtain the view image sequence of the robot;
the image feature sequence extraction unit 22 is configured to extract the image feature sequence of the view image sequence through the trained stacked denoising sparse autoencoder (SDA) network.
In some embodiments, in order to improve the extraction speed of the image feature sequence, the loop closure detection apparatus of the robot further comprises:
an image blocking unit, configured to divide each view image in the view image sequence into small blocks of a preset size; and
a key point detection unit, configured to perform the following steps for each view image in the view image sequence: detecting, among the small blocks, the top N key points with the largest feature response through the sparse key point detection algorithm, and processing the N key points into N image block vectors.
Accordingly, the image feature sequence extraction unit 22 is specifically configured to:
perform the following step for each view image in the view image sequence: extract the image features of the N image block vectors through the trained SDA network, the image features of each view image forming the image feature sequence of the view image sequence.
The structure of the SDA network is as described in the foregoing embodiment and is not repeated here.
The nearest neighbor search unit 23 is configured to search the created tree structure for the N nearest neighbors of each view image in the view image sequence and to compute the distance from each view image to its N nearest neighbors, the tree structure pre-storing the image features of the map, N being a positive integer.
In order to improve the search speed, FLANN is used to search for the N nearest neighbors of a view image.
In some embodiments, the image features of the map pre-stored in the tree structure are extracted by the SDA network.
The data storage unit 24 is configured to store the distances from each view image to its N nearest neighbors in the sparse difference matrix.
The best matching position search unit 25 is configured to search the N positions of the sparse difference matrix for the best matching position of each view image.
In some embodiments, the best matching position search unit 25 is specifically configured to: compute the cumulative difference values of the traversed image paths according to the sequence length of the view image sequence and the most recent acquisition time of the view images in the view image sequence, and take the map image sequence corresponding to the image path with the smallest cumulative difference value as the best matching position searched from the N positions of the sparse difference matrix.
The loop closure determination unit 26 is configured to determine that the view image sequence is a loop closure if the distance between the best matching position found and the position of a true loop closure is within the preset offset distance.
In the embodiment of the present application, because the SDA network, an unsupervised network structure, is used to extract the image feature sequence, and an unsupervised network structure can be trained without a labeled dataset, the training process is easier; in addition, the best matching position is determined by the sequence matching method, which significantly improves detection efficiency and is suitable for large-scale, complex outdoor environments. In summary, because the present application combines an unsupervised deep network with sequence matching when detecting loop closures, the robustness of loop closure detection to large viewpoint changes and strong environmental changes is improved.
It should be understood that the serial numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Embodiment three:
Fig. 3 is a schematic diagram of a robot provided by an embodiment of the present application. As shown in Fig. 3, the robot 3 of this embodiment comprises: a processor 30, a memory 31, and a computer program 32 stored in the memory 31 and executable on the processor 30. When executing the computer program 32, the processor 30 implements the steps in each of the above embodiments of the loop closure detection method for a robot, such as steps S11 to S16 shown in Fig. 1. Alternatively, when executing the computer program 32, the processor 30 implements the functions of each module/unit in each of the above apparatus embodiments, such as the functions of modules 21 to 26 shown in Fig. 2.
Illustratively, the computer program 32 may be divided into one or more modules/units, which are stored in the memory 31 and executed by the processor 30 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program 32 in the robot 3. For example, the computer program 32 may be divided into a view image acquisition unit, an image feature sequence extraction unit, a nearest neighbor search unit, a data storage unit, a best matching position search unit, and a loop closure determination unit, the specific functions of each unit being as follows:
the view image acquisition unit is configured to obtain the view image sequence of the robot;
the image feature sequence extraction unit is configured to extract the image feature sequence of the view image sequence through the trained stacked denoising sparse autoencoder (SDA) network;
the nearest neighbor search unit is configured to search the created tree structure for the N nearest neighbors of each view image in the view image sequence and to compute the distance from each view image to its N nearest neighbors, the tree structure pre-storing the image features of the map, N being a positive integer;
the data storage unit is configured to store the distances from each view image to its N nearest neighbors in the sparse difference matrix;
the best matching position search unit is configured to search the N positions of the sparse difference matrix for the best matching position of each view image; and
the loop closure determination unit is configured to determine that the view image sequence is a loop closure if the distance between the best matching position found and the position of a true loop closure is within the preset offset distance.
The robot may include, but is not limited to, the processor 30 and the memory 31. Those skilled in the art will understand that Fig. 3 is only an example of the robot 3 and does not constitute a limitation on the robot 3; the robot may include more or fewer components than shown, may combine certain components, or may have different components. For example, the robot may also include input and output devices, network access devices, a bus, and the like.
The so-called processor 30 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 31 may be an internal storage unit of the robot 3, such as a hard disk or memory of the robot 3. The memory 31 may also be an external storage device of the robot 3, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the robot 3. Further, the memory 31 may include both the internal storage unit of the robot 3 and an external storage device. The memory 31 is used to store the computer program and other programs and data required by the robot. The memory 31 may also be used to temporarily store data that has been output or is to be output.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division into the above functional units and modules is used as an example. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above-described embodiments, it all emphasizes particularly on different fields to the description of each embodiment, is not described in detail or remembers in some embodiment
The part of load may refer to the associated description of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered to go beyond the scope of this application.
In the embodiments provided in this application, it should be understood that the disclosed device/terminal device and method may be implemented in other ways. For example, the device/terminal device embodiments described above are merely illustrative. The division into modules or units is only a logical functional division; there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments may also be accomplished by a computer program instructing the relevant hardware. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program implements the steps of each of the method embodiments above. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content carried by the computer-readable medium may be added to or removed from as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced. Such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application, and shall all fall within the protection scope of this application.
Claims (10)
1. A loop closure detection method for a robot, comprising:
obtaining a multi-view image sequence of the robot;
extracting an image feature sequence of the multi-view image sequence through a trained stacked denoising sparse autoencoder (SDA) network;
searching, in a created tree structure, for N nearest neighbors of each multi-view image in the multi-view image sequence, and calculating the distance from each multi-view image to the N nearest neighbors, wherein image features of a map are pre-stored in the tree structure and N is a positive integer;
storing the distance from each multi-view image to the N nearest neighbors into a sparse difference matrix;
searching for a best match position of each multi-view image from the N positions of the sparse difference matrix; and
if the distance between the searched best match position and the position of a true loop closure is within a preset offset distance, determining that the multi-view image sequence is a loop closure.
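The pipeline of claim 1 — extract features, look up each image's N nearest neighbors among pre-stored map features, record only those N distances in a sparse difference matrix, and pick the best match position — can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the SDA feature extractor is omitted (features arrive pre-computed), a brute-force scan stands in for the tree structure, and all names (`knn`, `detect_loop_closure`) and thresholds are hypothetical.

```python
import numpy as np

def knn(query, points, k):
    """Brute-force k-nearest-neighbor lookup; in the claimed method a
    pre-built tree over the map's image features would replace this."""
    d = np.linalg.norm(points - query, axis=1)
    idx = np.argsort(d)[:k]
    return idx, d[idx]

def detect_loop_closure(seq_feats, map_feats, true_loop_pos, n=3, max_offset=2):
    """Sketch of claim 1: fill a sparse difference matrix with the distances
    from each query image to its n nearest map features, then take the map
    position with the smallest accumulated difference as the best match.
    Returns (best_position, is_loop_closure)."""
    m, t_len = len(map_feats), len(seq_feats)
    diff = np.full((m, t_len), np.inf)       # sparse difference matrix
    for t, f in enumerate(seq_feats):
        idx, d = knn(f, map_feats, n)        # n nearest neighbors + distances
        diff[idx, t] = d                     # keep only n entries per column
    # Penalize unmatched entries, then score each map position.
    scores = np.where(np.isinf(diff), 1e3, diff).sum(axis=1)
    best_pos = int(np.argmin(scores))
    # Loop closure if the best match lies near the known true loop position.
    return best_pos, abs(best_pos - true_loop_pos) <= max_offset
```

With one-dimensional toy features, a query sequence hovering around map position 5 is matched to position 5 and declared a loop closure.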
2. The loop closure detection method for a robot according to claim 1, wherein before the extracting the image feature sequence of the multi-view image sequence through the trained stacked denoising sparse autoencoder (SDA) network, the method comprises:
dividing each multi-view image in the multi-view image sequence into blocks of a preset size; and
performing the following steps on each multi-view image in the multi-view image sequence: detecting, on the blocks, the top N key points with the largest feature responses through a sparse key point detection algorithm, and processing the N key points into N image patch vectors;
correspondingly, the extracting the image feature sequence of the multi-view image sequence through the trained stacked denoising sparse autoencoder (SDA) network comprises:
performing the following steps on each multi-view image in the multi-view image sequence: extracting image features of the N image patch vectors through the trained SDA network, the image features of each multi-view image forming the image feature sequence of the multi-view image sequence.
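The patch-vector step of claim 2 can be sketched as below. The claim does not name a specific sparse key point detector, so a plain gradient-magnitude response stands in for it here; the function name and patch size are likewise illustrative assumptions.

```python
import numpy as np

def top_n_patch_vectors(img, n=4, patch=3):
    """Sketch of claim 2: keep only the n key points with the largest
    feature responses in a grayscale image and flatten the patch around
    each into a vector (the input to the SDA feature extractor).
    The gradient-magnitude response is a stand-in detector, an assumption."""
    gy, gx = np.gradient(img.astype(float))
    resp = np.hypot(gx, gy)                  # per-pixel feature response
    r = patch // 2
    resp[:r, :] = 0                          # zero the borders so every
    resp[-r:, :] = 0                         # selected patch fits inside
    resp[:, :r] = 0                          # the image
    resp[:, -r:] = 0
    flat = np.argsort(resp, axis=None)[::-1][:n]   # n largest responses
    ys, xs = np.unravel_index(flat, resp.shape)
    return np.stack([img[y - r:y + r + 1, x - r:x + r + 1].ravel()
                     for y, x in zip(ys, xs)])     # (n, patch*patch) vectors
```

On an image with a single bright pixel, the strongest responses cluster around that pixel, so the extracted patch vector contains it.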
3. The loop closure detection method for a robot according to claim 1, wherein the image features of the map pre-stored in the tree structure are extracted by the SDA network.
4. The loop closure detection method for a robot according to claim 1, wherein the searching for the best match position of each multi-view image from the N positions of the sparse difference matrix comprises:
according to the sequence length of the multi-view image sequence, obtaining the most recent moments of the multi-view images in the multi-view image sequence, calculating the cumulative difference values of the traversed image paths, and taking the map image sequence corresponding to the image path with the smallest cumulative difference value as the best match position searched for each multi-view image from the N positions of the sparse difference matrix.
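The path search of claim 4 resembles SeqSLAM-style sequence matching: accumulate difference values along candidate trajectories through the difference matrix and keep the cheapest path's starting position. A hedged sketch follows; the function name and the set of path slopes are assumptions, since the claim does not fix the path model.

```python
import numpy as np

def best_match_by_sequence(diff, seq_len, slopes=(0.8, 1.0, 1.2)):
    """Sketch of claim 4: `diff` is the sparse difference matrix
    (rows = map positions, columns = query images). For each candidate
    map position, sum the difference values along straight-line paths
    over the most recent `seq_len` columns and return the position of
    the path with the smallest cumulative difference."""
    m, t = diff.shape
    cols = np.arange(t - seq_len, t)         # the most recent seq_len images
    best_pos, best_cost = -1, np.inf
    for start in range(m):
        for s in slopes:                     # candidate trajectory speeds
            rows = np.clip(start + np.round(s * np.arange(seq_len)).astype(int),
                           0, m - 1)
            cost = diff[rows, cols].sum()    # cumulative difference of path
            if cost < best_cost:
                best_pos, best_cost = start, cost
    return best_pos, best_cost
```

On a matrix whose only low-cost entries form a unit-slope diagonal starting at row 2, the search recovers position 2 with zero cumulative difference.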
5. A loop closure detection device for a robot, comprising:
a multi-view image acquisition unit, configured to obtain a multi-view image sequence of the robot;
an image feature sequence extraction unit, configured to extract an image feature sequence of the multi-view image sequence through a trained stacked denoising sparse autoencoder (SDA) network;
a nearest neighbor search unit, configured to search, in a created tree structure, for N nearest neighbors of each multi-view image in the multi-view image sequence, and to calculate the distance from each multi-view image to the N nearest neighbors, wherein image features of a map are pre-stored in the tree structure and N is a positive integer;
a data storage unit, configured to store the distance from each multi-view image to the N nearest neighbors into a sparse difference matrix;
a best match position search unit, configured to search for a best match position of each multi-view image from the N positions of the sparse difference matrix; and
a loop closure determination unit, configured to determine that the multi-view image sequence is a loop closure if the distance between the searched best match position and the position of a true loop closure is within a preset offset distance.
6. The loop closure detection device for a robot according to claim 5, further comprising:
an image blocking unit, configured to divide each multi-view image in the multi-view image sequence into blocks of a preset size; and
a key point detection unit, configured to perform the following steps on each multi-view image in the multi-view image sequence: detecting, on the blocks, the top N key points with the largest feature responses through a sparse key point detection algorithm, and processing the N key points into N image patch vectors;
correspondingly, the image feature sequence extraction unit is specifically configured to:
perform the following steps on each multi-view image in the multi-view image sequence: extract image features of the N image patch vectors through the trained SDA network, the image features of each multi-view image forming the image feature sequence of the multi-view image sequence.
7. The loop closure detection device for a robot according to claim 5, wherein the image features of the map pre-stored in the tree structure are extracted by the SDA network.
8. The loop closure detection device for a robot according to claim 5, wherein the best match position search unit is specifically configured to:
according to the sequence length of the multi-view image sequence, obtain the most recent moments of the multi-view images in the multi-view image sequence, calculate the cumulative difference values of the traversed image paths, and take the map image sequence corresponding to the image path with the smallest cumulative difference value as the best match position searched for each multi-view image from the N positions of the sparse difference matrix.
9. A robot, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 4.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811543605.XA CN109785387A (en) | 2018-12-17 | 2018-12-17 | Winding detection method, device and the robot of robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109785387A true CN109785387A (en) | 2019-05-21 |
Family
ID=66497413
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811543605.XA Pending CN109785387A (en) | 2018-12-17 | 2018-12-17 | Winding detection method, device and the robot of robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109785387A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106127785A (en) * | 2016-06-30 | 2016-11-16 | 重庆大学 | Based on manifold ranking and the image significance detection method of random walk |
CN107845145A (en) * | 2017-11-29 | 2018-03-27 | 电子科技大学 | Three-dimensional reconfiguration system and method under a kind of electron microscopic scene |
CN108594816A (en) * | 2018-04-23 | 2018-09-28 | 长沙学院 | A kind of method and system for realizing positioning and composition by improving ORB-SLAM algorithms |
CN108921893A (en) * | 2018-04-24 | 2018-11-30 | 华南理工大学 | A kind of image cloud computing method and system based on online deep learning SLAM |
Non-Patent Citations (2)
Title |
---|
SAYEM MOHAMMAD SIAM 等: "Fast-SeqSLAM: A fast appearance based place recognition algorithm", 《2017 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA)》 * |
XIANG GAO 等: "Unsupervised learning to detect loops using deep neural networks for visual SLAM system", 《AUTONOMOUS ROBOTS》 * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11423555B2 (en) * | 2018-07-20 | 2022-08-23 | Shenzhen University | Methods for generating aerial photographing path for unmanned aerial vehicle, computer devices, and storage mediums |
CN111177295A (en) * | 2019-12-28 | 2020-05-19 | 深圳市优必选科技股份有限公司 | Image-building ghost eliminating method and device, computer-readable storage medium and robot |
CN111598149A (en) * | 2020-05-09 | 2020-08-28 | 鹏城实验室 | Loop detection method based on attention mechanism |
CN111598149B (en) * | 2020-05-09 | 2023-10-24 | 鹏城实验室 | Loop detection method based on attention mechanism |
CN112085788A (en) * | 2020-08-03 | 2020-12-15 | 深圳市优必选科技股份有限公司 | Loop detection method, loop detection device, computer readable storage medium and mobile device |
CN112085788B (en) * | 2020-08-03 | 2024-04-19 | 优必康(青岛)科技有限公司 | Loop detection method and device, computer readable storage medium and mobile device |
CN112070122A (en) * | 2020-08-14 | 2020-12-11 | 五邑大学 | Classification method and device of slam map and storage medium |
CN112070122B (en) * | 2020-08-14 | 2023-10-17 | 五邑大学 | Classification method, device and storage medium of slam map |
CN112348865A (en) * | 2020-10-30 | 2021-02-09 | 深圳市优必选科技股份有限公司 | Loop detection method and device, computer readable storage medium and robot |
CN112348865B (en) * | 2020-10-30 | 2023-12-01 | 深圳市优必选科技股份有限公司 | Loop detection method and device, computer readable storage medium and robot |
Legal Events
Code | Title | Description
---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190521 |