CN105809118A - Three-dimensional object identifying method and apparatus - Google Patents

Three-dimensional object identifying method and apparatus

Info

Publication number
CN105809118A
CN105809118A (application CN201610120409.6A)
Authority
CN
China
Prior art keywords
feature point
first feature
sub-block
binarization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610120409.6A
Other languages
Chinese (zh)
Inventor
周曦
李继伟
温浩
周翔
李夏风
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Zhongke Yuncong Technology Co Ltd
Original Assignee
Chongqing Zhongke Yuncong Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Zhongke Yuncong Technology Co Ltd filed Critical Chongqing Zhongke Yuncong Technology Co Ltd
Priority to CN201610120409.6A priority Critical patent/CN105809118A/en
Publication of CN105809118A publication Critical patent/CN105809118A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/60 - Type of objects
    • G06V 20/64 - Three-dimensional objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/28 - Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a three-dimensional object recognition method and apparatus. The method comprises the following steps: obtaining first feature points from a scene to be identified, and obtaining a first feature vector for each first feature point by binarized description of its local features; comparing each first feature vector one by one with the second feature vectors of second feature points of each model in a preset model library, and performing nearest-neighbor matching by similarity to obtain, among the second feature vectors, the matching feature vector of each first feature vector, where the second feature vectors are obtained in advance by binarized description of the local features of the second feature points; and, when the obtained matching feature vectors are all associated with the same model in the preset model library and the relative spatial relations among the first feature points are consistent with those among the second feature points corresponding to the matching feature vectors, determining that the three-dimensional object corresponding to the model associated with the matching feature vectors has been identified in the scene to be identified.

Description

Three-dimensional target recognition method and device
Technical Field
The present invention relates to the field of machine vision, and in particular to a three-dimensional target recognition method and device.
Background Art
Three-dimensional target recognition has long been an important research area of machine vision, and its results have gradually found application in industries such as consumer entertainment, security monitoring and advanced manufacturing.
Three-dimensional target recognition methods based on local features usually extract features from the local surface around each feature point; they therefore remain robust when the target is partially occluded and are well suited to three-dimensional target recognition in complex scenes.
Among existing local-feature-based three-dimensional target recognition methods, the most classical is the spin image (Spin Image, SI) method proposed by Johnson et al. To build the SI descriptor, a two-dimensional coordinate system is first constructed at the feature point, every point of the local surface is mapped into this coordinate system, and the feature vector is generated from the distribution of the mapped points. The SI descriptor is robust to occlusion and complex scenes, but it is sensitive to mesh resolution and its descriptive power is relatively weak. Tombari et al. divide the spherical support volume of a feature point into 32 subspaces, build a histogram for each subspace, and concatenate the histograms of all subspaces to obtain the Signature of Histograms of Orientations (SHOT) descriptor. In noisy environments the local descriptive power of SHOT is better than that of SI, but SHOT is rather sensitive to variations in point-cloud density. Guo et al. further proposed the Rotational Projection Statistics (RoPS) method, which rotates the local surface, projects its points onto the three coordinate planes of the local reference frame, and builds the RoPS descriptor from statistics of the projected point distributions. The RoPS descriptor is robust to noise and to changes in mesh resolution, and achieves high recognition rates on the major standard databases.
However, existing local-feature-based three-dimensional target recognition methods usually represent local features with high-dimensional floating-point vectors; their feature computation is complex and feature matching is slow, which severely restricts the practical application of such methods. A lightweight three-dimensional target recognition method that balances recognition accuracy, computational cost and robustness is therefore urgently needed.
Summary of the Invention
In view of this, an object of the present invention is to provide a three-dimensional target recognition method and device, so as to alleviate the problems of high feature computation complexity and slow feature matching that arise in existing local-feature-based three-dimensional target recognition methods because local features are represented by high-dimensional floating-point vectors.
To achieve the above object, the technical solutions adopted by the embodiments of the present invention are as follows:
In a first aspect, an embodiment of the present invention provides a three-dimensional target recognition method, including: obtaining first feature points from a scene to be identified, and performing binarized local feature description on each first feature point to obtain a first feature vector of each first feature point; comparing each first feature vector one by one with the second feature vectors of the second feature points of each model in a preset model library, and performing nearest-neighbor matching by similarity to obtain, among the second feature vectors, the matching feature vector of each first feature vector, wherein the second feature vectors are obtained in advance by performing binarized local feature description on the second feature points; and, when the obtained matching feature vectors are all associated with the same model in the preset model library, and the relative spatial relations among the first feature points are consistent with the relative spatial relations among the second feature points corresponding to the matching feature vectors, determining that the three-dimensional target corresponding to the model associated with the matching feature vectors has been identified in the scene to be identified.
In a second aspect, an embodiment of the present invention further provides a three-dimensional target recognition device, including: an acquisition module, configured to obtain first feature points from a scene to be identified; a binarization description module, configured to perform binarized local feature description on each first feature point to obtain a first feature vector of each first feature point; a matching module, configured to compare each first feature vector one by one with the second feature vectors of the second feature points of each model in a preset model library and to perform nearest-neighbor matching by similarity to obtain, among the second feature vectors, the matching feature vector of each first feature vector, wherein the second feature vectors are obtained in advance by performing binarized local feature description on the second feature points; and an identification module, configured to determine that the three-dimensional target corresponding to the model associated with the matching feature vectors has been identified in the scene to be identified when the obtained matching feature vectors are all associated with the same model in the preset model library and the relative spatial relations among the first feature points are consistent with the relative spatial relations among the second feature points corresponding to the matching feature vectors.
In the three-dimensional target recognition method and device provided by the embodiments of the present invention, binarized local feature description is performed on each first feature point obtained from the scene to be identified to obtain the corresponding first feature vector; each first feature vector is compared against the second feature vectors of the second feature points of each model in the preset model library to obtain its matching feature vector; and, when the obtained matching feature vectors and the first feature vectors satisfy the preset conditions, it is determined that the three-dimensional target corresponding to the model associated with the matching feature vectors has been identified in the scene to be identified. Because each feature point is described by a binarized local feature description method, the storage space required for computation is reduced, which helps to lower feature computation complexity and increase feature matching speed while maintaining recognition accuracy and robustness.
To make the above and other objects, features and advantages of the present invention more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
In order to describe the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort. The above and other objects, features and advantages of the present invention will become more apparent from the drawings. Identical reference signs denote identical parts throughout the drawings. The drawings are not deliberately drawn to scale; the emphasis is on illustrating the gist of the present invention.
Fig. 1 shows a block diagram of a computing device applicable to the embodiments of the present invention;
Fig. 2 shows a flowchart of the three-dimensional target recognition method provided by the first embodiment of the present invention;
Fig. 3 shows a schematic diagram of the binarized local feature description performed in the three-dimensional target recognition method provided by the first embodiment of the present invention;
Fig. 4 shows a schematic diagram of the three-dimensional target recognition device provided by the second embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be noted that similar reference signs and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined or explained in subsequent drawings. In the description of the present invention, the terms "first", "second" and the like are used only to distinguish the description and are not to be understood as indicating or implying relative importance.
Fig. 1 shows a block diagram of a computing device 100 applicable to the embodiments of the present invention. The computing device 100 may be a personal computer (PC), a tablet computer, a workstation, a server or another suitable computing device. As shown in Fig. 1, the computing device 100 may include the three-dimensional target recognition device provided by the embodiments of the present invention, a memory 102, a storage controller 103, a processor 104 and a network module 105.
The memory 102, the storage controller 103, the processor 104 and the network module 105 are electrically connected to one another, directly or indirectly, to realize data transmission or interaction. For example, these elements may be electrically connected through one or more communication buses or signal buses. The three-dimensional target recognition device includes at least one software function module that can be stored in the memory 102 in the form of software or firmware, for example a software function module or computer program included in the three-dimensional target recognition device.
The memory 102 can store various software programs and modules, such as the program instructions/modules corresponding to the three-dimensional target recognition method and device provided by the embodiments of the present invention. By running the software programs and modules stored in the memory 102, the processor 104 performs various functional applications and data processing, thereby implementing the three-dimensional target recognition method of the embodiments of the present invention. The memory 102 may include, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), and the like.
The processor 104 may be an integrated circuit chip with signal processing capability. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP) and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. It can implement or execute the methods, steps and logic block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The network module 105 is configured to receive and send network signals, so as to send data to an external device or receive data from an external device. The network signals may include wireless signals or wired signals.
It can be understood that the structure shown in Fig. 1 is only illustrative; the computing device 100 may include more or fewer components than shown in Fig. 1, or have a configuration different from that shown in Fig. 1. The components shown in Fig. 1 may be implemented in hardware, software or a combination thereof. In addition, the computing device in the embodiments of the present invention may also include a plurality of computing devices with different specific functions.
The present invention is described below through specific embodiments.
First Embodiment
Fig. 2 shows a flowchart of the three-dimensional target recognition method provided by the first embodiment of the present invention. Referring to Fig. 2, the three-dimensional target recognition method provided by the first embodiment may include the following steps.
Step S11: obtain first feature points from the scene to be identified, and perform binarized local feature description on each first feature point to obtain the first feature vector of each first feature point.
Preferably, feature point detection can be performed on the scene to be identified by a uniform sampling method to obtain the first feature points. Specifically, the meshed depth image of the scene to be identified is first uniformly divided into a plurality of cubic grids with edge length R, each cubic grid containing a number of vertices Q, where R is the grid resolution of the mesh and can be set by the user. For each cubic grid, if the number of vertices Q in the grid is smaller than a first preset threshold τ, no feature point is extracted from that grid; if the number of vertices Q in the grid is greater than or equal to the first preset threshold τ, the X, Y and Z coordinates of all vertices Q in the grid are averaged respectively to obtain the coordinates of the feature point P of that grid. That is, for each cubic grid whose number of vertices Q is greater than or equal to the first preset threshold τ, the coordinates of the feature point P_i of the i-th cubic grid can be calculated by the formula P_i = (1/n_i) Σ_{k=1}^{n_i} Q_{ik}, where Q_{ik} denotes the k-th vertex in the i-th cubic grid and n_i is the number of vertices in that grid. The feature points P of all the cubic grids are the first feature points obtained from the scene to be identified. It should be noted that the first feature points may also be obtained from the scene to be identified by other suitable methods; the embodiments of the present invention are not limited thereto.
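As a non-limiting illustration of the uniform sampling described above, the following sketch (in Python with NumPy) averages the vertices that fall into each sufficiently dense cubic grid. It assumes the scene is given as an n × 3 array of vertex coordinates; the names detect_feature_points, grid_size and min_points are chosen here for illustration only.

```python
import numpy as np

def detect_feature_points(vertices, grid_size, min_points):
    """Uniformly sample feature points by averaging vertices inside cubic grids.

    vertices   : (n, 3) array of vertex coordinates Q of the meshed depth image
    grid_size  : edge length R of each cubic grid
    min_points : first preset threshold tau; grids with fewer vertices are skipped
    """
    # Assign each vertex to a cubic grid of edge length R
    cell_ids = np.floor(vertices / grid_size).astype(np.int64)
    # Group vertices by grid and average X, Y, Z inside each sufficiently dense grid
    _, inverse, counts = np.unique(cell_ids, axis=0,
                                   return_inverse=True, return_counts=True)
    feature_points = []
    for cell in range(counts.size):
        if counts[cell] < min_points:
            continue  # fewer than tau vertices: no feature point for this grid
        feature_points.append(vertices[inverse == cell].mean(axis=0))
    return np.asarray(feature_points)
```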
In one specific implementation, performing binarized local feature description on each first feature point to obtain the first feature vector of each first feature point may include: constructing a local three-dimensional reference frame at each first feature point, and performing binarized description under the local three-dimensional reference frame to extract the first feature vector of the first feature point.
Specifically, constructing a local three-dimensional reference frame at each first feature point may include: for each first feature point P, intercepting a local surface from the depth image surface corresponding to the scene to be identified with a preset support radius, and constructing the local three-dimensional reference frame at the first feature point according to the normal direction of the local surface. The support radius (Support Radius), also called the spin image width (Image Width), denotes the range of neighboring points selected around the feature point when the local surface feature is extracted. The local three-dimensional coordinate system constructed in this way is conducive to extracting feature vectors with stronger descriptive power and robustness. Fig. 3(a) shows a schematic local surface S intercepted at a first feature point P with the preset support radius, together with a schematic local three-dimensional coordinate system. As known to those skilled in the art, the local surface intercepted from the depth image surface is three-dimensional.
Specifically, for each first feature point P, performing binarized description under the local three-dimensional reference frame to extract the first feature vector of the first feature point P may include: obtaining a first group of test pairs, a second group of test pairs and a third group of test pairs according to the local surface and the local three-dimensional reference frame; determining a first binary bit string, a second binary bit string and a third binary bit string according to the first group of test pairs, the second group of test pairs and the third group of test pairs, respectively; and concatenating the first binary bit string, the second binary bit string and the third binary bit string in sequence to obtain the first feature vector of the first feature point.
Obtaining the first, second and third groups of test pairs according to the local surface and the local three-dimensional reference frame may include: projecting each point of the local surface S (each point of the three-dimensional point cloud) onto the first, second and third coordinate planes of the local three-dimensional reference frame (for example the XY, XZ and YZ planes of an XYZ coordinate system) to obtain a first two-dimensional point cloud S1, a second two-dimensional point cloud S2 and a third two-dimensional point cloud S3, as shown in Fig. 3(b); dividing the first two-dimensional point cloud S1, the second two-dimensional point cloud S2 and the third two-dimensional point cloud S3, each up to the maximum boundary of its projected point distribution, into N × N sub-blocks, and counting the number of projected points in each sub-block, where N is a positive integer (N is 5 in Fig. 3(b)); and, for each of the first two-dimensional point cloud S1, the second two-dimensional point cloud S2 and the third two-dimensional point cloud S3, selecting 2m sub-blocks in an identical manner and pairing every two sub-blocks according to the selection order, so as to correspondingly build the first group of test pairs, the second group of test pairs and the third group of test pairs, where each group of test pairs includes m sub-block pairs and m is a positive integer, as shown in Fig. 3(c).
Each sub-block pair includes a first sub-block and a second sub-block; for example, the sub-block selected first in each pair may be taken as the first sub-block and the sub-block selected afterwards as the second sub-block. Determining the first binary bit string, the second binary bit string and the third binary bit string according to the first, second and third groups of test pairs respectively may include: for the m sub-block pairs included in each of the first, second and third groups of test pairs, binarizing each sub-block pair according to the following formula (1) to obtain the binarization value of that sub-block pair, and then concatenating, for each group of test pairs, the values corresponding to its sub-block pairs in sequence to obtain the first binary bit string, the second binary bit string and the third binary bit string respectively, each binary bit string being m-dimensional:
b_i = 1 if c_i(u) > c_i(v), and b_i = 0 otherwise.   (1)
Here c_i(u) and c_i(v) denote the numbers of projected points in the first sub-block and the second sub-block, respectively, of the i-th sub-block pair of a group of test pairs. That is, for each sub-block pair of each group of test pairs, the first number of projected points in the first sub-block is compared with the second number of projected points in the second sub-block; if the first number is greater than the second number, the binarization value of the sub-block pair is set to 1, and otherwise it is set to 0. The first binary bit string obtained from, for example, the first group of test pairs can thus be expressed as f_1 = (b_1, b_2, …, b_m); the binary bit strings obtained from the second and third groups of test pairs can be expressed in a similar manner.
Referring to Fig. 3(d), it shows the first binary bit string f1, the second binary bit string f2 and the third binary bit string f3 obtained from the three groups of test pairs of Fig. 3(c), respectively. Fig. 3(e) shows the first feature vector f_P of the first feature point P obtained by concatenating the first binary bit string f1, the second binary bit string f2 and the third binary bit string f3 in sequence.
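The descriptor construction of Figs. 3(b) to 3(e) can be sketched roughly as follows. This is only a minimal illustration: it assumes the points of the local surface are already expressed in the local three-dimensional reference frame, and that the m sub-block pairs per projection plane are drawn randomly once and then reused for every feature point, consistent with the requirement on sub-block selection described later in step S12; all names are illustrative.

```python
import numpy as np

def binary_descriptor(local_points, test_pairs, n_bins):
    """Build the binarized local feature vector f_P for one feature point.

    local_points : (k, 3) points of the local surface S in the local reference frame
    test_pairs   : (3, m, 2) flat sub-block indices of the m pairs per projection plane
    n_bins       : N, so each projection is divided into N x N sub-blocks
    """
    planes = [(0, 1), (0, 2), (1, 2)]           # XY, XZ, YZ coordinate planes
    bits = []
    for plane_idx, (a, b) in enumerate(planes):
        proj = local_points[:, [a, b]]          # project onto the coordinate plane
        lo, hi = proj.min(axis=0), proj.max(axis=0)
        # Divide the projection, up to its maximum boundary, into N x N sub-blocks
        counts, _, _ = np.histogram2d(proj[:, 0], proj[:, 1], bins=n_bins,
                                      range=[[lo[0], hi[0]], [lo[1], hi[1]]])
        counts = counts.ravel()                 # projected-point count per sub-block
        for u, v in test_pairs[plane_idx]:
            bits.append(1 if counts[u] > counts[v] else 0)   # formula (1)
    return np.asarray(bits, dtype=np.uint8)     # f_P: 3*m-dimensional binary string

# Example setup with N = 10 and m = 128; the same (here randomly drawn) test pairs
# must be reused for every scene feature point and every model feature point.
rng = np.random.default_rng(0)
test_pairs = rng.integers(0, 10 * 10, size=(3, 128, 2))
```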
Step S12: compare each first feature vector one by one with the second feature vectors of the second feature points of each model in the preset model library, and perform nearest-neighbor matching by similarity to obtain, among the second feature vectors, the matching feature vector of each first feature vector, wherein the second feature vectors are obtained in advance by performing binarized local feature description on the second feature points.
After the first feature vector corresponding to each first feature point of the scene to be identified has been obtained, the feature vector most similar to each first feature vector, namely its matching feature vector, can be searched for among the second feature vectors of the second feature points of each model in the preset model library. The second feature points of each model in the preset model library can be obtained by the uniform sampling method described in step S11, which is not repeated here. In addition, each second feature vector is obtained in advance by performing, on the corresponding second feature point, a binarized local feature description similar to the one detailed in step S11; the similar operations are likewise not repeated here.
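As a rough sketch of this nearest-neighbor search, the similarity between two binary feature vectors can be measured, for example, by their Hamming distance; the embodiment does not prescribe a particular similarity measure, so Hamming distance is assumed here because it is natural for binary bit strings, and the names are illustrative.

```python
import numpy as np

def match_features(scene_descs, model_descs):
    """Return, for each first feature vector, the index of the nearest second feature vector.

    scene_descs : (ns, d) binary first feature vectors (entries 0/1)
    model_descs : (nm, d) binary second feature vectors from the preset model library
    """
    # Hamming distance between binary vectors = number of differing bits
    dists = (scene_descs[:, None, :] != model_descs[None, :, :]).sum(axis=2)
    return dists.argmin(axis=1)   # index of the matching (nearest-neighbor) feature vector
```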
It should be noted that the manner of selecting 2m sub-blocks for each two-dimensional point cloud to build the test pairs when performing binarized local feature description on each first feature point should be identical to the manner of selecting 2m sub-blocks for each two-dimensional point cloud when performing binarized local feature description on each second feature point. That is, throughout the whole procedure, the positions of the 2m sub-blocks selected for each two-dimensional point cloud should correspond and the selection order should be the same. For example, when the 2m sub-blocks are selected for the first time for a two-dimensional point cloud corresponding to a second feature point during the construction of the preset model library, they may be selected randomly; every subsequent selection of 2m sub-blocks should then remain consistent with that first selection. It should also be noted that the 2m sub-blocks may alternatively be selected according to a Gaussian distribution or in other suitable manners; the embodiments of the present invention are not limited thereto.
In addition, besides the second feature points associated with each model, the preset model library also stores the relative spatial relations among the second feature points associated with each model.
Step S13: when the obtained matching feature vectors are all associated with the same model in the preset model library, and the relative spatial relations among the first feature points are consistent with the relative spatial relations among the second feature points corresponding to the matching feature vectors, determine that the three-dimensional target corresponding to the model associated with the matching feature vectors has been identified in the scene to be identified.
Specifically, that the obtained matching feature vectors are all associated with the same model in the preset model library means that the obtained matching feature vectors belong to the same model. Furthermore, a geometric consistency algorithm can be used to judge whether the relative spatial relations among the first feature points are consistent with the relative spatial relations among the second feature points corresponding to the matching feature vectors. In addition, pose clustering (Pose Clustering), random sample consensus (Random Sample Consensus), 3D Hough voting (3D Hough Voting) and the like can also be used to make this judgment; the embodiments of the present invention are not limited thereto.
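One simple form of such a geometric consistency check, assumed here only for illustration since the embodiment leaves the exact criterion open, compares the pairwise distances among the matched first feature points with the pairwise distances among the corresponding second feature points.

```python
import numpy as np

def geometric_consistency(scene_pts, model_pts, tolerance):
    """Check that the relative spatial relations of matched point pairs agree.

    scene_pts, model_pts : (k, 3) corresponding first / second feature point coordinates
    tolerance            : allowed absolute difference between pairwise distances
    """
    d_scene = np.linalg.norm(scene_pts[:, None, :] - scene_pts[None, :, :], axis=2)
    d_model = np.linalg.norm(model_pts[:, None, :] - model_pts[None, :, :], axis=2)
    return bool(np.all(np.abs(d_scene - d_model) <= tolerance))
```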
Further, after the three-dimensional target has been identified, the three-dimensional target recognition method may also include: determining the position and pose of the three-dimensional target in the scene to be identified by using an absolute orientation (Absolute Orientation) algorithm. That is, the absolute orientation algorithm can be used to determine the transformation hypothesis from the model associated with the matching feature vectors to the scene to be identified, namely the rotation matrix and translation vector that bring the model associated with the matching feature vectors to its corresponding position in the scene to be identified.
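The absolute orientation step can be realized, for example, with the closed-form SVD solution for the rigid transform between corresponding point sets (Horn's method / the Kabsch algorithm); the sketch below assumes that choice and uses illustrative names.

```python
import numpy as np

def absolute_orientation(model_pts, scene_pts):
    """Estimate rotation R and translation t such that R @ model + t approximates scene."""
    mc, sc = model_pts.mean(axis=0), scene_pts.mean(axis=0)
    H = (model_pts - mc).T @ (scene_pts - sc)      # cross-covariance of centered point sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = sc - R @ mc
    return R, t                                    # pose of the model in the scene
```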
In the three-dimensional target recognition method provided by the first embodiment of the present invention, a better recognition rate can preferably be obtained when N is 10 and m is 128. In this case, with the binarized local feature descriptor provided by the present invention, each feature vector is a 384-dimensional binary bit string that occupies only 48 bytes of storage. The SI, SHOT and RoPS descriptors, by contrast, are represented by floating-point vectors of at least 135 dimensions, occupying at least 540 bytes of storage. It can thus be seen that the advantage of the binarized local feature descriptor provided by the present invention is its light weight: it saves storage space and adapts well to application scenarios with limited computing performance and storage resources.
In the three-dimensional target recognition method provided by the embodiment of the present invention, binarized local feature description is performed on each first feature point obtained from the scene to be identified to obtain the corresponding first feature vector; each first feature vector is compared against the second feature vectors of the second feature points of each model in the preset model library to obtain its matching feature vector; and, when the obtained matching feature vectors and the first feature vectors satisfy the preset conditions, it is determined that the three-dimensional target corresponding to the model associated with the matching feature vectors has been identified in the scene to be identified. Because each feature point is described by a binarized local feature description method, the storage space required for computation is reduced, which helps to lower feature computation complexity and increase feature matching speed while maintaining recognition accuracy and robustness.
Second Embodiment
The second embodiment of the present invention provides a three-dimensional target recognition device. Referring to Fig. 4, the three-dimensional target recognition device 200 provided by the second embodiment of the present invention may include an acquisition module 210, a binarization description module 220, a matching module 230 and an identification module 240.
The acquisition module 210 is configured to obtain first feature points from the scene to be identified.
Preferably, the acquisition module 210 can perform feature point detection on the scene to be identified by a uniform sampling method to obtain the first feature points. Specifically, the meshed depth image of the scene to be identified is first uniformly divided into a plurality of cubic grids with edge length R, each cubic grid containing a number of vertices Q, where R is the grid resolution of the mesh and can be set by the user. For each cubic grid, if the number of vertices Q in the grid is smaller than the first preset threshold τ, no feature point is extracted from that grid; if the number of vertices Q in the grid is greater than or equal to the first preset threshold τ, the X, Y and Z coordinates of all vertices Q in the grid are averaged respectively to obtain the coordinates of the feature point P of that grid. That is, for each cubic grid whose number of vertices Q is greater than or equal to the first preset threshold τ, the coordinates of the feature point P_i of the i-th cubic grid can be calculated by the formula P_i = (1/n_i) Σ_{k=1}^{n_i} Q_{ik}, where Q_{ik} denotes the k-th vertex in the i-th cubic grid and n_i is the number of vertices in that grid. The feature points P of all the cubic grids are the first feature points obtained from the scene to be identified.
The binarization description module 220 is configured to perform binarized local feature description on each first feature point to obtain the first feature vector of each first feature point.
Specifically, the binarization description module 220 may be configured to: construct a local three-dimensional reference frame at each first feature point, and perform binarized description under the local three-dimensional reference frame to extract the first feature vector of the first feature point.
Regarding the construction of the local three-dimensional reference frame at each first feature point, the binarization description module 220 may: for each first feature point P, intercept a local surface from the depth image surface corresponding to the scene to be identified with the preset support radius, and construct the local three-dimensional reference frame at the first feature point according to the normal direction of the local surface.
Regarding performing binarized description under the local three-dimensional reference frame to extract the first feature vector of each first feature point P, the binarization description module 220 may: project each point of the local surface (each point of the three-dimensional point cloud) onto the first, second and third coordinate planes of the local three-dimensional reference frame to obtain a first two-dimensional point cloud, a second two-dimensional point cloud and a third two-dimensional point cloud; divide the first, second and third two-dimensional point clouds, each up to the maximum boundary of its projected point distribution, into N × N sub-blocks, and count the number of projected points in each sub-block, where N is a positive integer; for each of the first, second and third two-dimensional point clouds, select 2m sub-blocks in an identical manner and pair every two sub-blocks according to the selection order, so as to correspondingly build the first, second and third groups of test pairs, where each group of test pairs includes m sub-block pairs and m is a positive integer; determine the first binary bit string, the second binary bit string and the third binary bit string according to the first, second and third groups of test pairs, respectively; and concatenate the first, second and third binary bit strings in sequence to obtain the first feature vector of the first feature point.
Each sub-block pair includes a first sub-block and a second sub-block; for example, the sub-block selected first in each pair may be taken as the first sub-block and the sub-block selected afterwards as the second sub-block. Regarding determining the first, second and third binary bit strings according to the first, second and third groups of test pairs respectively, the binarization description module 220 may: for the m sub-block pairs included in each of the first, second and third groups of test pairs, binarize each sub-block pair according to formula (1) to obtain the binarization value of that sub-block pair, and then concatenate, for each group of test pairs, the values corresponding to its sub-block pairs in sequence to obtain the first, second and third binary bit strings respectively, each binary bit string being m-dimensional.
The matching module 230 is configured to compare each first feature vector one by one with the second feature vectors of the second feature points of each model in the preset model library, and to perform nearest-neighbor matching by similarity to obtain, among the second feature vectors, the matching feature vector of each first feature vector, wherein the second feature vectors are obtained in advance by performing binarized local feature description on the second feature points.
After the first feature vector corresponding to each first feature point of the scene to be identified has been obtained, the matching module 230 can search, among the second feature vectors of the second feature points of each model in the preset model library, for the feature vector most similar to each first feature vector.
The identification module 240 is configured to determine that the three-dimensional target corresponding to the model associated with the matching feature vectors has been identified in the scene to be identified when the obtained matching feature vectors are all associated with the same model in the preset model library and the relative spatial relations among the first feature points are consistent with the relative spatial relations among the second feature points corresponding to the matching feature vectors.
Specifically, the identification module 240 can use a geometric consistency algorithm to judge whether the relative spatial relations among the first feature points are consistent with the relative spatial relations among the second feature points corresponding to the matching feature vectors. In addition, pose clustering (Pose Clustering), random sample consensus (Random Sample Consensus), 3D Hough voting (3D Hough Voting) and the like can also be used to make this judgment; the embodiments of the present invention are not limited thereto.
Further, the three-dimensional target recognition device provided by the second embodiment of the present invention may also include an orientation module 250. The orientation module 250 is configured to determine, after the identification module 240 has identified the three-dimensional target, the position and pose of the three-dimensional target in the scene to be identified by using an absolute orientation (Absolute Orientation) algorithm. That is, the orientation module 250 can use the absolute orientation algorithm to determine the transformation hypothesis from the model associated with the matching feature vectors to the scene to be identified, namely the rotation matrix and translation vector that bring the model associated with the matching feature vectors to its corresponding position in the scene to be identified.
For the specific processes by which the functional modules of the three-dimensional target recognition device 200 realize their functions in this embodiment, reference may be made to the content described above with reference to Fig. 1 to Fig. 3, which is not repeated here.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the identical or similar parts of the embodiments may be referred to one another. Since the device embodiments are basically similar to the method embodiments, their description is relatively simple, and the relevant parts may be understood with reference to the description of the method embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may also be implemented in other ways. The device embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the drawings show the possible architectures, functions and operations of the devices, methods and computer program products according to the embodiments of the present invention. Each block in a flowchart or block diagram may represent a module, a program segment or a part of code, which contains one or more executable instructions for realizing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should further be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated to form an independent part, or each module may exist alone, or two or more modules may be integrated to form an independent part.
If the functions are realized in the form of software function modules and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk. It should further be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relation or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the statement "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes that element.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, and these should all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A three-dimensional target recognition method, characterized by comprising:
obtaining first feature points from a scene to be identified, and performing binarized local feature description on each first feature point to obtain a first feature vector of each first feature point;
comparing each first feature vector one by one with second feature vectors of second feature points of each model in a preset model library, and performing nearest-neighbor matching by similarity to obtain, among the second feature vectors, a matching feature vector of each first feature vector, wherein the second feature vectors are obtained in advance by performing binarized local feature description on the second feature points;
when the obtained matching feature vectors are all associated with the same model in the preset model library, and relative spatial relations among the first feature points are consistent with relative spatial relations among the second feature points corresponding to the matching feature vectors, determining that a three-dimensional target corresponding to the model associated with the matching feature vectors has been identified in the scene to be identified.
2. The three-dimensional target recognition method according to claim 1, characterized in that performing binarized local feature description on each first feature point to obtain the first feature vector of each first feature point comprises:
constructing a local three-dimensional reference frame at each first feature point, and performing binarized description under the local three-dimensional reference frame to extract the first feature vector of the first feature point.
3. The three-dimensional target recognition method according to claim 2, characterized in that constructing a local three-dimensional reference frame at each first feature point comprises:
for each first feature point, intercepting a local surface from a depth image surface corresponding to the scene to be identified with a preset support radius, and constructing the local three-dimensional reference frame at the first feature point according to a normal direction of the local surface.
4. The three-dimensional target recognition method according to claim 3, characterized in that performing binarized description under the local three-dimensional reference frame to extract the first feature vector of the first feature point comprises:
obtaining a first group of test pairs, a second group of test pairs and a third group of test pairs according to the local surface and the local three-dimensional reference frame;
determining a first binary bit string, a second binary bit string and a third binary bit string according to the first group of test pairs, the second group of test pairs and the third group of test pairs, respectively;
concatenating the first binary bit string, the second binary bit string and the third binary bit string in sequence to obtain the first feature vector of the first feature point.
5. The three-dimensional target recognition method according to claim 4, characterized in that obtaining a first group of test pairs, a second group of test pairs and a third group of test pairs according to the local surface and the local three-dimensional reference frame comprises:
projecting each point of the local surface onto a first coordinate plane, a second coordinate plane and a third coordinate plane of the local three-dimensional reference frame, respectively, to obtain a first two-dimensional point cloud, a second two-dimensional point cloud and a third two-dimensional point cloud;
dividing the first two-dimensional point cloud, the second two-dimensional point cloud and the third two-dimensional point cloud, each up to the maximum boundary of its projected point distribution, into N × N sub-blocks, and counting the number of projected points in each sub-block, where N is a positive integer;
for each of the first two-dimensional point cloud, the second two-dimensional point cloud and the third two-dimensional point cloud, selecting 2m sub-blocks in an identical manner and pairing every two sub-blocks according to the selection order, so as to correspondingly build the first group of test pairs, the second group of test pairs and the third group of test pairs, wherein each group of test pairs includes m sub-block pairs and m is a positive integer.
6. The three-dimensional target recognition method according to claim 5, characterized in that each sub-block pair includes a first sub-block and a second sub-block, and determining the first binary bit string, the second binary bit string and the third binary bit string according to the first group of test pairs, the second group of test pairs and the third group of test pairs respectively comprises:
for the m sub-block pairs included in each of the first group of test pairs, the second group of test pairs and the third group of test pairs, comparing a first number of projected points in the first sub-block of each sub-block pair with a second number of projected points in the second sub-block of that pair, setting the binarization value of the sub-block pair to 1 if the first number is greater than the second number and to 0 otherwise, and then concatenating, for each group of test pairs, the values corresponding to its sub-block pairs in sequence to obtain the first binary bit string, the second binary bit string and the third binary bit string respectively, each binary bit string being m-dimensional.
7. The three-dimensional target recognition method according to claim 1, characterized in that, after the three-dimensional target is identified, the three-dimensional target recognition method further comprises:
determining a position and a pose of the three-dimensional target in the scene to be identified by using an absolute orientation algorithm.
8. The three-dimensional target recognition method according to claim 1, characterized in that obtaining first feature points from a scene to be identified comprises:
performing feature point detection on the scene to be identified by a uniform sampling method to obtain the first feature points.
9. A three-dimensional target recognition device, characterized by comprising:
an acquisition module, configured to obtain first feature points from a scene to be identified;
a binarization description module, configured to perform binarized local feature description on each first feature point to obtain a first feature vector of each first feature point;
a matching module, configured to compare each first feature vector one by one with second feature vectors of second feature points of each model in a preset model library, and to perform nearest-neighbor matching by similarity to obtain, among the second feature vectors, a matching feature vector of each first feature vector, wherein the second feature vectors are obtained in advance by performing binarized local feature description on the second feature points;
an identification module, configured to determine that a three-dimensional target corresponding to the model associated with the matching feature vectors has been identified in the scene to be identified when the obtained matching feature vectors are all associated with the same model in the preset model library and relative spatial relations among the first feature points are consistent with relative spatial relations among the second feature points corresponding to the matching feature vectors.
10. The three-dimensional target recognition device according to claim 9, characterized in that the binarization description module is specifically configured to: construct a local three-dimensional reference frame at each first feature point, and perform binarized description under the local three-dimensional reference frame to extract the first feature vector of the first feature point.
CN201610120409.6A 2016-03-03 2016-03-03 Three-dimensional object identifying method and apparatus Pending CN105809118A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610120409.6A CN105809118A (en) 2016-03-03 2016-03-03 Three-dimensional object identifying method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610120409.6A CN105809118A (en) 2016-03-03 2016-03-03 Three-dimensional object identifying method and apparatus

Publications (1)

Publication Number Publication Date
CN105809118A true CN105809118A (en) 2016-07-27

Family

ID=56466000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610120409.6A Pending CN105809118A (en) 2016-03-03 2016-03-03 Three-dimensional object identifying method and apparatus

Country Status (1)

Country Link
CN (1) CN105809118A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295710A (en) * 2016-08-18 2017-01-04 晶赞广告(上海)有限公司 Image local feature matching process, device and terminal of based on non-geometric constraint
CN108427927A (en) * 2018-03-16 2018-08-21 深圳市商汤科技有限公司 Target recognition methods and device, electronic equipment, program and storage medium again
CN110009562A (en) * 2019-01-24 2019-07-12 北京航空航天大学 A method of comminuted fracture threedimensional model is spliced using template
CN110516516A (en) * 2018-05-22 2019-11-29 北京京东尚科信息技术有限公司 Robot pose measurement method and device, electronic equipment, storage medium
CN110909766A (en) * 2019-10-29 2020-03-24 北京明略软件系统有限公司 Similarity determination method and device, storage medium and electronic device
WO2020073444A1 (en) * 2018-10-12 2020-04-16 深圳大学 Point cloud data processing method and device based on neural network
CN111539949A (en) * 2020-05-12 2020-08-14 河北工业大学 Point cloud data-based lithium battery pole piece surface defect detection method
WO2020168770A1 (en) * 2019-02-23 2020-08-27 深圳市商汤科技有限公司 Object pose estimation method and apparatus
CN112053427A (en) * 2020-10-15 2020-12-08 珠海格力智能装备有限公司 Point cloud feature extraction method, device, equipment and readable storage medium
CN114898354A (en) * 2022-03-24 2022-08-12 中德(珠海)人工智能研究院有限公司 Measuring method and device based on three-dimensional model, server and readable storage medium


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101609507A (en) * 2009-07-28 2009-12-23 中国科学技术大学 Gait recognition method
CN101839692A (en) * 2010-05-27 2010-09-22 西安交通大学 Method for measuring three-dimensional position and stance of object with single camera
CN102208033A (en) * 2011-07-05 2011-10-05 北京航空航天大学 Data clustering-based robust scale invariant feature transform (SIFT) feature matching method
CN104809456A (en) * 2015-05-21 2015-07-29 重庆大学 Three-dimensional target recognition method based on two-value descriptor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhao Ji et al., "Three-dimensional target recognition method using feature matching", Journal of Huazhong University of Science and Technology (Natural Science Edition) *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295710B (en) * 2016-08-18 2019-06-14 晶赞广告(上海)有限公司 Image local feature matching process, device and terminal based on non-geometric constraint
CN106295710A (en) * 2016-08-18 2017-01-04 晶赞广告(上海)有限公司 Image local feature matching process, device and terminal of based on non-geometric constraint
CN108427927A (en) * 2018-03-16 2018-08-21 深圳市商汤科技有限公司 Target recognition methods and device, electronic equipment, program and storage medium again
CN110516516A (en) * 2018-05-22 2019-11-29 北京京东尚科信息技术有限公司 Robot pose measurement method and device, electronic equipment, storage medium
WO2020073444A1 (en) * 2018-10-12 2020-04-16 深圳大学 Point cloud data processing method and device based on neural network
US11270519B2 (en) 2018-10-12 2022-03-08 Shenzhen University Method of processing point cloud data based on neural network
CN110009562A (en) * 2019-01-24 2019-07-12 北京航空航天大学 A method of comminuted fracture threedimensional model is spliced using template
WO2020168770A1 (en) * 2019-02-23 2020-08-27 深圳市商汤科技有限公司 Object pose estimation method and apparatus
CN110909766A (en) * 2019-10-29 2020-03-24 北京明略软件系统有限公司 Similarity determination method and device, storage medium and electronic device
CN110909766B (en) * 2019-10-29 2022-11-29 北京明略软件系统有限公司 Similarity determination method and device, storage medium and electronic device
CN111539949A (en) * 2020-05-12 2020-08-14 河北工业大学 Point cloud data-based lithium battery pole piece surface defect detection method
CN111539949B (en) * 2020-05-12 2022-05-13 河北工业大学 Point cloud data-based lithium battery pole piece surface defect detection method
CN112053427A (en) * 2020-10-15 2020-12-08 珠海格力智能装备有限公司 Point cloud feature extraction method, device, equipment and readable storage medium
CN114898354A (en) * 2022-03-24 2022-08-12 中德(珠海)人工智能研究院有限公司 Measuring method and device based on three-dimensional model, server and readable storage medium

Similar Documents

Publication Publication Date Title
CN105809118A (en) Three-dimensional object identifying method and apparatus
Levi et al. LATCH: learned arrangements of three patch codes
CN107273872B (en) Depth discrimination network model method for re-identification of pedestrians in image or video
Torii et al. Visual place recognition with repetitive structures
Poiesi et al. Learning general and distinctive 3D local deep descriptors for point cloud registration
Du et al. Computer-aided plant species identification (CAPSI) based on leaf shape matching technique
Schlegel et al. HBST: A hamming distance embedding binary search tree for feature-based visual place recognition
CN104050709B (en) A kind of three dimensional image processing method and electronic equipment
CN110472652B (en) Small sample classification method based on semantic guidance
CN115455089B (en) Performance evaluation method and system of passive component and storage medium
CN104700033A (en) Virus detection method and virus detection device
US10528844B2 (en) Method and apparatus for distance measurement
Nanni et al. Local phase quantization descriptor for improving shape retrieval/classification
WO2023130717A1 (en) Image positioning method and apparatus, computer device and storage medium
Ma et al. Detection method of insulator based on faster R-CNN
US20220191113A1 (en) Method and apparatus for monitoring abnormal iot device
CN110909804B (en) Method, device, server and storage medium for detecting abnormal data of base station
CN111368867A (en) Archive classification method and system and computer readable storage medium
CN111626360A (en) Method, device, equipment and storage medium for detecting fault type of boiler
CN113657423A (en) Target detection method suitable for small-volume parts and stacked parts and application thereof
US9286217B2 (en) Systems and methods for memory utilization for object detection
Ning et al. Yolov4-object: An efficient model and method for Object Discovery
Wang et al. A Fast and Robust Ellipse‐Detection Method Based on Sorted Merging
CN102968618A (en) Static hand gesture recognition method fused with BoF model and spectral clustering algorithm
CN111488829A (en) Pole tower inspection photo classification method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160727

RJ01 Rejection of invention patent application after publication