CN106529394B - Method for simultaneous recognition and modeling of indoor scene objects - Google Patents

Method for simultaneous recognition and modeling of indoor scene objects

Info

Publication number
CN106529394B
CN106529394B (application CN201610832845.6A; also published as CN106529394A)
Authority
CN
China
Prior art keywords
feature
dimensional
similarity
identified
fpfh
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610832845.6A
Other languages
Chinese (zh)
Other versions
CN106529394A (en)
Inventor
曾碧
陈佳洲
黄文
曹军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN201610832845.6A
Publication of CN106529394A
Application granted
Publication of CN106529394B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 18/00 Pattern recognition
                    • G06F 18/20 Analysing
                        • G06F 18/22 Matching criteria, e.g. proximity measures
                        • G06F 18/23 Clustering techniques
                        • G06F 18/25 Fusion techniques
                            • G06F 18/253 Fusion techniques of extracted features
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 10/00 Arrangements for image or video recognition or understanding
                    • G06V 10/40 Extraction of image or video features
                        • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; salient regional features
                            • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
                        • G06V 10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; projection analysis
                • G06V 20/00 Scenes; scene-specific elements
                    • G06V 20/10 Terrestrial scenes
                    • G06V 20/60 Type of objects
                        • G06V 20/64 Three-dimensional objects
                            • G06V 20/653 Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for simultaneous recognition and modeling of indoor scene objects, comprising the steps of: inputting an RGB-D image; segmenting objects; extracting SIFT and FPFH features; fusing the SIFT and FPFH features; recognizing objects; and modeling objects. In the modeling step, the pose relation between object parts is computed and compared against a threshold: if the pose change is below the threshold, the two object parts are merged and stored as a single node in the view graph; otherwise, both object parts are kept in the view graph as two separate nodes. Compared with the prior art, the invention achieves online object recognition and modeling, and proposes an improved view-graph method that reduces data redundancy, lowers the data storage burden, and improves recognition efficiency.

Description

Method for simultaneous recognition and modeling of indoor scene objects
Technical field
The present invention relates to the field of information processing, and more particularly to a method for simultaneous recognition and modeling of indoor scene objects.
Background technique
At present, in the research fields of robotic object recognition and scene understanding, most algorithms achieve recognition either by matching test images against a training dataset or by training a classifier, and generally do not take the pose of the object into account. For example, Liang Mingjie et al. proposed an object model representation based on a view graph, provided a probabilistic observation model of the object on top of that representation, reduced recognition and modeling to probabilistic inference, and achieved simultaneous optimization of recognition and modeling through maximum likelihood estimation. Alvaro Collet Romea proposed a constraint-framework-based method for indoor object recognition and modeling; this method processes an entire video stream automatically and discovers objects, and its most important feature is the constraint framework, into which everyday constraints can be incorporated mathematically. Constraint conditions can be added, reduced, or changed according to the scene, increasing both the object recognition rate and the recognition efficiency.
However, many object recognition applications, such as robot manipulation, path planning, and augmented reality, require accurate pose information, and these algorithms need training data, so they cannot achieve online recognition, which severely limits robot performance in indoor scenes. As for modeling, when a robot moves through an indoor environment for a long time, partial observations of indoor objects are acquired repeatedly, so the resulting models accumulate redundant data: the longer the robot travels, the more observations of the same object it collects, the larger the object model becomes, and the more the redundancy grows. This is detrimental not only to object modeling and data storage, but also to later object recognition efficiency.
Summary of the invention
In order to overcome the deficiencies of the prior art, achieve online object recognition and modeling, solve the data redundancy problem, reduce the storage burden, and effectively improve recognition efficiency, the present invention proposes a method for simultaneous recognition and modeling of indoor scene objects.
The technical scheme of the present invention is realized as follows:
A method for simultaneous recognition and modeling of indoor scene objects, comprising the steps of S1: inputting an RGB-D image;
S2: object segmentation;
S3: extracting SIFT features and FPFH features;
S4: fusing the SIFT and FPFH features and recognizing objects;
S5: object modeling: computing the pose relation between object parts and setting a threshold; if the pose change is below the threshold, the two object parts are merged and treated as a single node in the view graph; otherwise, both object parts are kept in the view graph as two separate nodes.
Further, step S2 comprises the steps of
S21: extracting planar point clouds using the RANSAC method;
S22: rejecting point clouds unrelated to object data;
S23: obtaining object point clouds by a Euclidean distance clustering algorithm;
S24: mapping the point cloud data back to the RGB-D image.
Further, step S3 comprises the steps of
S31: extracting 2D local feature points and feature descriptors of the object using the SIFT feature extraction method;
S32: mapping the 2D local feature points into 3D space to obtain 3D feature points;
S33: generating 3D feature descriptors using the FPFH algorithm.
Further, step S4 comprises the steps of
S41: rejecting erroneous matches using the RANSAC algorithm;
S42: computing the 2D feature registration distance between the object to be recognized and the object model;
S43: computing the 3D feature registration distance between the object to be recognized and the object model;
S44: fusing the 2D and 3D information and saving the similarity result;
S45: comparing all similarity results and taking the result with the maximum similarity as the final result of this object recognition.
The beneficial effect of the present invention is that, compared with the prior art, the invention achieves online object recognition and modeling, and proposes an improved view-graph method that reduces data redundancy, lowers the data storage burden, and improves recognition efficiency.
Brief description of the drawings
Fig. 1 is a flow chart of the method for simultaneous recognition and modeling of indoor scene objects according to the present invention.
Fig. 2 is an original point cloud of one embodiment of the invention.
Fig. 3 is the point cloud of Fig. 2 after filtering.
Fig. 4 is a RANSAC-segmented point cloud of one embodiment of the invention.
Fig. 5 is the point cloud after cluster segmentation.
Fig. 6 shows the objects before segmentation.
Fig. 7 shows the objects of Fig. 6 after segmentation.
Fig. 8 is a schematic diagram of a registration result containing erroneous matches.
Fig. 9 is the registration result of Fig. 8 after the erroneous matches have been rejected with the RANSAC algorithm.
Fig. 10 is a schematic diagram of a mis-registration result.
Fig. 11 is the 3D registration result for the case of Fig. 10.
Fig. 12 is a flow chart of the recognition algorithm fusing 2D and 3D information.
Fig. 13 is a schematic diagram of the view graph model.
Fig. 14 is a schematic diagram of the improved view graph model corresponding to Fig. 13.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely in conjunction with the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, a method for simultaneous recognition and modeling of indoor scene objects comprises the steps of S1: inputting an RGB-D image;
S2: object segmentation;
S3: extracting SIFT features and FPFH features;
S4: fusing the SIFT and FPFH features and recognizing objects;
S5: object modeling: computing the pose relation between object parts and setting a threshold; if the pose change is below the threshold, the two object parts are merged and treated as a single node in the view graph; otherwise, both object parts are kept in the view graph as two separate nodes.
In step S2, the point cloud data C is first computed from the RGB image and the depth image, as shown in Fig. 2. Since the acquired point cloud is too large and would affect later operations such as segmentation and recognition, it is down-sampled with a voxel grid to reduce the number of points, as shown in Fig. 3. To isolate the objects in the scene, the RANSAC method (random sample consensus algorithm) is used to extract planar point clouds; since planes such as tabletops and walls generally carry no object data, they can be rejected, as shown in Fig. 4. As can be seen from Fig. 4, the remaining cloud contains objects and noise; finally, a Euclidean distance clustering algorithm yields the object point clouds {C_m}, m = 1…M, as shown in Fig. 5. Mapping the point cloud data back to the RGB image gives the segmented object RGB images {I_m}, m = 1…M, as shown in Figs. 6 and 7. The segmented objects can then be expressed as {I_m, C_m}, m = 1…M.
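As an illustration of step S2, the following minimal sketch implements the down-sample / plane-removal / clustering pipeline with the Open3D library; the library choice, the use of DBSCAN as a stand-in for the Euclidean distance clustering named above, and all parameter values (voxel size, RANSAC distance threshold, cluster tolerance) are assumptions for illustration, not values specified by the patent:

    import numpy as np
    import open3d as o3d

    def segment_objects(pcd: o3d.geometry.PointCloud):
        # Down-sample with a voxel grid to reduce the point count (Fig. 3).
        pcd = pcd.voxel_down_sample(voxel_size=0.01)

        # Remove dominant planes (tabletop, walls) with RANSAC (Fig. 4).
        for _ in range(2):  # strip up to two planes
            _, inliers = pcd.segment_plane(distance_threshold=0.02,
                                           ransac_n=3,
                                           num_iterations=1000)
            pcd = pcd.select_by_index(inliers, invert=True)

        # Cluster the remaining points into object candidates {C_m} (Fig. 5);
        # DBSCAN is used here in place of pure Euclidean distance clustering.
        labels = np.array(pcd.cluster_dbscan(eps=0.03, min_points=50))
        return [pcd.select_by_index(np.where(labels == m)[0])
                for m in range(labels.max() + 1)]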
In step S3, each frame of the scene is segmented into a series of objects {I_m, C_m}. The SIFT (scale-invariant feature transform) feature extraction method is used to extract the 2D local feature points and feature descriptors {X_mr, S_mr} of each object from the RGB image I_m; the 2D local feature points are mapped into 3D space to obtain the 3D feature points, and the FPFH (fast point feature histogram) algorithm generates the 3D feature descriptors {Z_mr, F_mr}, where X denotes a 2D feature point, S a 2D feature descriptor, Z a 3D feature point, F a 3D feature descriptor, m the m-th object, and r the r-th feature point.
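The feature extraction of step S3 could look like the following sketch, combining OpenCV SIFT with Open3D FPFH; the camera intrinsics (fx, fy, cx, cy), the search radii, and the computation of FPFH over the whole object cloud rather than only at the mapped feature points are simplifying assumptions:

    import cv2
    import numpy as np
    import open3d as o3d

    def extract_features(rgb, depth, obj_cloud, fx, fy, cx, cy):
        # 2D local feature points and descriptors {X_mr, S_mr} via SIFT.
        sift = cv2.SIFT_create()
        gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
        keypoints, descriptors = sift.detectAndCompute(gray, None)

        # Map each 2D keypoint into 3D with the registered depth map (depth
        # assumed in meters) to obtain the 3D feature points {Z_mr}.
        points3d = []
        for kp in keypoints:
            u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
            z = float(depth[v, u])
            points3d.append(((u - cx) * z / fx, (v - cy) * z / fy, z))

        # 3D feature descriptors {F_mr} via FPFH, computed here over the
        # whole object cloud for simplicity; radii are assumed values.
        obj_cloud.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            obj_cloud,
            o3d.geometry.KDTreeSearchParamHybrid(radius=0.10, max_nn=100))
        return keypoints, descriptors, np.array(points3d), fpfh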
In step S4, objects are recognized by fusing 2D and 3D information, which improves the object recognition rate. First, the SIFT feature points and descriptors of the object to be recognized are matched against the object models using the KNN (K nearest neighbors) algorithm. SIFT registration can nevertheless produce erroneous matches, as shown in Fig. 8, where several pairs of feature points are matched incorrectly; if such erroneous results were used, they would severely affect the accuracy of the later object pose computation, so we reject the erroneous matches with the RANSAC algorithm, as shown in Fig. 9. However, when the 2D image surface is poorly recognizable and the texture features are weak, SIFT registration can produce large errors, as shown in Fig. 10, which would lead to a final recognition error. We therefore incorporate 3D object structure features on top of the 2D features; the 3D registration result is shown in Fig. 11, effectively compensating for the deficiency of 2D feature registration.
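A minimal sketch of the KNN matching with RANSAC-based rejection of erroneous matches (Figs. 8 and 9), assuming OpenCV; the 0.75 ratio-test value and the 5-pixel reprojection threshold are illustrative assumptions:

    import cv2
    import numpy as np

    def match_sift(desc_s, desc_t, kps_s, kps_t):
        # KNN matching of SIFT descriptors with Lowe's ratio test.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        knn = matcher.knnMatch(desc_s, desc_t, k=2)
        good = [m for m, n in knn if m.distance < 0.75 * n.distance]

        # RANSAC via homography estimation discards remaining mismatches.
        src = np.float32([kps_s[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kps_t[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return [m for m, keep in zip(good, mask.ravel()) if keep]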
The recognition algorithm fusing the 2D and 3D information is as follows:
Step 1: compute the 2D feature registration distance between the object to be recognized and the object model, as shown in formula (1):
C1(s, t) = d(S_s, S_t) (1)
where s denotes the object to be recognized, t denotes the object model, d(·) denotes the Euclidean distance, and S denotes a SIFT feature descriptor. C1(s, t) is the Euclidean distance between the SIFT features of the object to be recognized and those of the object model, i.e. the 2D similarity measure, and it is normalized. It is required here that when C1(s, t) < 0.5 the two objects are similar and Step 2 is executed; otherwise Step 1 continues.
Step 2: compute the 3D feature registration distance between the object to be recognized and the object model, as shown in formula (2):
C2(s, t) = 1 − d(F_s, F_t) (2)
where s denotes the object to be recognized, t denotes the object model, d(·) denotes the Euclidean distance, and F denotes an FPFH feature descriptor. C2(s, t) is derived from the normalized Euclidean distance between the FPFH features of the object to be recognized and those of the object model, i.e. the 3D similarity. It is required here that when C2(s, t) > 0.5 the two objects are similar and Step 3 is executed; otherwise Step 1 is executed.
Step 3: fuse the 2D and 3D information and save the similarity result, as shown in formula (3):
T(s, t) = C2(s, t) · exp(−k · C1(s, t)) (3)
where T(s, t) denotes the similarity after fusing the 2D and 3D information. C1 is placed in the exponent mainly because the 2D SIFT features are considered more discriminative and meaningful for object recognition; k adjusts the weight. If the matched object model is the last model, Step 4 is executed; otherwise Step 1 is executed.
Step 4: compare all similarity results and take the result with the maximum similarity as the final result of this object recognition. The algorithm flow chart is shown in Fig. 12.
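A compact sketch of Steps 1 to 4, assuming the normalized registration distances C1 and C2 have already been computed per candidate model as in formulas (1) and (2); the early-exit gates mirror the 0.5 thresholds above:

    import numpy as np

    def fused_similarity(C1, C2, k=1.0):
        # Step 1 gate: normalized 2D registration distance, formula (1).
        if C1 >= 0.5:
            return 0.0
        # Step 2 gate: normalized 3D similarity, formula (2).
        if C2 <= 0.5:
            return 0.0
        # Step 3: fused similarity, formula (3). C1 sits in the exponent so
        # that the more discriminative 2D SIFT term dominates; k is a weight.
        return C2 * np.exp(-k * C1)

    # Step 4 (illustrative usage): the model with the highest fused score wins.
    # best_model = max(models, key=lambda t: fused_similarity(C1s[t], C2s[t]))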
In step S5, the pose relation between object parts is computed and a threshold is set. If the pose change is below the threshold, the two object parts are merged and treated as a single node in the view graph; otherwise, both object parts are kept in the view graph as two separate nodes. In this way, the view graph retains its property of being easy to modify while data redundancy is reduced and operational efficiency is improved.
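The pose-change test could be sketched as follows, assuming the relative pose between the two object parts is available as a 4x4 homogeneous transform; splitting the threshold into separate translation and rotation components, and the threshold values themselves, are assumptions:

    import numpy as np

    def poses_match(T_rel, trans_thresh=0.05, angle_thresh=np.deg2rad(10.0)):
        # Translation component of the relative pose (meters).
        translation = np.linalg.norm(T_rel[:3, 3])
        # Rotation angle recovered from the trace of the 3x3 rotation block.
        cos_a = np.clip((np.trace(T_rel[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
        angle = np.arccos(cos_a)
        # Merge the two parts only if both components are below threshold.
        return translation < trans_thresh and angle < angle_thresh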
The data retained when similar views are fused are: (1) the relative pose between the model views: the fusion produces no new node but can produce a new edge, so the relative pose between the model views must be recorded; (2) the feature point positions of the two views: of the feature point positions of the original model view and the transformed feature point positions of the new view, mutually matched feature point positions are retained only once; (3) the feature descriptors, including the SIFT and FPFH feature descriptors used for matching between views: descriptors that are mutually matched between the two views are retained only once; (4) the point cloud data of the two views: the point cloud of the original model view and the transformed point cloud of the new view are fused, and the fused point cloud is down-sampled by the voxel grid method to reduce the data. The construction process of the improved view graph can thus be expressed by the following algorithm:
where V0 denotes the original view graph nodes, E0 the original view graph edges, h_i the newly generated i-th view, P(h_i) the similarity between view h_i and the database object model, p_min the similarity threshold, P_angle(h_i) the relative angle to the most similar view in the database, P_angle_min the angle threshold, P_distance(h_i) the relative distance to the most similar view in the database, and P_dis_min the distance threshold. That is, if both the relative angle and the relative distance between view h_i and the most similar database view are below their thresholds, the two views are fused; otherwise, no fusion takes place and the view is added directly to the view graph as a new node. P(h_i, h_j) denotes the similarity between the i-th view and the j-th view of the database model; if this similarity exceeds the threshold, a similarity relation exists between the two views, and the transformation relation between the i-th and j-th views is added to the database object model as a new edge. Figs. 13 and 14 compare the view graph model with the improved view graph model, where dark nodes denote fused nodes, gray nodes denote newly added nodes, and dashed lines denote new or changed edges. As the figures show, the improved view graph model adds one new edge without adding a node, whereas the original view graph model adds one node and three edges, so the method proposed here can theoretically reduce data redundancy and achieve a better object modeling effect.
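Since the pseudocode itself is not reproduced in this text, the following is a hedged reconstruction of the view-graph update from the variable definitions above; sim, angle, dist, and fuse are caller-supplied placeholder functions standing in for P, P_angle, P_distance, and the view fusion described in the preceding paragraph:

    def update_view_graph(V, E, h_i, model, sim, angle, dist, fuse,
                          p_min, angle_min, dis_min):
        # Discard the view if it does not match this object model at all.
        if sim(h_i, model) <= p_min:
            return V, E
        # Find the most similar existing view in the graph.
        j = max(range(len(V)), key=lambda n: sim(h_i, V[n]))
        if angle(h_i, V[j]) < angle_min and dist(h_i, V[j]) < dis_min:
            # Fuse: record the relative pose, keep matched feature points
            # and descriptors once, voxel-downsample the merged point cloud.
            V[j] = fuse(V[j], h_i)
        else:
            V.append(h_i)                    # new node
            new = len(V) - 1
            for n in range(new):
                if sim(V[new], V[n]) > p_min:
                    E.append((new, n))       # new edge with inter-view transform
        return V, E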
The above is a preferred embodiment of the present invention. It should be noted that, for those skilled in the art, various improvements and modifications may be made without departing from the principle of the present invention, and such improvements and modifications shall also be regarded as falling within the protection scope of the present invention.

Claims (4)

1. A method for simultaneous recognition and modeling of indoor scene objects, characterized by comprising the steps of
S1: inputting an RGB-D image;
S2: object segmentation;
S3: extracting SIFT features and FPFH features;
S4: fusing the SIFT and FPFH features and recognizing objects;
S5: object modeling: computing the pose relation between object parts and setting a threshold; if the pose change is below the threshold, merging the two object parts and treating them as a single node in the view graph; otherwise, keeping both object parts in the view graph as two separate nodes;
wherein step S4 comprises the steps of
Step 1: computing the 2D feature registration distance between the object to be recognized and the object model, as shown in formula (1):
C1(s, t) = d(S_s, S_t) (1)
where s denotes the object to be recognized, t denotes the object model, d(·) denotes the Euclidean distance, and S denotes a SIFT feature descriptor; C1(s, t) is the Euclidean distance between the SIFT features of the object to be recognized and those of the object model, i.e. the 2D similarity, and it is normalized; when C1(s, t) < 0.5, the two objects are similar and Step 2 is executed; otherwise Step 1 continues;
Step 2: computing the 3D feature registration distance between the object to be recognized and the object model, as shown in formula (2):
C2(s, t) = 1 − d(F_s, F_t) (2)
where s denotes the object to be recognized, t denotes the object model, d(·) denotes the Euclidean distance, and F denotes an FPFH feature descriptor; C2(s, t) is derived from the normalized Euclidean distance between the FPFH features of the object to be recognized and those of the object model, i.e. the 3D similarity; when C2(s, t) > 0.5, the two objects are similar and Step 3 is executed; otherwise Step 1 is executed;
Step 3: fusing the 2D and 3D information and saving the similarity result, as shown in formula (3):
T(s, t) = C2(s, t) · exp(−k · C1(s, t)) (3)
where T(s, t) denotes the similarity after fusing the 2D and 3D information; C1 is placed in the exponent mainly because the 2D SIFT features are considered more discriminative and meaningful for object recognition, and k adjusts the weight; if the matched object model is the last model, Step 4 is executed; otherwise Step 1 is executed;
Step 4: comparing all similarity results and taking the result with the maximum similarity as the final result of this object recognition.
2. The method for simultaneous recognition and modeling of indoor scene objects according to claim 1, characterized in that step S2 comprises the steps of
S21: extracting planar point clouds using the RANSAC method;
S22: rejecting point clouds unrelated to object data;
S23: obtaining object point clouds by a Euclidean distance clustering algorithm;
S24: mapping the point cloud data back to the RGB-D image.
3. The method for simultaneous recognition and modeling of indoor scene objects according to claim 1, wherein step S3 comprises the steps of
S31: extracting 2D local feature points and feature descriptors of the object using the SIFT feature extraction method;
S32: mapping the 2D local feature points into 3D space to obtain 3D feature points;
S33: generating 3D feature descriptors using the FPFH algorithm.
4. The method for simultaneous recognition and modeling of indoor scene objects according to claim 1, wherein step S4 comprises the steps of
S41: rejecting erroneous matches using the RANSAC algorithm;
S42: computing the 2D feature registration distance between the object to be recognized and the object model;
S43: computing the 3D feature registration distance between the object to be recognized and the object model;
S44: fusing the 2D and 3D information and saving the similarity result;
S45: comparing all similarity results and taking the result with the maximum similarity as the final result of this object recognition.
CN201610832845.6A 2016-09-19 2016-09-19 Method for simultaneous recognition and modeling of indoor scene objects Active CN106529394B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610832845.6A CN106529394B (en) 2016-09-19 2016-09-19 Method for simultaneous recognition and modeling of indoor scene objects

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610832845.6A CN106529394B (en) 2016-09-19 2016-09-19 Method for simultaneous recognition and modeling of indoor scene objects

Publications (2)

Publication Number Publication Date
CN106529394A CN106529394A (en) 2017-03-22
CN106529394B true CN106529394B (en) 2019-07-19

Family

ID=58344928

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610832845.6A Active CN106529394B (en) 2016-09-19 2016-09-19 Method for simultaneous recognition and modeling of indoor scene objects

Country Status (1)

Country Link
CN (1) CN106529394B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107123027B (en) * 2017-04-28 2021-06-01 广东工业大学 Deep learning-based cosmetic recommendation method and system
CN108133181B (en) * 2017-12-12 2022-03-18 北京小米移动软件有限公司 Method for acquiring indication information, AR device and storage medium
CN108537163A (en) * 2018-04-04 2018-09-14 天目爱视(北京)科技有限公司 A biometric four-dimensional data recognition method and system based on visible-light photography
CN110553849A (en) * 2018-06-01 2019-12-10 上汽通用汽车有限公司 Driving condition evaluation system and method
CN109360174B (en) * 2018-08-29 2020-07-07 清华大学 Three-dimensional scene reconstruction method and system based on camera pose
CN109360234B (en) * 2018-08-29 2020-08-21 清华大学 Three-dimensional scene reconstruction method and system based on total uncertainty
CN109242959B (en) * 2018-08-29 2020-07-21 清华大学 Three-dimensional scene reconstruction method and system
CN109344813B (en) * 2018-11-28 2023-11-28 北醒(北京)光子科技有限公司 RGBD-based target identification and scene modeling method
EP3894788A4 (en) * 2018-12-13 2022-10-05 Continental Automotive GmbH Method and system for generating an environment model for positioning
CN110097598B (en) * 2019-04-11 2021-09-07 暨南大学 Three-dimensional object pose estimation method based on PVFH (geometric spatial gradient frequency) features
CN111797268B (en) * 2020-07-17 2023-12-26 中国海洋大学 RGB-D image retrieval method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271469A (en) * 2008-05-10 2008-09-24 深圳先进技术研究院 Two-dimension image recognition based on three-dimensional model warehouse and object reconstruction method
CN101877143A (en) * 2009-12-09 2010-11-03 中国科学院自动化研究所 Three-dimensional scene reconstruction method of two-dimensional image group
CN102609979A (en) * 2012-01-17 2012-07-25 北京工业大学 Fourier-Mellin domain based two-dimensional/three-dimensional image registration method
CN104715254A (en) * 2015-03-17 2015-06-17 东南大学 Ordinary object recognizing method based on 2D and 3D SIFT feature fusion
CN105261060A (en) * 2015-07-23 2016-01-20 东华大学 Point cloud compression and inertial navigation based mobile context real-time three-dimensional reconstruction method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Miaomiao Liu et al., "Generic Object Recognition Based on the Fusion of 2D and 3D SIFT Descriptors," Information Fusion, 2015-07-09, pp. 1085-1092
Alvaro Collet et al., "HerbDisc: Towards Lifelong Robotic Object Discovery," Robotics Research, vol. 34, no. 1, 2015-01-31, pp. 3-25
David F. Fouhey et al., "Object Recognition Robust to Imperfect Depth Data," Computer Vision, vol. 2, 2012-10-13, pp. 83-92

Also Published As

Publication number Publication date
CN106529394A (en) 2017-03-22

Similar Documents

Publication Publication Date Title
CN106529394B (en) Method for simultaneous recognition and modeling of indoor scene objects
WO2016110005A1 (en) Gray level and depth information based multi-layer fusion multi-modal face recognition device and method
CN107886129B (en) Mobile robot map closed-loop detection method based on visual word bag
US9940577B2 (en) Finding semantic parts in images
CN107424161B (en) Coarse-to-fine indoor scene image layout estimation method
CN106407958B (en) Face feature detection method based on double-layer cascade
JP5394959B2 (en) Discriminator generating apparatus and method, and program
Xiao et al. Joint affinity propagation for multiple view segmentation
CN110866934B (en) Normative coding-based complex point cloud segmentation method and system
CN103530894B (en) A kind of video object method for tracing based on multiple dimensioned piece of rarefaction representation and system thereof
JP2016018538A (en) Image recognition device and method and program
CN106815824B An image neighbor optimization method for improving large-scale three-dimensional reconstruction efficiency
CN109492589A (en) The recognition of face working method and intelligent chip merged by binary features with joint stepped construction
CN112562081B (en) Visual map construction method for visual layered positioning
WO2023151237A1 (en) Face pose estimation method and apparatus, electronic device, and storage medium
CN109101864A (en) The upper half of human body action identification method returned based on key frame and random forest
CN107291936A (en) The hypergraph hashing image retrieval of a kind of view-based access control model feature and sign label realizes that Lung neoplasm sign knows method for distinguishing
CN104834894B (en) A kind of gesture identification method of combination binary coding and class-Hausdorff distances
Liu et al. Regularization based iterative point match weighting for accurate rigid transformation estimation
CN104732247B (en) A kind of human face characteristic positioning method
Du High-precision portrait classification based on mtcnn and its application on similarity judgement
CN110942453A (en) CT image lung lobe identification method based on neural network
WO2020232697A1 (en) Online face clustering method and system
CN113887509B (en) Rapid multi-modal video face recognition method based on image set
CN114445649A (en) Method for detecting RGB-D single image shadow by multi-scale super-pixel fusion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant