CN106529394A - Indoor scene and object simultaneous recognition and modeling method - Google Patents

Indoor scene and object simultaneous recognition and modeling method

Info

Publication number
CN106529394A
CN106529394A (application CN201610832845.6A)
Authority
CN
China
Prior art keywords
feature
dimensional
modeling
indoor scene
modeling method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610832845.6A
Other languages
Chinese (zh)
Other versions
CN106529394B (en)
Inventor
曾碧
陈佳洲
黄文�
曹军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN201610832845.6A priority Critical patent/CN106529394B/en
Publication of CN106529394A publication Critical patent/CN106529394A/en
Application granted granted Critical
Publication of CN106529394B publication Critical patent/CN106529394B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G06V20/653 Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for simultaneous recognition and modeling of indoor scene objects. The method comprises the steps of: inputting an RGB-D image; performing object segmentation; extracting SIFT features and FPFH features; fusing the SIFT and FPFH features; performing object recognition; and performing object modeling. The object modeling calculates the pose relation between object parts and sets a threshold: if the pose change is smaller than the threshold, the two object parts are fused and treated as a single node in the view graph; otherwise, both object parts are kept in the view graph as two separate nodes. Compared with the prior art, the method achieves online recognition and modeling of objects, proposes an improved view-graph method, reduces redundant data, lowers the data storage burden, and improves recognition efficiency.

Description

A method for simultaneous recognition and modeling of indoor scene objects
Technical field
The present invention relates to the field of information processing, and in particular to a method for simultaneous recognition and modeling of indoor scene objects.
Background technology
At present, in the research fields of robot object recognition and scene understanding, most algorithms concentrate on matching test pictures against a training dataset, or on training a classifier to achieve recognition, and generally do not consider the pose of the object. For example, Liang Mingjie et al. propose an object model representation based on a view graph, provide a probabilistic observation model of the object on top of this representation, finally reduce recognition and modeling to probabilistic inference, and realize simultaneous recognition and modeling through maximum likelihood estimation and optimization. Alvaro Collet Romea proposes an indoor object recognition and modeling method based on a constraint framework; this method can automatically process the whole video data stream and discover objects. Its most notable feature is the constraint framework: constraints from daily life can be incorporated into the framework in a mathematical way, and constraints can be added, removed, or changed according to the scene, which increases both the object recognition rate and the recognition efficiency.
However, many object recognition applications require accurate pose information, such as robot manipulation, path planning, and augmented reality, and these algorithms need training data, so they cannot achieve online recognition, which seriously limits robot performance in indoor scenes. On the modeling side, when a robot moves around an indoor environment for a long time, observed parts of indoor objects are acquired repeatedly, so the built model accumulates redundant data: the longer the robot walks, the more observations of the same object it obtains, the larger the object model becomes, and the more the redundant data grows. This is harmful not only to object modeling and data storage, but also to the efficiency of later object recognition.
Summary of the invention
To overcome the deficiencies of the prior art, realize online object recognition and modeling, solve the data redundancy problem, reduce the memory burden, and effectively improve recognition efficiency, the present invention proposes a method for simultaneous recognition and modeling of indoor scene objects.
The technical scheme of the present invention is achieved as follows:
A method for simultaneous recognition and modeling of indoor scene objects, comprising the steps:
S1: Input an RGB-D image;
S2: Object segmentation;
S3: Extract SIFT features and FPFH features;
S4: Fuse the SIFT features and FPFH features;
S5: Object recognition;
S6: Object modeling: calculate the pose relation between object parts and set a threshold; if the pose change is smaller than the threshold, fuse the two object parts and treat them as a single node in the view graph; otherwise, keep both object parts in the view graph as two separate nodes.
Further, step S2 includes the steps:
S21: Extract planar point clouds using the RANSAC method;
S22: Reject point clouds unrelated to the object data;
S23: Obtain object point clouds by a Euclidean distance clustering algorithm;
S24: Map the point cloud data back to the RGB-D image.
Further, step S3 includes the steps:
S31: Extract the object's two-dimensional local feature points and feature descriptors using the SIFT feature extraction method;
S32: Map the two-dimensional local feature points into three-dimensional space to obtain three-dimensional feature points;
S33: Generate three-dimensional feature descriptors using the FPFH algorithm.
Further, step S4 includes the step:
S41: Reject wrong match points using the RANSAC algorithm.
Further, step S5 includes the steps:
S51: Calculate the two-dimensional feature registration distance between the object to be recognized and the object model;
S52: Calculate the three-dimensional feature registration distance between the object to be recognized and the object model;
S53: Fuse the two-dimensional and three-dimensional information, and store the similarity result;
S54: Compare all similarity results and take the result with the greatest similarity as the final recognition result for this object.
The beneficial effect of the present invention is that, compared with the prior art, the invention achieves online object recognition and modeling, proposes an improved view-graph method, reduces data redundancy, lowers the data storage burden, and improves recognition efficiency.
Description of the drawings
Fig. 1 is a flow chart of the indoor scene object simultaneous recognition and modeling method of the present invention.
Fig. 2 is the original point cloud of one embodiment of the invention.
Fig. 3 is the point cloud of Fig. 2 after filtering.
Fig. 4 is the RANSAC-segmented point cloud of one embodiment of the invention.
Fig. 5 is the cluster-segmented point cloud.
Fig. 6 is the object image before segmentation.
Fig. 7 is the object image of Fig. 6 after segmentation.
Fig. 8 is a schematic diagram of a registration result containing some wrong matches.
Fig. 9 is the registration result of Fig. 8 after wrong match points are rejected with the RANSAC algorithm.
Fig. 10 is a schematic diagram of a wrong registration result.
Fig. 11 is the three-dimensional registration result for Fig. 10.
Fig. 12 is the flow chart of the recognition algorithm that fuses two-dimensional and three-dimensional information.
Fig. 13 is a schematic diagram of the view graph model.
Fig. 14 is the improved view graph model of Fig. 13.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. It is clear that the described embodiments are only a part of the embodiments of the invention rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the protection scope of the invention.
Referring to Fig. 1, a method for simultaneous recognition and modeling of indoor scene objects comprises the steps:
S1: Input an RGB-D image;
S2: Object segmentation;
S3: Extract SIFT features and FPFH features;
S4: Fuse the SIFT features and FPFH features;
S5: Object recognition;
S6: Object modeling: calculate the pose relation between object parts and set a threshold; if the pose change is smaller than the threshold, fuse the two object parts and treat them as a single node in the view graph; otherwise, keep both object parts in the view graph as two separate nodes.
In step S2, point cloud data C is first computed from the RGB image and the depth image, as shown in Fig. 2. Because the obtained point cloud is too large and would slow down later operations such as segmentation and recognition, it is downsampled with a voxel-grid method to reduce the point cloud data, as shown in Fig. 3. To obtain the objects in the scene, planar point clouds are extracted with the RANSAC method (random sample consensus algorithm); since planar point clouds such as tabletops or walls are generally unrelated to the object data, they can be rejected, as shown in Fig. 4. As can be seen from Fig. 4, the remaining cloud contains the objects and noise; finally, the object point clouds {Cm}, m = 1...M, are obtained with a Euclidean distance clustering algorithm, as shown in Fig. 5. Mapping the point cloud data back to the RGB image gives the segmented object RGB images {Im}, m = 1...M, as shown in Figs. 6 and 7. The segmented objects can thus be expressed as {Im, Cm}, m = 1...M.
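The segmentation pipeline of step S2 (voxel-grid downsampling, RANSAC plane removal, distance-based clustering) can be sketched as follows. This is only a minimal illustration using the Open3D library rather than the patent's own implementation; the file name, voxel size, thresholds, and the use of DBSCAN as a stand-in for the Euclidean distance clustering are all assumptions.

```python
# Minimal sketch of step S2 with Open3D; parameter values are illustrative only.
import numpy as np
import open3d as o3d

# Scene cloud computed from the RGB and depth images (file name is an assumption)
cloud = o3d.io.read_point_cloud("scene.pcd")

# Voxel-grid downsampling to thin the raw cloud before segmentation
cloud = cloud.voxel_down_sample(voxel_size=0.01)

# RANSAC plane fitting: the dominant plane (table top or wall) is removed
plane_model, inlier_idx = cloud.segment_plane(distance_threshold=0.02,
                                              ransac_n=3,
                                              num_iterations=1000)
objects_cloud = cloud.select_by_index(inlier_idx, invert=True)

# Distance-based clustering of the remaining points into object candidates
# (DBSCAN here stands in for the patent's Euclidean distance clustering)
labels = np.array(objects_cloud.cluster_dbscan(eps=0.03, min_points=50))
object_clouds = [objects_cloud.select_by_index(np.where(labels == k)[0])
                 for k in range(labels.max() + 1)]
```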
In step S3, each frame of the scene can be segmented into a series of objects {Im, Cm}. The SIFT (scale-invariant feature transform) extraction method is used to extract the object's two-dimensional local feature points and feature descriptors {Xmr, Smr} from the RGB image Im. The two-dimensional local feature points are then mapped into three-dimensional space to obtain three-dimensional feature points, and the FPFH (fast point feature histogram) algorithm generates the three-dimensional feature descriptors {Zmr, Fmr}, where X denotes a two-dimensional feature point, S a two-dimensional feature descriptor, Z a three-dimensional feature point, F a three-dimensional feature descriptor, m the m-th object, and r the r-th feature point.
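A sketch of this feature extraction step is given below: SIFT keypoints and descriptors are computed on the object's RGB crop, the keypoints are back-projected to 3D with the depth image, and FPFH descriptors are computed on the resulting 3D points. The camera intrinsics, the millimetre depth scale, and the search radii are illustrative assumptions, not values from the patent.

```python
# Sketch of step S3 with OpenCV (SIFT) and Open3D (FPFH); parameters are assumptions.
import cv2
import numpy as np
import open3d as o3d

fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5   # assumed Kinect-style intrinsics

def extract_features(rgb, depth):
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)

    # Back-project each 2D keypoint (u, v) into 3D using the depth map
    pts3d, valid = [], []
    for i, kp in enumerate(keypoints):
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        z = depth[v, u] / 1000.0              # depth assumed stored in millimetres
        if z > 0:
            pts3d.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
            valid.append(i)

    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.asarray(pts3d))
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamRadius(radius=0.05))

    # FPFH descriptors for the back-projected 3D feature points
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamRadius(radius=0.10))
    return [keypoints[i] for i in valid], descriptors[valid], pcd, fpfh
```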
In step S4, the object is recognized by fusing two-dimensional and three-dimensional information, which improves the object recognition rate. First, the SIFT feature points and descriptors of the object to be recognized are matched against the object model with the KNN (K nearest neighbours) algorithm; however, SIFT registration also produces wrong matches, as shown in Fig. 8, where several features are registered to the wrong locations. If these wrong results were used, the accuracy of the later object pose computation would suffer badly, so wrong match points are rejected with the RANSAC algorithm, as shown in Fig. 9. Sometimes, however, the two-dimensional image surface is too weakly distinguishable and its texture features are not obvious, causing the SIFT registration to produce large errors, as shown in Fig. 10, which would lead to a wrong final recognition. Three-dimensional object structure features are therefore incorporated on top of the two-dimensional ones; the three-dimensional registration result, shown in Fig. 11, effectively compensates for the shortcomings of the two-dimensional registration.
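The mismatch rejection of step S4 can be sketched as follows: KNN matching of SIFT descriptors is followed by a RANSAC model fit whose inlier mask discards the wrong matches. The ratio-test threshold, the RANSAC tolerance, and the use of a homography as the fitted model are assumptions made for this illustration.

```python
# Sketch of step S4: KNN descriptor matching plus RANSAC outlier rejection.
import cv2
import numpy as np

def filter_matches(desc_query, desc_model, kp_query, kp_model):
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(desc_query, desc_model, k=2)

    # Lowe-style ratio test keeps only clearly distinctive matches
    pairs = [p for p in knn if len(p) == 2]
    good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
    if len(good) < 4:
        return []                         # a homography needs at least 4 matches

    src = np.float32([kp_query[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_model[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC fits a consistent transform; the mask flags the inlier matches
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if mask is None:
        return []
    return [m for m, keep in zip(good, mask.ravel()) if keep]
```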
In step S5, the recognition algorithm fusing two-dimensional and three-dimensional information is as follows:
Step 1: Calculate the two-dimensional feature registration distance between the object to be recognized and the object model, as shown in formula (1):

C1(s, t) = d(Ss, St)    (1)

where s denotes the object to be recognized, t denotes the object model, d(·) denotes the Euclidean distance, and S denotes the SIFT feature descriptor. C1(s, t) is the Euclidean distance between the SIFT features of the object to be recognized and those of the object model, i.e. the two-dimensional similarity, and it is normalized. When C1(s, t) < 0.5 the two objects are considered similar and Step 2 is executed; otherwise Step 1 continues.
Step 2: Calculate the three-dimensional feature registration distance between the object to be recognized and the object model, as shown in formula (2):

C2(s, t) = 1 − d(Fs, Ft)    (2)

where s denotes the object to be recognized, t denotes the object model, d(·) denotes the Euclidean distance, and F denotes the FPFH feature descriptor. C2(s, t) is derived from the Euclidean distance between the FPFH features of the object to be recognized and those of the object model, i.e. the three-dimensional similarity, and it is normalized. When C2(s, t) > 0.5 the two objects are considered similar and Step 3 is executed; otherwise Step 1 is executed.
Step 3: Fuse the two-dimensional and three-dimensional information and store the similarity result, as shown in formula (3):

T(s, t) = C2(s, t) · exp(−k · C1(s, t))    (3)

where T(s, t) denotes the similarity after fusing the two-dimensional and three-dimensional information. C1 is placed in the exponent mainly because the two-dimensional SIFT features are more discriminative and meaningful for object recognition, and k adjusts its weight. If the matched object model is the last model, Step 4 is executed; otherwise Step 1 is executed.
Step 4: Compare all similarity results and take the result with the greatest similarity as the final recognition result for this object. The algorithm flow chart is shown in Fig. 12.
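A compact sketch of this recognition loop (formulas (1)–(3)) is given below. It assumes the descriptors have already been put into one-to-one correspondence by the matching of step S4; the d/(1+d) normalization of the distances and the value k = 1.0 are illustrative choices, since the patent only states that both distances are normalized.

```python
# Sketch of the fused recognition score of step S5 (formulas (1)-(3)).
import numpy as np

def recognize(query, models, k=1.0):
    best_label, best_score = None, -np.inf
    for label, model in models.items():
        # C1: normalized 2D registration distance between matched SIFT descriptors
        d2 = np.mean(np.linalg.norm(query["sift"] - model["sift"], axis=1))
        c1 = d2 / (1.0 + d2)
        if c1 >= 0.5:                     # Step 1 gate: require C1 < 0.5
            continue

        # C2: 3D similarity, one minus the normalized FPFH registration distance
        d3 = np.mean(np.linalg.norm(query["fpfh"] - model["fpfh"], axis=1))
        c2 = 1.0 - d3 / (1.0 + d3)
        if c2 <= 0.5:                     # Step 2 gate: require C2 > 0.5
            continue

        # Step 3: fuse 2D and 3D evidence, 2D term in the exponent (formula (3))
        t = c2 * np.exp(-k * c1)
        if t > best_score:                # Step 4: keep the most similar model
            best_label, best_score = label, t
    return best_label, best_score
```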
In step S6, the pose relation between object parts is calculated and a threshold is set. If the pose change is smaller than the threshold, the two object parts are fused and treated as a single node in the view graph; otherwise both object parts are kept in the view graph as two nodes. In this way the view graph keeps its property of being easy to modify, while data redundancy is reduced and operating efficiency is improved.
The data retained when similar views are fused are: (1) the relative pose between the model views — fusion produces no new node but does produce a new edge, so the relative pose between the model views must be recorded; (2) the feature point positions of the two views, namely the feature point positions of the original model view and the transformed feature point positions of the new view, where feature points matched between the two views are kept only once; (3) the feature descriptors, including the SIFT and FPFH descriptors used for matching between views, where descriptors matched between the two views are kept only once; and (4) the point cloud data of the two views, namely the point cloud of the original model view and the transformed point cloud of the new view, where the fused point cloud is downsampled with the voxel-grid method to reduce the point cloud data. The generation process of the improved view graph can be expressed by the following algorithm:
Here V0 denotes the original view graph nodes, E0 the original view graph edges, hi the newly produced i-th view, P(hi) the similarity between view hi and the object model in the database, pmin the similarity threshold, Pangle(hi) the relative angle between hi and the most similar view in the database, Panglemin the angle threshold, Pdistance(hi) the relative distance between hi and the closest view in the database, and Pdismin the distance threshold. That is, if both the relative angle and the relative distance between view hi and the most similar database view are smaller than the corresponding thresholds, the two views are fused; otherwise they are not fused, and the view is added directly to the view graph as a new node. P(hi, hj) denotes the similarity between the i-th view and the j-th view of the database model; if this similarity exceeds the threshold, a similarity relation exists between the two views, and the transformation between the i-th and j-th views is added to the database object model as a new edge. Figs. 13 and 14 compare the view graph model with the improved view graph model, where dark nodes denote fused nodes, grey nodes denote newly added nodes, and dashed edges denote newly added or changed edges. As can be seen from the figures, the improved view graph model adds no node and only one new edge, whereas the original view graph model adds one node and three edges, so the method proposed here reduces data redundancy in theory and achieves a better object modeling effect.
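The algorithm itself appears in the patent as an image; the following is only a sketch of the decision logic described above. The ViewNode/ViewGraph data structures, the similarity function, and all threshold values are assumptions introduced for illustration, not definitions from the patent.

```python
# Sketch of the improved view-graph update of step S6 (data layout is assumed).
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ViewNode:
    pose: np.ndarray                 # 4x4 pose of this view in the model frame
    features: list                   # SIFT/FPFH descriptors kept for matching
    cloud: np.ndarray                # Nx3 point cloud of this view
    fused_poses: list = field(default_factory=list)   # relative poses of fused views

@dataclass
class ViewGraph:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)          # (node_a, node_b, relative pose)

def rotation_angle(T):
    """Rotation magnitude (radians) of a 4x4 relative pose."""
    return np.arccos(np.clip((np.trace(T[:3, :3]) - 1.0) / 2.0, -1.0, 1.0))

def update_view_graph(graph, new_view, similarity, p_min=0.5,
                      angle_min=np.deg2rad(10), dist_min=0.05):
    # P(h_i): similarity of the new view to the closest existing node
    closest = max(graph.nodes, key=lambda n: similarity(new_view, n), default=None)
    if closest is not None:
        T_rel = np.linalg.inv(closest.pose) @ new_view.pose
        if (rotation_angle(T_rel) < angle_min
                and np.linalg.norm(T_rel[:3, 3]) < dist_min):
            # Pose change below both thresholds: fuse into the existing node
            closest.fused_poses.append(T_rel)                  # (1) record relative pose
            closest.features.extend(new_view.features)         # (2)(3) matched features
                                                               # would be kept only once
            closest.cloud = np.vstack([closest.cloud, new_view.cloud])  # (4) would then
                                                               # be voxel-downsampled
            return graph
    # Otherwise the view becomes a new node; an edge P(h_i, h_j) is added to every
    # sufficiently similar model view
    graph.nodes.append(new_view)
    for node in graph.nodes[:-1]:
        if similarity(new_view, node) > p_min:
            graph.edges.append((new_view, node,
                                np.linalg.inv(node.pose) @ new_view.pose))
    return graph
```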
The above are preferred embodiments of the present invention. It should be noted that, for those of ordinary skill in the art, several improvements and modifications can be made without departing from the principles of the invention, and these improvements and modifications are also regarded as falling within the protection scope of the present invention.

Claims (5)

1. A method for simultaneous recognition and modeling of indoor scene objects, characterized by comprising the steps of:
S1: inputting an RGB-D image;
S2: object segmentation;
S3: extracting SIFT features and FPFH features;
S4: fusing the SIFT features and FPFH features;
S5: object recognition;
S6: object modeling, which calculates the pose relation between object parts and sets a threshold; if the pose change is smaller than the threshold, the two object parts are fused and treated as a single node in the view graph; otherwise, both object parts are kept in the view graph as two nodes.
2. The indoor scene object simultaneous recognition and modeling method of claim 1, characterized in that step S2 includes the steps of:
S21: extracting planar point clouds using the RANSAC method;
S22: rejecting point clouds unrelated to the object data;
S23: obtaining object point clouds by a Euclidean distance clustering algorithm;
S24: mapping the point cloud data back to the RGB-D image.
3. The indoor scene object simultaneous recognition and modeling method of claim 1, wherein step S3 includes the steps of:
S31: extracting the object's two-dimensional local feature points and feature descriptors using the SIFT feature extraction method;
S32: mapping the two-dimensional local feature points into three-dimensional space to obtain three-dimensional feature points;
S33: generating three-dimensional feature descriptors using the FPFH algorithm.
4. The indoor scene object simultaneous recognition and modeling method of claim 1, wherein step S4 includes the step of:
S41: rejecting wrong match points using the RANSAC algorithm.
5. The indoor scene object simultaneous recognition and modeling method of claim 1, wherein step S5 includes the steps of:
S51: calculating the two-dimensional feature registration distance between the object to be recognized and the object model;
S52: calculating the three-dimensional feature registration distance between the object to be recognized and the object model;
S53: fusing the two-dimensional and three-dimensional information and storing the similarity result;
S54: comparing all similarity results and taking the result with the greatest similarity as the final recognition result for this object.
CN201610832845.6A 2016-09-19 2016-09-19 Indoor scene object simultaneous recognition and modeling method Active CN106529394B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610832845.6A CN106529394B (en) 2016-09-19 2016-09-19 Indoor scene object simultaneous recognition and modeling method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610832845.6A CN106529394B (en) 2016-09-19 2016-09-19 Indoor scene object simultaneous recognition and modeling method

Publications (2)

Publication Number Publication Date
CN106529394A true CN106529394A (en) 2017-03-22
CN106529394B CN106529394B (en) 2019-07-19

Family

ID=58344928

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610832845.6A Active CN106529394B (en) 2016-09-19 2016-09-19 Indoor scene object simultaneous recognition and modeling method

Country Status (1)

Country Link
CN (1) CN106529394B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107123027A (en) * 2017-04-28 2017-09-01 广东工业大学 A kind of cosmetics based on deep learning recommend method and system
CN108133181A (en) * 2017-12-12 2018-06-08 北京小米移动软件有限公司 Obtain method, AR equipment and the storage medium of instruction information
CN108537163A (en) * 2018-04-04 2018-09-14 天目爱视(北京)科技有限公司 A kind of biological characteristic 4 D data recognition methods taken pictures based on visible light and system
CN109242959A (en) * 2018-08-29 2019-01-18 清华大学 Method for reconstructing three-dimensional scene and system
CN109344813A (en) * 2018-11-28 2019-02-15 北醒(北京)光子科技有限公司 A kind of target identification and scene modeling method and device based on RGBD
CN109360234A (en) * 2018-08-29 2019-02-19 清华大学 Method for reconstructing three-dimensional scene and system based on overall uncertainty
CN109360174A (en) * 2018-08-29 2019-02-19 清华大学 Method for reconstructing three-dimensional scene and system based on camera pose
CN110097598A (en) * 2019-04-11 2019-08-06 暨南大学 A kind of three-dimension object position and orientation estimation method based on PVFH feature
CN110553849A (en) * 2018-06-01 2019-12-10 上汽通用汽车有限公司 Driving condition evaluation system and method
CN111797268A (en) * 2020-07-17 2020-10-20 中国海洋大学 RGB-D image retrieval method
CN113227713A (en) * 2018-12-13 2021-08-06 大陆汽车有限责任公司 Method and system for generating environment model for positioning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271469A (en) * 2008-05-10 2008-09-24 深圳先进技术研究院 Two-dimension image recognition based on three-dimensional model warehouse and object reconstruction method
CN101877143A (en) * 2009-12-09 2010-11-03 中国科学院自动化研究所 Three-dimensional scene reconstruction method of two-dimensional image group
CN102609979A (en) * 2012-01-17 2012-07-25 北京工业大学 Fourier-Mellin domain based two-dimensional/three-dimensional image registration method
CN104715254A (en) * 2015-03-17 2015-06-17 东南大学 Ordinary object recognizing method based on 2D and 3D SIFT feature fusion
CN105261060A (en) * 2015-07-23 2016-01-20 东华大学 Point cloud compression and inertial navigation based mobile context real-time three-dimensional reconstruction method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271469A (en) * 2008-05-10 2008-09-24 深圳先进技术研究院 Two-dimension image recognition based on three-dimensional model warehouse and object reconstruction method
CN101877143A (en) * 2009-12-09 2010-11-03 中国科学院自动化研究所 Three-dimensional scene reconstruction method of two-dimensional image group
CN102609979A (en) * 2012-01-17 2012-07-25 北京工业大学 Fourier-Mellin domain based two-dimensional/three-dimensional image registration method
CN104715254A (en) * 2015-03-17 2015-06-17 东南大学 Ordinary object recognizing method based on 2D and 3D SIFT feature fusion
CN105261060A (en) * 2015-07-23 2016-01-20 东华大学 Point cloud compression and inertial navigation based mobile context real-time three-dimensional reconstruction method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ALVARO COLLET 等: "HerbDisc: Towards Lifelong Robotic Object Discovery", 《ROBOTICS RESEARCH ARCHIVE》 *
DAVID F. FOUHEY 等: "Object Recognition Robust to Imperfect Depth Data", 《COMPUTER VISION》 *
MIAOMIAO LIU 等: "Generic Object Recognition Based on the Fusion of 2D and 3D SIFT Descriptors", 《INFORMATION FUSION》 *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107123027A (en) * 2017-04-28 2017-09-01 广东工业大学 A kind of cosmetics based on deep learning recommend method and system
CN108133181A (en) * 2017-12-12 2018-06-08 北京小米移动软件有限公司 Obtain method, AR equipment and the storage medium of instruction information
CN108133181B (en) * 2017-12-12 2022-03-18 北京小米移动软件有限公司 Method for acquiring indication information, AR device and storage medium
CN108537163A (en) * 2018-04-04 2018-09-14 天目爱视(北京)科技有限公司 A kind of biological characteristic 4 D data recognition methods taken pictures based on visible light and system
CN110553849A (en) * 2018-06-01 2019-12-10 上汽通用汽车有限公司 Driving condition evaluation system and method
CN109360234B (en) * 2018-08-29 2020-08-21 清华大学 Three-dimensional scene reconstruction method and system based on total uncertainty
CN109242959A (en) * 2018-08-29 2019-01-18 清华大学 Method for reconstructing three-dimensional scene and system
CN109360234A (en) * 2018-08-29 2019-02-19 清华大学 Method for reconstructing three-dimensional scene and system based on overall uncertainty
CN109360174A (en) * 2018-08-29 2019-02-19 清华大学 Method for reconstructing three-dimensional scene and system based on camera pose
CN109360174B (en) * 2018-08-29 2020-07-07 清华大学 Three-dimensional scene reconstruction method and system based on camera pose
CN109242959B (en) * 2018-08-29 2020-07-21 清华大学 Three-dimensional scene reconstruction method and system
CN109344813A (en) * 2018-11-28 2019-02-15 北醒(北京)光子科技有限公司 A kind of target identification and scene modeling method and device based on RGBD
CN109344813B (en) * 2018-11-28 2023-11-28 北醒(北京)光子科技有限公司 RGBD-based target identification and scene modeling method
CN113227713A (en) * 2018-12-13 2021-08-06 大陆汽车有限责任公司 Method and system for generating environment model for positioning
CN110097598B (en) * 2019-04-11 2021-09-07 暨南大学 Three-dimensional object pose estimation method based on PVFH (geometric spatial gradient frequency) features
CN110097598A (en) * 2019-04-11 2019-08-06 暨南大学 A kind of three-dimension object position and orientation estimation method based on PVFH feature
CN111797268A (en) * 2020-07-17 2020-10-20 中国海洋大学 RGB-D image retrieval method
CN111797268B (en) * 2020-07-17 2023-12-26 中国海洋大学 RGB-D image retrieval method

Also Published As

Publication number Publication date
CN106529394B (en) 2019-07-19

Similar Documents

Publication Publication Date Title
CN106529394A (en) Indoor scene and object simultaneous recognition and modeling method
Song et al. Region-based quality estimation network for large-scale person re-identification
CN108960140B (en) Pedestrian re-identification method based on multi-region feature extraction and fusion
CN109583482B (en) Infrared human body target image identification method based on multi-feature fusion and multi-kernel transfer learning
Lynen et al. Placeless place-recognition
CN105740899B (en) A kind of detection of machine vision image characteristic point and match compound optimization method
CN108038420B (en) Human behavior recognition method based on depth video
CN110674874B (en) Fine-grained image identification method based on target fine component detection
CN109063649B (en) Pedestrian re-identification method based on twin pedestrian alignment residual error network
WO2016110005A1 (en) Gray level and depth information based multi-layer fusion multi-modal face recognition device and method
CN107424161B (en) Coarse-to-fine indoor scene image layout estimation method
CN106407958B (en) Face feature detection method based on double-layer cascade
CN113408492B (en) Pedestrian re-identification method based on global-local feature dynamic alignment
US11392787B2 (en) Method for grasping texture-less metal parts based on bold image matching
WO2023151237A1 (en) Face pose estimation method and apparatus, electronic device, and storage medium
CN107291936A (en) The hypergraph hashing image retrieval of a kind of view-based access control model feature and sign label realizes that Lung neoplasm sign knows method for distinguishing
Uddin et al. Human Activity Recognition via 3-D joint angle features and Hidden Markov models
CN104063701B (en) Fast electric television stations TV station symbol recognition system and its implementation based on SURF words trees and template matches
CN112507778B (en) Loop detection method of improved bag-of-words model based on line characteristics
CN112562081A (en) Visual map construction method for visual layered positioning
CN101986295A (en) Image clustering method based on manifold sparse coding
CN104732247B (en) A kind of human face characteristic positioning method
CN112836566A (en) Multitask neural network face key point detection method for edge equipment
CN109829459B (en) Visual positioning method based on improved RANSAC
CN106980845B (en) Face key point positioning method based on structured modeling

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant