CN104657985B - Static vision target occlusion bypassing method based on depth image block information - Google Patents

Static vision target occlusion bypassing method based on depth image block information

Info

Publication number
CN104657985B
CN104657985B · CN201510053316.1A · CN201510053316A
Authority
CN
China
Prior art keywords
candidate
observed direction
triangle
small section
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510053316.1A
Other languages
Chinese (zh)
Other versions
CN104657985A (en)
Inventor
张世辉
桑榆
刘建新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HENGYE CENTURY SECURITY TECHNOLOGY Co.,Ltd.
Original Assignee
Yanshan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yanshan University filed Critical Yanshan University
Priority to CN201510053316.1A priority Critical patent/CN104657985B/en
Publication of CN104657985A publication Critical patent/CN104657985A/en
Application granted granted Critical
Publication of CN104657985B publication Critical patent/CN104657985B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

The invention discloses a static visual target occlusion avoidance method based on depth image occlusion information. A depth image of the visual target is first acquired from an initial observation position, and the occlusion-related information of the depth image is obtained with an occlusion detection algorithm. Starting from the triangular facets of a triangular mesh model, a set of candidate observation directions is determined from the normal vectors of the sub-regions formed by the facets; the angles between these candidate directions and each facet normal then determine the visible space of each candidate direction, from which the next best observation position is computed, thereby achieving occlusion avoidance. The method builds a mathematical model of the occluded region directly from the occlusion information, requires no prior knowledge of the visual target, places no special requirements on the target's shape, and is applicable to visual targets with different surface profiles.

Description

Static visual target occlusion avoidance method based on depth image occlusion information
Technical field
The invention belongs to the field of computer vision, and in particular relates to a static visual target occlusion avoidance method based on depth image occlusion information.
Background technology
Occlusion avoidance has long been one of the important and difficult research topics in fields such as automatic assembly, target recognition, three-dimensional reconstruction, and robot navigation. Its goal is to determine, from the occlusion information observed so far, the camera's next observation direction and position, so that the unknown information of the scene can be maximally acquired from that direction and position.
Existing occlusion avoidance methods process occlusion phenomena based mainly on two kinds of image information: luminance images and depth images. Luminance-image-based occlusion avoidance methods are relatively few, and a 2.5D depth image is more conducive to obtaining the three-dimensional information of a scene than a 2D luminance image, so most current occlusion avoidance methods are based on depth images. Li Y F and Liu Z G, in "Information entropy-based viewpoint planning for 3-D object reconstruction. IEEE Transactions on Robotics, 2005, 21(3): 324-337", constrain the camera to a fixed surface (such as a sphere or cylinder), which limits the generality of the method. The method proposed by Scott W R in "Model-based view planning. Machine Vision and Applications, 2009, 20(1): 47-69" requires scene information in advance and is therefore unsuitable for unknown scenes. The method of M. Krainin, B. Curless and D. Fox in "Autonomous generation of complete 3D object models using next best view manipulation planning, in: Proc. of the IEEE International Conference on Robotics & Automation (ICRA), 2011: 5031-5037" needs the object's outer contour and relies heavily on contour acquisition techniques. The method of Benjamin Adler and Xiao J H in "Finding Next Best Views for Autonomous UAV Mapping through GPU-Accelerated Particle Simulation. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013" depends on specific equipment.
Invention content
In view of the problems in the prior art, the object of the present invention is to provide a static visual target occlusion avoidance method based on depth image occlusion information, which avoids the need for prior knowledge of the occluded region by modeling that region appropriately, and determines a reasonable next best observation position by operating on the triangular facets, thereby achieving occlusion avoidance.
To achieve the above object, the present invention is realized through the following technical solution:
A static visual target occlusion avoidance method based on depth image occlusion information, comprising the following steps:
(1) Acquire a depth image of the visual target, and obtain its occlusion boundaries and the camera's intrinsic and extrinsic parameters;
(2) Extract the lower adjacent boundary point of each occlusion boundary point in the depth image, and determine the three-dimensional coordinates of each pixel in the image;
(3) Model the outer surface of the occluded region from the occlusion boundary and lower adjacent boundary information:
3.1) For each segment of the occlusion boundary, obtain the corresponding occluded region from the three-dimensional coordinates of its occlusion boundary points and lower adjacent boundary points, and triangulate the region to obtain a triangular mesh model;
3.2) Based on the triangular mesh model of the occluded region, compute the normal vector and area of each triangular facet;
(4) Extract the corner points of the occlusion boundary, and determine the set of candidate observation directions:
4.1) Apply a corner detection operator to each segment of the occlusion boundary to extract its corner points;
4.2) Divide the occluded region into several sub-regions according to the obtained corner information, and determine the candidate observation direction of each sub-region;
(5) Determine the next best observation position:
5.1) Take any candidate observation direction from the candidate set, compute the angle between it and the normal vector of each triangular facet in every sub-region, and determine the visible space of that candidate direction from the angle information;
5.2) Traverse all candidate observation directions in the set according to step 5.1), computing the visible space of each;
5.3) Compute the weight of each candidate observation direction, and use the weighted candidate directions to determine the next best observation direction V_NBV and the observation center point P_view;
5.4) From the obtained next best observation direction and observation center point, determine the camera observation position P_camera.
The present invention first acquires a depth image of the visual target from an initial observation position and obtains the occlusion-related information of the depth image with an occlusion detection algorithm. Starting from the triangular facets of the triangular mesh model, the set of candidate observation directions is determined from the normal vectors of the sub-regions formed by the facets; the angles between these candidate directions and each facet normal then determine the visible space of each candidate direction, from which the next best observation position is computed, achieving occlusion avoidance.
Owing to the above technical solution, the static visual target occlusion avoidance method based on depth image occlusion information provided by the invention has the following advantages over the prior art:
(1) A mathematical model of the occluded region is built from the occlusion information, without requiring prior knowledge of the visual target;
(2) No special requirements are placed on the shape of the visual target, so the method suits targets with different surface profiles;
(3) Candidate observation directions and visible spaces are determined from the facet normal vectors and areas, solving the occlusion avoidance problem on the basis of the triangular mesh model.
Description of the drawings
Fig. 1 is the flow chart of the static visual target occlusion avoidance method based on depth image occlusion information of the present invention;
Fig. 2 is a schematic diagram of lower adjacent boundary points;
Fig. 3 is a schematic diagram of sub-region merging and normal vector addition;
Fig. 4 is a schematic diagram of the camera observation position.
Specific embodiment
To make the technical solution of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings.
Fig. 1 is the flow chart of the static visual target occlusion avoidance method based on depth image occlusion information of the present invention; the method comprises the following steps:
1. Acquire the depth image of the visual target, and obtain its occlusion boundaries and the camera's intrinsic and extrinsic parameters.
Apply an occlusion detection method to an existing depth image, or to a depth image acquired with a depth camera (such as Kinect), to obtain the occlusion-related information, while recording the camera's intrinsic and extrinsic parameters at that time.
2. Extract the lower adjacent boundary point of each occlusion boundary point in the depth image, and determine the three-dimensional coordinates of each pixel in the image. The specific steps are:
2.1. Let (i, j) be the coordinates of an occlusion boundary point, and let (x, y) be a point in its eight-neighborhood, with depth values Depth(i, j) and Depth(x, y) respectively. Traverse the points in the eight-neighborhood in turn; the point with the largest depth difference is recorded as the lower adjacent boundary point of that occlusion boundary point: (x*, y*) = argmax over (x, y) in N8(i, j) of (Depth(x, y) - Depth(i, j)).
A schematic diagram of lower adjacent boundary points is shown in Fig. 2.
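As an illustrative sketch of step 2.1 (not the patent's code; the function name and the NumPy representation of the depth map are assumptions), the search over the eight-neighborhood might look like:

```python
import numpy as np

def lower_adjacent_boundary_point(depth, i, j):
    """Return the 8-neighbour of occlusion boundary point (i, j) whose
    depth differs most from Depth(i, j), per step 2.1 (the occluded
    background is assumed to lie at larger depth)."""
    best, best_diff = None, -np.inf
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue  # skip the boundary point itself
            x, y = i + di, j + dj
            if 0 <= x < depth.shape[0] and 0 <= y < depth.shape[1]:
                diff = depth[x, y] - depth[i, j]
                if diff > best_diff:
                    best, best_diff = (x, y), diff
    return best
```

For a boundary pixel sitting next to a single deeper background pixel, the function returns that deeper neighbour.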
2.2. For the acquired depth image, use the camera's intrinsic and extrinsic parameters to back-project the image according to the principle of projective transformation, obtaining the three-dimensional coordinates of each pixel in the world coordinate system.
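Step 2.2 amounts to inverting the pinhole projection. A minimal sketch, assuming a standard pinhole model with intrinsic matrix K and extrinsics X_cam = R·X_world + t (the patent does not spell out its parameterization, so these conventions are assumptions):

```python
import numpy as np

def backproject(u, v, depth, K, R, t):
    """Back-project pixel (u, v) with depth value `depth` (measured along
    the optical axis) to world coordinates under a pinhole model."""
    # pixel -> camera coordinates: scale the normalized ray by the depth
    p_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    # camera -> world coordinates: invert X_cam = R @ X_world + t
    return R.T @ (p_cam - t)
```

With identity intrinsics and extrinsics the back-projection reduces to scaling the homogeneous pixel by its depth.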
3. Model the outer surface of the occluded region from the occlusion boundary and lower adjacent boundary information:
3.1. For each segment of the occlusion boundary, add the three-dimensional coordinates of all its occlusion boundary points and their corresponding lower adjacent boundary points to a vertex set, and triangulate this vertex set according to a triangulation criterion to obtain triangular facets. Process all occlusion boundaries in turn in this way to obtain the complete triangular mesh model, as shown in Fig. 3(a).
3.2. Based on the obtained triangular mesh model, let (x_a, y_a, z_a), (x_b, y_b, z_b), (x_c, y_c, z_c) be the coordinates of the facet vertices A, B, C, and let V_AB and V_AC be the vectors from A to B and from A to C. By the definition of the cross product of two vectors, V_AB × V_AC yields a new vector Normal_i, the normal vector of the triangular facet containing V_AB and V_AC, and the facet's area is half the magnitude of this cross product. The normal vector and area of a triangular facet are therefore computed as: Normal_i = V_AB × V_AC, Squa_i = |V_AB × V_AC| / 2.
Add the normal vector Normal_i of every computed facet to the normal vector set Normal, and the area Squa_i of every facet to the area set Square.
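The facet normal and area of step 3.2 follow directly from the cross product; a minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def facet_normal_and_area(A, B, C):
    """Normal_i = V_AB x V_AC; Squa_i = |V_AB x V_AC| / 2 (step 3.2)."""
    A, B, C = map(np.asarray, (A, B, C))
    normal = np.cross(B - A, C - A)      # cross product of the two edge vectors
    area = 0.5 * np.linalg.norm(normal)  # half the parallelogram area
    return normal, area
```

For the unit right triangle in the xy-plane this gives the +z normal and area 1/2.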
4. Extract the corner points of the occlusion boundary, and determine the set of candidate observation directions:
4.1. For the triangular mesh model corresponding to each occluded region (each segment of occlusion boundary corresponds to one occluded region), obtain its occlusion boundary and extract the corner points of that boundary with a corner detection operator.
4.2. Divide the occluded region into several sub-regions according to the obtained corner information, and determine the candidate observation direction of each sub-region. The specific steps are:
4.2a. After corner detection is complete, take the line in three-dimensional space between each corner point and any lower adjacent boundary point in its eight-neighborhood as a dividing line, splitting the region into several sub-regions, as shown in Fig. 3(b). For each sub-region, sum the normal vectors Normal_i of its triangular facets to obtain the vector V_j, which serves as the normal vector of that sub-region: V_j = Σ (over i in triangle) Normal_i,
where triangle is the set of triangular facets in the sub-region and V_j is the sub-region's normal vector.
4.2b. Next, traverse each segment of the occlusion boundary and detect whether it contains corner points. If there are none, merge all triangular facets of the corresponding occluded region into a single sub-region, process it according to step 4.2a, and add the negative direction of the resulting sub-region normal vector to the candidate observation direction set V_candidate as a candidate observation direction V_j.
4.2c. If corner points exist on the occlusion boundary, add the normal vectors V_j and V_j+1 of the two sub-regions separated by the line between the corner point and the selected lower adjacent boundary point in its eight-neighborhood, obtaining a new vector V_add, as shown in Fig. 3(c). Add the negative direction of this vector, together with the negative directions of the normal vectors of the two sub-regions, to the candidate observation direction set V_candidate. Process all corner points in turn and deduplicate V_candidate; this completes the determination of the candidate observation direction set V_candidate.
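The bookkeeping of steps 4.2a through 4.2c can be sketched as follows, assuming the sub-region partition by corner points is already given as lists of facet normals (the geometric partitioning itself is omitted; all names are illustrative):

```python
import numpy as np

def candidate_directions(subregions, corner_pairs=()):
    """subregions: one list of facet normal vectors per sub-region.
    corner_pairs: index pairs (j, k) of sub-regions split by a corner point.
    Returns the deduplicated candidate observation direction set."""
    # step 4.2a: sub-region normal = sum of its facet normals
    region_normals = [np.sum(np.asarray(ns, float), axis=0) for ns in subregions]
    # step 4.2b: negated sub-region normals are candidates
    candidates = [-v for v in region_normals]
    # step 4.2c: for each corner, also negate the sum of the two split regions
    for j, k in corner_pairs:
        candidates.append(-(region_normals[j] + region_normals[k]))
    # deduplicate the candidate set
    uniq = []
    for c in candidates:
        if not any(np.allclose(c, u) for u in uniq):
            uniq.append(c)
    return uniq
```

Two sub-regions split by one corner thus yield three candidate directions.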
5. Determine the next best observation position:
5.1. For the j-th candidate observation direction V_j in the set V_candidate, compute according to formula (5) its angle α with the normal vector Normal_i of the i-th triangular facet in a sub-region, where the coordinates of V_j and Normal_i are (n_j, m_j, t_j) and (x_i, y_i, z_i) respectively: α = arccos((n_j·x_i + m_j·y_i + t_j·z_i) / (|V_j| · |Normal_i|)).
Judge from the size of α whether the candidate observation direction can observe the facet corresponding to Normal_i. If α ≤ 90°, the candidate direction V_j can observe that facet, and the facet area Squa_i is added to the visible space S_j of V_j; if α > 90°, V_j cannot observe the facet and its visible space is unchanged. The accumulation formula (6) is: Sum_i = Sum_{i-1} + Squa_i if α ≤ 90°, and Sum_i = Sum_{i-1} otherwise,
where Sum_{i-1} is the accumulated area after the current candidate direction has been checked against the previous facet. Traversing the triangular facets of all sub-regions according to formulas (5) and (6) yields the visible space S_j of the candidate observation direction V_j.
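Formulas (5) and (6) reduce to accumulating the areas of facets whose normals make an angle of at most 90° with the candidate direction, i.e. a non-negative dot product; a minimal sketch (names illustrative):

```python
import numpy as np

def visible_space(V_j, facet_normals, facet_areas):
    """Accumulate the visible space S_j of candidate direction V_j:
    a facet counts when the angle between V_j and its normal is at most
    90 degrees, i.e. when their dot product is non-negative."""
    S = 0.0
    for n, a in zip(facet_normals, facet_areas):
        if np.dot(V_j, n) >= 0.0:  # alpha <= 90 degrees
            S += a
    return S
```

A facet facing the candidate direction contributes its area; a facet facing away contributes nothing.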
5.2. Following step 5.1, traverse each candidate observation direction in V_candidate to obtain the visible space corresponding to every candidate direction.
5.3. After the visible space S_j of each candidate observation direction has been computed, and since the visible space of each candidate direction reflects its observation effect, the next best observation direction V_NBV is determined by assigning each candidate direction a weight w_j according to its visible space, computed by formula (7): w_j = S_j / Σ (over k = 1..n) S_k,
where n is the number of candidate observation directions. Multiply each candidate direction V_j by its weight w_j, then sum the weighted candidate directions to obtain the next best observation direction V_NBV. With (x_NBV, y_NBV, z_NBV) the coordinates of V_NBV, formula (8) is: V_NBV = Σ (over j = 1..n) w_j · V_j.
From the information already computed, calculate the observation center point P_view. With (x_view, y_view, z_view) the coordinates of P_view, N the number of triangular facets, and (x_mid, y_mid, z_mid) the coordinates of the center point Mid of each facet, P_view is the mean of the facet centers: P_view = (1/N) Σ Mid.
The center point Mid of a triangular facet, with coordinates (x_mid, y_mid, z_mid), is the mean of its three vertices: x_mid = (x_a + x_b + x_c)/3, y_mid = (y_a + y_b + y_c)/3, z_mid = (z_a + z_b + z_c)/3.
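Formulas (7) through (11) combine into a short weighted-average computation; a sketch with illustrative names:

```python
import numpy as np

def next_best_view(candidates, visible_spaces, facet_centers):
    """Weight each candidate direction by its visible space (formula (7)),
    sum the weighted directions to get V_NBV (formula (8)), and take the
    mean facet center as the observation center P_view (formulas (9)-(11))."""
    S = np.asarray(visible_spaces, float)
    w = S / S.sum()  # w_j = S_j / sum_k S_k
    V_NBV = np.sum(w[:, None] * np.asarray(candidates, float), axis=0)
    P_view = np.mean(np.asarray(facet_centers, float), axis=0)
    return V_NBV, P_view
```

A candidate with three times the visible space of another receives three times the weight in V_NBV.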
5.4. Let d be the distance from the current camera position to the observation center point. The vector determined by the camera and the observation center point P_view is parallel to the next best observation direction V_NBV; using the property of parallel vectors, the camera observation position P_camera is determined by formula (12). The camera position is shown in Fig. 4.
Here (x_camera, y_camera, z_camera) are the coordinates of the camera observation position P_camera, and d is the distance from the current camera position to the observation center point, so that P_camera lies at distance d from P_view along the line parallel to V_NBV.
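One reading of formula (12), with the sign convention an assumption (the camera is placed so that looking along V_NBV from P_camera reaches P_view at distance d):

```python
import numpy as np

def camera_position(P_view, V_NBV, d):
    """Place the camera at distance d from P_view along the line parallel
    to V_NBV. The sign convention (camera behind P_view relative to the
    viewing direction) is an assumption, not stated by the source."""
    u = np.asarray(V_NBV, float)
    u = u / np.linalg.norm(u)              # unit viewing direction
    return np.asarray(P_view, float) - d * u
```

Looking along the +z direction toward the origin from distance 5 places the camera at (0, 0, -5).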

Claims (6)

  1. A static visual target occlusion avoidance method based on depth image occlusion information, characterized in that the method mainly comprises the following steps:
    (1) acquiring a depth image of the visual target, and obtaining its occlusion boundaries and the camera's intrinsic and extrinsic parameters;
    (2) extracting the lower adjacent boundary point of each occlusion boundary point in the depth image, and determining the three-dimensional coordinates of each pixel in the image;
    (3) modeling the outer surface of the occluded region from the occlusion boundary and lower adjacent boundary information:
    3a) for each segment of the occlusion boundary, obtaining the corresponding occluded region from the three-dimensional coordinates of its occlusion boundary points and lower adjacent boundary points, and triangulating the region to obtain a triangular mesh model;
    3b) based on the triangular mesh model of the occluded region, computing the normal vector and area of each triangular facet;
    (4) extracting the corner points of the occlusion boundary, and determining the set of candidate observation directions:
    4a) applying a corner detection operator to each segment of the occlusion boundary to extract its corner points;
    4b) dividing the occluded region into several sub-regions according to the obtained corner information, and determining the candidate observation direction of each sub-region;
    (5) determining the next best observation position:
    5a) taking any candidate observation direction from the candidate set, computing the angle between it and the normal vector of each triangular facet in every sub-region, and determining the visible space of that candidate direction from the angle information;
    5b) traversing all candidate observation directions in the set according to step 5a), and computing the visible space of each candidate direction;
    5c) computing the weight of each candidate observation direction, and using the weighted candidate directions to determine the next best observation direction V_NBV and the observation center point P_view;
    5d) from the obtained next best observation direction and observation center point, determining the camera observation position P_camera.
  2. The static visual target occlusion avoidance method based on depth image occlusion information according to claim 1, characterized in that extracting the lower adjacent boundary point of each occlusion boundary point in the depth image and determining the three-dimensional coordinates of each pixel in the image as described in step (2) comprises:
    2a) extracting the lower adjacent boundary point of each occlusion boundary point in the depth image as the eight-neighborhood point maximizing the depth difference Depth(x, y) - Depth(i, j), where (i, j) are the coordinates of the occlusion boundary point, (x, y) are the coordinates of an adjacent pixel in its eight-neighborhood, Depth(i, j) is the depth value of the occlusion boundary point, and Depth(x, y) is the depth value of the point (x, y) in the eight-neighborhood;
    2b) using the camera's intrinsic and extrinsic parameters, back-projecting each pixel to obtain its three-dimensional coordinates.
  3. The static visual target occlusion avoidance method based on depth image occlusion information according to claim 2, characterized in that in step 3b), the normal vector and area of each triangular facet are computed from the triangular mesh model of the occluded region as Normal_i = V_AB × V_AC and Squa_i = |V_AB × V_AC| / 2, where Normal_i and Squa_i are the normal vector and area of a triangular facet, (x_a, y_a, z_a), (x_b, y_b, z_b), (x_c, y_c, z_c) are the coordinates of the facet vertices A, B, C, and V_AB and V_AC are the vectors from A to B and from A to C respectively.
  4. The static visual target occlusion avoidance method based on depth image occlusion information according to claim 3, characterized in that in step 4b), dividing the occluded region into several sub-regions according to the obtained corner information and determining the candidate observation direction of each sub-region comprises the following steps:
    4b1) taking the line in three-dimensional space between each corner point and any lower adjacent boundary point in its eight-neighborhood as a dividing line, splitting the occluded region into several sub-regions, and for each sub-region taking the sum of the normal vectors of its triangular facets as the sub-region's normal vector, V_j = Σ (over i in triangle) Normal_i, where Normal_i is the normal vector of a triangular facet in the sub-region, triangle is the set of triangular facets in the sub-region, and V_j is the sub-region's normal vector;
    4b2) traversing each segment of the occlusion boundary and detecting whether it contains corner points; if there are none, merging all triangular facets of that occlusion boundary's region into a single sub-region, processing it according to step 4b1), and adding the negative direction of the resulting sub-region normal vector to the candidate observation direction set as a candidate observation direction;
    4b3) if corner points exist on the occlusion boundary, adding the normal vectors of the two sub-regions separated by the line between the corner point and the selected lower adjacent boundary point in its eight-neighborhood to obtain a new vector, adding the negative direction of this vector together with the negative directions of the two sub-region normal vectors to the candidate observation direction set, processing all corner points in turn, and deduplicating identical elements of the set; this completes the determination of the candidate observation direction set.
  5. The static visual target occlusion avoidance method based on depth image occlusion information according to claim 4, characterized in that in step 5a), taking any candidate observation direction from the candidate set, computing the angle between it and the normal vector of each triangular facet in every sub-region, and determining the visible space of that candidate direction from the angle information comprises the following steps:
    5a1) denoting the coordinates of V_j and Normal_i as (n_j, m_j, t_j) and (x_i, y_i, z_i), and computing for the j-th candidate observation direction in the set its angle α with the normal vector of each triangular facet: α = arccos((n_j·x_i + m_j·y_i + t_j·z_i) / (|V_j| · |Normal_i|));
    5a2) denoting Squa_i as the area of the i-th triangular facet in the sub-region and Sum_{i-1} as the accumulated area after the current candidate direction has been checked against the previous facet, determining the visible space S_j of V_j from the computed α: Sum_i = Sum_{i-1} + Squa_i if α ≤ 90°, and Sum_i = Sum_{i-1} otherwise;
    5a3) traversing the triangular facets in all sub-regions according to steps 5a1) and 5a2) to obtain the visible space corresponding to the candidate observation direction.
  6. The static visual target occlusion avoidance method based on depth image occlusion information according to claim 5, characterized in that in step 5c), computing the weight of each candidate observation direction and using the weighted candidate directions to determine the next best observation direction V_NBV and the observation center point P_view comprises the following steps:
    5c1) computing the weight w_j of each candidate observation direction, w_j = S_j / Σ (over k = 1..n) S_k, where n is the number of candidate observation directions;
    5c2) determining the next best observation direction V_NBV = Σ (over j = 1..n) w_j · V_j;
    5c3) from the information computed in steps 5c1) and 5c2), calculating the observation center point P_view, where (x_view, y_view, z_view) are the coordinates of P_view, N is the number of triangular facets, and (x_mid, y_mid, z_mid) are the coordinates of the center point Mid of each facet: P_view = (1/N) Σ Mid, with the coordinates (x_mid, y_mid, z_mid) of a facet's center point Mid computed as the mean of the facet's three vertex coordinates.
CN201510053316.1A 2015-02-02 2015-02-02 Static vision target occlusion bypassing method based on depth image block information Active CN104657985B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510053316.1A CN104657985B (en) 2015-02-02 2015-02-02 Static vision target occlusion bypassing method based on depth image block information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510053316.1A CN104657985B (en) 2015-02-02 2015-02-02 Static vision target occlusion bypassing method based on depth image block information

Publications (2)

Publication Number Publication Date
CN104657985A CN104657985A (en) 2015-05-27
CN104657985B true CN104657985B (en) 2018-07-03

Family

ID=53249061

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510053316.1A Active CN104657985B (en) 2015-02-02 2015-02-02 Static vision target occlusion bypassing method based on depth image block information

Country Status (1)

Country Link
CN (1) CN104657985B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110415294A (en) * 2018-04-28 2019-11-05 中移(苏州)软件技术有限公司 A kind of method and device determining next best observed bearing
CN110954013B (en) * 2018-09-26 2023-02-07 深圳中科飞测科技股份有限公司 Detection method and detection system thereof
CN109900272B (en) * 2019-02-25 2021-07-13 浙江大学 Visual positioning and mapping method and device and electronic equipment
CN112747818B (en) * 2019-11-11 2022-11-04 中建大成绿色智能科技(北京)有限责任公司 Blocked visual angle measuring platform, method and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102496184A (en) * 2011-12-12 2012-06-13 南京大学 Increment three-dimensional reconstruction method based on bayes and facial model
CN103810700A (en) * 2014-01-14 2014-05-21 燕山大学 Method for determining next optimal observation orientation by occlusion information based on depth image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2392229B1 (en) * 2010-08-27 2013-10-16 Telefónica, S.A. METHOD OF GENERATING A MODEL OF A FLAT OBJECT FROM VIEWS OF THE OBJECT.

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102496184A (en) * 2011-12-12 2012-06-13 南京大学 Increment three-dimensional reconstruction method based on bayes and facial model
CN103810700A (en) * 2014-01-14 2014-05-21 燕山大学 Method for determining next optimal observation orientation by occlusion information based on depth image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A self-occlusion detection method based on depth images; 张世辉, 张煜婕, 孔令富; Journal of Chinese Computer Systems (《小型微型计算机系统》); May 2010; vol. 31, no. 5; pp. 964-968 *
An optimal observation orientation algorithm based on occluded regions; 曾丽丽, 田青; Journal of Sichuan Ordnance (《四川兵工学报》); Feb. 2010; vol. 31, no. 2; p. 126 *
Occlusion detection using random forests based on depth images; 张世辉, 刘建新, 孔令富; Acta Optica Sinica (《光学学报》); Sep. 2014; vol. 34, no. 9; pp. 1-12 *

Also Published As

Publication number Publication date
CN104657985A (en) 2015-05-27

Similar Documents

Publication Publication Date Title
CN109272537B (en) Panoramic point cloud registration method based on structured light
CN106826833B (en) Autonomous navigation robot system based on 3D (three-dimensional) stereoscopic perception technology
Tomasi et al. Shape and motion from image streams: a factorization method.
CN106940704A (en) A kind of localization method and device based on grating map
CN104657985B (en) Static vision target occlusion bypassing method based on depth image block information
Xiao et al. 3D point cloud registration based on planar surfaces
Yue et al. Fast 3D modeling in complex environments using a single Kinect sensor
CN104463899A (en) Target object detecting and monitoring method and device
CN105005964A (en) Video sequence image based method for rapidly generating panorama of geographic scene
CN109410330A (en) One kind being based on BIM technology unmanned plane modeling method
CN109523595A (en) A kind of architectural engineering straight line corner angle spacing vision measuring method
CN101794459A (en) Seamless integration method of stereoscopic vision image and three-dimensional virtual object
CN108133496A (en) A kind of dense map creating method based on g2o Yu random fern
CN112102342B (en) Plane contour recognition method, plane contour recognition device, computer equipment and storage medium
CN107860390A (en) The nonholonomic mobile robot of view-based access control model ROS systems remotely pinpoints auto-navigation method
Pi et al. Stereo visual SLAM system in underwater environment
Dani et al. Image moments for higher-level feature based navigation
EP3825804A1 (en) Map construction method, apparatus, storage medium and electronic device
Boerner et al. Brute force matching between camera shots and synthetic images from point clouds
CN115578460A (en) Robot grabbing method and system based on multi-modal feature extraction and dense prediction
CN103810700A (en) Method for determining next optimal observation orientation by occlusion information based on depth image
JP3512919B2 (en) Apparatus and method for restoring object shape / camera viewpoint movement
CN106959101A (en) A kind of indoor orientation method based on optical flow method
Kitayama et al. 3D map construction based on structure from motion using stereo vision
Wang et al. RGB-D visual odometry with point and line features in dynamic environment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210210

Address after: 066000 82 Longhai Road, Qinhuangdao Economic and Technological Development Zone, Hebei Province

Patentee after: HENGYE CENTURY SECURITY TECHNOLOGY Co.,Ltd.

Address before: 066004 No. 438 west section of Hebei Avenue, seaport District, Hebei, Qinhuangdao

Patentee before: Yanshan University

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Static visual object occlusion avoidance method based on depth image occlusion information

Effective date of registration: 20220321

Granted publication date: 20180703

Pledgee: Cangzhou Bank Co.,Ltd. Qinhuangdao branch

Pledgor: HENGYE CENTURY SECURITY TECHNOLOGY CO.,LTD.

Registration number: Y2022980002848

PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20230314

Granted publication date: 20180703

Pledgee: Cangzhou Bank Co.,Ltd. Qinhuangdao branch

Pledgor: HENGYE CENTURY SECURITY TECHNOLOGY CO.,LTD.

Registration number: Y2022980002848
