CN105741271A - Method for detecting object in depth image - Google Patents

Method for detecting object in depth image

Info

Publication number
CN105741271A
CN105741271A (application CN201610049077.7A)
Authority
CN
China
Prior art keywords
node
pixel
data
judged
child
Prior art date
Legal status
Granted
Application number
CN201610049077.7A
Other languages
Chinese (zh)
Other versions
CN105741271B (en)
Inventor
曲磊
邱云周
皮家甜
王康如
张力
杨旭光
王营冠
郑春雷
Current Assignee
SHANGHAI INTERNET OF THINGS CO Ltd
Original Assignee
SHANGHAI INTERNET OF THINGS CO Ltd
Priority date
Filing date
Publication date
Application filed by SHANGHAI INTERNET OF THINGS CO Ltd
Priority to CN201610049077.7A
Publication of CN105741271A
Application granted
Publication of CN105741271B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Abstract

The present invention relates to a method for detecting an object in a depth image. The method comprises the following steps: for an input depth image, establishing a depth tree model based on a multi-way tree data structure together with a map between depth tree nodes and depth image pixels; and traversing each node of the depth tree model, performing statistics and analysis on the node data, and judging the node attribute, so as to obtain the rectangular region where an object is located. With the method provided by the present invention, the rectangular region of an object can be obtained quickly and accurately and the accuracy of object detection in a depth image is improved, so that the detection result not only satisfies human visual requirements but also meets the demands for speed and precision in practical applications.

Description

Method for detecting an object in a depth image
Technical field
The present invention relates to object detection technology in the field of computer vision, and in particular to a method for detecting an object in a depth image.
Background technology
Object detection has always been a fundamental problem in computer vision, and a difficult one; obtaining object regions that are as few as possible and independent of object category is the core of object detection. Traditional methods are mostly based on rectangular boxes, i.e. a series of rectangular boxes is used to frame the objects in the image as tightly as possible. Such methods are widely used on conventional two-dimensional images and achieve good detection precision while maintaining speed, but they suffer from problems such as generating too many candidate boxes and candidate boxes that fit the objects loosely. Another class of methods is based on region segmentation, i.e. object regions with concrete physical meaning are segmented out of the image; the precision of such methods is generally higher, but the algorithmic complexity is high and the speed is usually slow. With the development of depth sensors and binocular stereo vision, using depth images to improve the speed and precision of object detection has become a research hotspot. For practical applications, a method for detecting objects in a depth image that balances speed and precision is urgently needed.
Summary of the invention
The technical problem to be solved by the present invention is to provide a method for detecting an object in a depth image which can obtain the rectangular region of an object quickly and accurately and improve the accuracy of object detection in depth images, so that the detection result both satisfies human visual requirements and meets the demands for speed and precision in practical applications.
The technical solution adopted by the present invention to solve this technical problem is to provide a method for detecting an object in a depth image, comprising the following steps:
(1) for an input depth image, establishing a depth tree model based on a multi-way tree data structure and a map between depth tree nodes and depth image pixels;
(2) traversing each node of the depth tree model, performing statistics and analysis on the node data, and judging the node attribute, so as to obtain the rectangular region where the object is located.
Said step (1) specifically includes the following sub-steps:
(11) creating and initializing the depth tree root node and the map, and saving the root node position in the map;
(12) comparing the input depth image pixel by pixel, and judging whether the depth value of the current pixel is a maximum within its half neighborhood;
(13) if the current pixel is a maximum, creating a new node; otherwise inserting the pixel into the node of one of the pixels in the half neighborhood;
(14) saving the node position in the map, and repeating step (12).
In said step (12) the input depth image is compared pixel by pixel from left to right and from bottom to top. During the comparison, if the pixel is the lower-left corner of the depth image and its depth value is the minimum, it is stored directly into the root node and the node position is recorded in the map; otherwise it is judged whether the pixel is a maximum within its half neighborhood. If the pixel lies on the bottom edge of the depth image, it is compared with the left pixel to judge whether it is a maximum within the half neighborhood; if the pixel lies on the left edge of the depth image, it is compared with the lower pixel and the lower-right pixel to judge whether it is a maximum within the half neighborhood; for any other pixel of the depth image, it is compared with the left, lower-left, lower and lower-right pixels to judge whether it is a maximum within the half neighborhood.
Creating a new node in said step (13) includes the following sub-steps: merging, pair by pair, the subtrees containing the other pixels of the half neighborhood; confirming the node of the pixel with the larger depth value in the half neighborhood as the father of the node to be inserted, creating a new leaf node, adjusting the father, sibling and child links, saving the pixel position, and initializing the data used for statistical analysis; and returning the position of the node of the pixel with the larger depth value.
Inserting the pixel into the node of one of the pixels of the half neighborhood in said step (13) includes the following sub-steps: merging, pair by pair, the subtrees containing the other pixels of the half neighborhood; if the depth value of the pixel is the same as that of a pixel in the half neighborhood, confirming the node of that pixel as the node to be inserted into, saving the pixel position, updating the data used for statistical analysis, and returning the node position of that pixel; if the depth value of the pixel equals the depth value of the father of the node of a pixel in the half neighborhood, confirming that father node as the node to be inserted into, saving the pixel position, updating the data used for statistical analysis, and returning the node position of that pixel; if the depth value of the pixel lies between the depth values of the father node and its child node for a pixel in the half neighborhood, confirming that father node as the father of the node to be inserted, creating a new child node, adjusting the father, sibling and child links, saving the pixel position, initializing the data used for statistical analysis, and returning the node position of that pixel.
Merging, pair by pair, the subtrees containing the other pixels of the half neighborhood is specifically as follows: if the two pixels belong to the same node, the merge is complete; if the two pixels belong to different nodes but their depth values are the same, the point sets of the two nodes are merged, the father, sibling and child links of the two nodes are adjusted, the data used for statistical analysis and the map are adjusted, the discarded object is deleted, and the merge is complete; if the two pixels belong to different nodes with different depth values, and the depth value of the father of the deeper node is still greater than or equal to the depth value of the shallower node, the merging step is performed on the father of the deeper node and the shallower node; if the two pixels belong to different nodes with different depth values, and the depth value of the father of the deeper node is less than the depth value of the shallower node, the father, sibling and child links of the two nodes are adjusted, and the merge is complete.
Said step (2) specifically includes the following sub-steps:
(21) if the current node is a leaf node, updating the current node data, judging the current node as non-object, and returning the current node data;
(22) if the current node is not a leaf node, traversing its child nodes, obtaining the child node data, updating the current node data, and performing statistical analysis on the current node data;
(23) if the branch number in the current node data is less than or equal to a threshold, or the ratio of the child number to the branch number is greater than a threshold, judging the current node as non-object and returning the current node data;
(24) if the branch number in the current node data is greater than the threshold, the ratio of the child number to the branch number is less than or equal to the threshold, and the aspect ratio of the rectangle in the current node data is greater than a threshold, judging the current node as an object, this node being the root of the subtree and the rectangle in its data being the rectangular region where the object is located, and returning the current node data;
(25) if the branch number in the current node data is greater than the threshold, the ratio of the child number to the branch number is less than or equal to the threshold, and the aspect ratio of the rectangle in the current node data is less than or equal to the threshold, searching whether any of its child nodes has been judged to be an object or to contain an object;
(26) if no child node has been judged to be an object or to contain an object, judging the current node as non-object and returning the current node data;
(27) if some child node has been judged to contain an object but no child node has been judged to be an object, judging the current node as containing an object and returning the current node data;
(28) if some child nodes have been judged to be objects and the rectangles in their node data overlap, judging the current node as containing an object, creating a new child node, taking the current node as the father of the node to be inserted, taking the nodes judged to be objects whose rectangles overlap as the children of the node to be inserted, adjusting the father, sibling and child links, adjusting the node data used for statistical analysis, judging the new child node as an object, this node being the root of the subtree and the rectangle in its data being the rectangular region where the object is located, and returning the father node data;
(29) if some child nodes have been judged to be objects and the rectangles in their node data do not overlap, judging the current node as containing an object and returning the current node data.
Beneficial effects
By adopting the above technical solution, the present invention has the following advantages and positive effects compared with the prior art. The present invention applies a multi-way tree model to object detection in depth images and converts object detection in a depth image into the analysis of the nodes of a depth tree, proposing a brand-new method that integrates rectangle-based and region-segmentation techniques and providing a new solution for fast object detection in depth images. The tree fully exploits and preserves the position and structure information of the objects in the scene and converts the depth image scene into statistical data that a computer can understand; each node fully collects and preserves the information of its child nodes, so that global and local information can be combined for logical decisions. The present invention places low requirements on the quality of the depth image, thereby relaxing the precision required of stereo matching in binocular stereo vision, and therefore has wide adaptability and broad application prospects. Through the traversal and analysis of the depth tree data, multiple objects can be detected quickly, accurately and efficiently, improving the speed and precision of object detection in depth images, so that the detection result both satisfies human visual requirements and meets the demands for speed and precision in practical applications.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 is the flow chart of establishing the depth tree model based on the multi-way tree data structure and the map between depth tree nodes and image pixels in the present invention;
Fig. 3 is the flow chart of creating a node when establishing the depth tree model based on the multi-way tree data structure and the map between depth tree nodes and image pixels in the present invention;
Fig. 4 is the flow chart of inserting a node when establishing the depth tree model based on the multi-way tree data structure and the map between depth tree nodes and image pixels in the present invention;
Fig. 5 is the flow chart of merging, pair by pair, the subtrees containing the other pixels of the half neighborhood in the present invention;
Fig. 6 is the flow chart of traversing each node of the depth tree, performing statistics and analysis on the node data and judging the node attribute so as to obtain the rectangular region where the object is located, in the present invention;
Fig. 7, Fig. 8 and Fig. 9 are schematic diagrams of the experimental results of the present invention, in which (a) is the original image and (b) is the detection result.
Detailed description of the invention
The present invention is further described below with reference to specific embodiments. It should be understood that these embodiments are only intended to illustrate the present invention and not to limit its scope. In addition, it should be understood that, after reading the content taught by the present invention, those skilled in the art may make various changes or modifications to the present invention, and such equivalent forms likewise fall within the scope defined by the claims appended to this application.
An embodiment of the present invention relates to a method for detecting an object in a depth image which, as shown in Fig. 1, comprises the following steps (the data structures involved are sketched immediately after the list):
(1) for an input depth image, establishing a depth tree model based on a multi-way tree data structure and a map between depth tree nodes and depth image pixels;
(2) traversing each node of the depth tree, performing statistics and analysis on the node data, and judging the node attribute, so as to obtain the rectangular region where the object is located.
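Purely by way of illustration, and not as a limitation of the invention, the depth tree node and the node-to-pixel map used in the following sub-steps can be pictured with the C/C++ sketch below (C/C++ being the language later used for the simulation experiment); every type and member name here is an assumption of this sketch, not a term defined by the invention.

    // Minimal sketch (assumed names) of a depth-tree node and the node-to-pixel map.
    #include <cstdint>
    #include <utility>
    #include <vector>

    struct DepthNode {
        uint16_t depth = 0;                         // depth value shared by the pixels of this node
        DepthNode* parent = nullptr;                // father node
        std::vector<DepthNode*> children;           // child nodes (multi-way tree)
        std::vector<std::pair<int, int>> pixels;    // (x, y) pixel positions stored in this node

        // data kept for the statistical analysis of step (2)
        int branchCount = 0;                        // number of branches below this node
        int childCount  = 0;                        // number of descendant nodes
        int minX = 0, minY = 0, maxX = 0, maxY = 0; // bounding rectangle of the subtree

        enum class Attr { Unknown, NonObject, Object, ContainsObject };
        Attr attr = Attr::Unknown;                  // attribute judged during the traversal of step (2)
    };

    // Node-to-pixel map: for every image pixel, the node that pixel was stored into,
    // indexed as nodeMap[y * width + x].
    using NodeMap = std::vector<DepthNode*>;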
As shown in Fig. 2, said step (1) further includes the following sub-steps:
(21) creating and initializing the depth tree root node and the map, and saving this node position in the map;
(22) comparing the input depth image pixel by pixel from left to right and from bottom to top, and judging whether the depth value of the current pixel is a maximum within its half neighborhood;
(23) if the current pixel is a maximum, creating a new node;
(24) if the current pixel is not a maximum, inserting it into the node of one of the pixels in the half neighborhood;
(25) saving this node position in the map, and repeating step (22).
Wherein, said step (22) further includes the following sub-steps (an illustrative sketch follows the list):
(31) if the pixel is the lower-left corner of the depth image and its depth value is the minimum, storing it directly into the root node and performing step (25); otherwise judging whether it is a maximum within the half neighborhood;
(32) if the pixel lies on the bottom edge of the depth image, comparing it with the left pixel to judge whether it is a maximum within the half neighborhood;
(33) if the pixel lies on the left edge of the depth image, comparing it with the lower pixel and the lower-right pixel to judge whether it is a maximum within the half neighborhood;
(34) for any other pixel of the depth image, comparing it with the left, lower-left, lower and lower-right pixels to judge whether it is a maximum within the half neighborhood.
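By way of illustration only, the four comparison cases of sub-steps (31)-(34) can be sketched as follows, assuming the depth image is stored with y = 0 as its bottom row so that the scan proceeds from left to right and from bottom to top; the function and variable names are assumptions of this sketch.

    // Sketch of the half-neighborhood maximum test of sub-steps (31)-(34).
    // Because pixels are visited left-to-right and bottom-up, the half neighborhood
    // of (x, y) contains only neighbors that were already visited: the left,
    // lower-left, lower and lower-right pixels (y = 0 is assumed to be the bottom row).
    #include <cstdint>
    #include <vector>

    bool isHalfNeighborhoodMaximum(const std::vector<uint16_t>& depth,
                                   int width, int x, int y) {
        auto d = [&](int px, int py) { return depth[py * width + px]; };
        const uint16_t v = d(x, y);

        if (x == 0 && y == 0)   // lower-left corner: handled separately as the root case (31)
            return false;
        if (y == 0)             // (32) bottom edge: compare with the left pixel only
            return v > d(x - 1, y);
        if (x == 0)             // (33) left edge: compare with the lower and lower-right pixels
            return v > d(x, y - 1) && v > d(x + 1, y - 1);

        // (34) any other pixel: left, lower-left, lower and (if it exists) lower-right
        bool isMax = v > d(x - 1, y) && v > d(x - 1, y - 1) && v > d(x, y - 1);
        if (x + 1 < width)
            isMax = isMax && v > d(x + 1, y - 1);
        return isMax;
    }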
As shown in Fig. 3, said step (23) further includes the following sub-steps (a sketch of sub-step (42) follows the list):
(41) merging, pair by pair, the subtrees containing the other pixels of the half neighborhood;
(42) confirming the node of the pixel with the larger depth value in the half neighborhood as the father of the node to be inserted, creating a new leaf node, adjusting the father, sibling and child links, saving the pixel position, initializing the data used for statistical analysis, and returning this node position.
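A minimal sketch of sub-step (42), attaching the new leaf under the half-neighborhood node with the largest depth value, is given below; it builds on the DepthNode structure sketched earlier and all names are illustrative assumptions.

    // Sketch of sub-step (42): the current pixel is a half-neighborhood maximum,
    // so a new leaf node is created under the neighbor node with the largest depth.
    DepthNode* createLeafUnder(DepthNode* parent, int x, int y, uint16_t depthValue) {
        DepthNode* leaf = new DepthNode();
        leaf->depth = depthValue;
        leaf->parent = parent;              // adjust the father link
        leaf->pixels.push_back({x, y});     // save the pixel position
        leaf->minX = leaf->maxX = x;        // initialize the data used for statistics
        leaf->minY = leaf->maxY = y;
        parent->children.push_back(leaf);   // adjust the child (and hence sibling) links
        return leaf;                        // the caller records this node in the map
    }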
As shown in Fig. 4, said step (24) further includes the following sub-steps (an illustrative sketch follows the list):
(51) merging, pair by pair, the subtrees containing the other pixels of the half neighborhood;
(52) if the depth value of the pixel is the same as that of a pixel in the half neighborhood, confirming the node of that pixel as the node to be inserted into, saving the pixel position, updating the data used for statistical analysis, and returning this node position;
(53) if the depth value of the pixel equals the depth value of the father of the node of a pixel in the half neighborhood, confirming that father node as the node to be inserted into, saving the pixel position, updating the data used for statistical analysis, and returning this node position;
(54) if the depth value of the pixel lies between the depth values of the father node and its child node for a pixel in the half neighborhood, confirming that father node as the father of the node to be inserted, creating a new child node, adjusting the father, sibling and child links, saving the pixel position, initializing the data used for statistical analysis, and returning this node position.
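The three insertion cases of sub-steps (52)-(54) could be sketched roughly as follows, again building on the DepthNode structure given earlier. The walk towards the root and the re-parenting in the last case are one possible reading of adjusting the father, sibling and child links; they are assumptions of this illustration rather than details fixed by the text, and the names are invented for the sketch.

    // Sketch of sub-steps (52)-(54): inserting a non-maximum pixel into the tree.
    // 'neighbor' is the node of a half-neighborhood pixel whose depth value is at
    // least the current pixel's depth value.
    #include <algorithm>

    DepthNode* insertPixel(DepthNode* neighbor, int x, int y, uint16_t depthValue) {
        DepthNode* node = neighbor;
        // Walk towards the root until one of the three cases applies.
        while (node->parent != nullptr && depthValue < node->parent->depth)
            node = node->parent;

        if (depthValue == node->depth || node->parent == nullptr) {
            // (52) same depth as the node (or the root was reached): store the pixel here.
            node->pixels.push_back({x, y});
            return node;
        }
        if (depthValue == node->parent->depth) {
            // (53) same depth as the node's father: store the pixel in the father.
            node->parent->pixels.push_back({x, y});
            return node->parent;
        }
        // (54) depth lies strictly between the father's and the node's depth values:
        // create a new node under the father and move the node below it.
        DepthNode* fresh = new DepthNode();
        fresh->depth = depthValue;
        fresh->pixels.push_back({x, y});
        fresh->minX = fresh->maxX = x;
        fresh->minY = fresh->maxY = y;

        DepthNode* father = node->parent;
        fresh->parent = father;
        father->children.push_back(fresh);
        auto& sib = father->children;                          // re-parent 'node' under the
        sib.erase(std::find(sib.begin(), sib.end(), node));    // new node (assumed reading of
        node->parent = fresh;                                  // "adjust the father, sibling
        fresh->children.push_back(node);                       // and child links")
        return fresh;
    }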
As shown in Fig. 5, said steps (41) and (51) further include the following sub-steps (an illustrative sketch follows the list):
(61) if the two pixels belong to the same node, the merge is complete;
(62) if the two pixels belong to different nodes but their depth values are the same, merging the point sets of the two nodes, adjusting the father, sibling and child links of the two nodes, adjusting the data used for statistical analysis, adjusting the map, deleting the discarded object, and completing the merge;
(63) if the two pixels belong to different nodes with different depth values, and the depth value of the father of the deeper node is still greater than or equal to the depth value of the shallower node, performing merging step (41) on the father of the deeper node and the shallower node;
(64) if the two pixels belong to different nodes with different depth values, and the depth value of the father of the deeper node is less than the depth value of the shallower node, adjusting the father, sibling and child links of the two nodes, and completing the merge.
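An illustrative sketch of the four merge cases (61)-(64), building on the DepthNode and NodeMap sketches given earlier, is shown below. The link adjustment chosen for case (64) and the bookkeeping details of case (62) are assumptions of this sketch; only the bounding rectangle is folded among the statistics, and all names are invented for the illustration.

    // Sketch of sub-steps (61)-(64): pairwise merge of the subtrees that contain
    // two half-neighborhood pixels. The root holds the globally smallest depth
    // value, so the deeper of two distinct nodes is assumed to have a father.
    #include <algorithm>

    // Case (62) bookkeeping: 'keep' absorbs 'drop' (both have the same depth value).
    void absorb(DepthNode* keep, DepthNode* drop, NodeMap& nodeMap, int width) {
        for (const auto& p : drop->pixels) {             // merge the point sets and
            nodeMap[p.second * width + p.first] = keep;  // re-point the map entries
            keep->pixels.push_back(p);
        }
        for (DepthNode* c : drop->children) {            // adopt the children
            c->parent = keep;
            keep->children.push_back(c);
        }
        keep->minX = std::min(keep->minX, drop->minX);   // fold the bounding rectangle
        keep->minY = std::min(keep->minY, drop->minY);   // (other statistics omitted here)
        keep->maxX = std::max(keep->maxX, drop->maxX);
        keep->maxY = std::max(keep->maxY, drop->maxY);
        if (drop->parent) {                              // unlink from the old father
            auto& sib = drop->parent->children;
            sib.erase(std::find(sib.begin(), sib.end(), drop));
        }
        delete drop;                                     // delete the discarded node
    }

    void mergePair(DepthNode* a, DepthNode* b, NodeMap& nodeMap, int width) {
        if (a == b)                                      // (61) already the same node
            return;
        if (a->depth == b->depth) {                      // (62) equal depth: fuse the nodes
            absorb(a, b, nodeMap, width);
            return;
        }
        DepthNode* big   = (a->depth > b->depth) ? a : b;  // deeper node
        DepthNode* small = (big == a) ? b : a;              // shallower node
        if (big->parent->depth >= small->depth) {
            // (63) the deeper node's father is still at least as deep as the
            // shallower node: repeat the merge one level up.
            mergePair(big->parent, small, nodeMap, width);
            return;
        }
        // (64) the deeper node's father is shallower than the shallower node:
        // hang 'big' below 'small' (assumed reading of "adjust the father,
        // sibling and child links of the two nodes").
        auto& sib = big->parent->children;
        sib.erase(std::find(sib.begin(), sib.end(), big));
        big->parent = small;
        small->children.push_back(big);
    }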
As shown in Fig. 6, said step (2) further includes the following sub-steps (an illustrative sketch follows the list):
(71) if the current node is a leaf node, updating the current node data, judging the current node as non-object, and returning the current node data;
(72) if the current node is not a leaf node, traversing its child nodes, obtaining the child node data, updating the current node data, and performing statistical analysis on the current node data;
(73) if the branch number in the current node data is less than or equal to 100, or the ratio of the child number to the branch number is greater than 3, judging the current node as non-object and returning the current node data;
(74) if the branch number in the current node data is greater than 100, the ratio of the child number to the branch number is less than or equal to 3, and the aspect ratio of the rectangle in the current node data is greater than 1, judging the current node as an object, this node being the root of the subtree and the rectangle in its data being the rectangular region where the object is located, and returning the current node data;
(75) if the branch number in the current node data is greater than 100, the ratio of the child number to the branch number is less than or equal to 3, and the aspect ratio of the rectangle in the current node data is less than or equal to 1, searching whether any of its child nodes has been judged to be an object or to contain an object;
(76) if no child node has been judged to be an object or to contain an object, judging the current node as non-object and returning the current node data;
(77) if some child node has been judged to contain an object but no child node has been judged to be an object, judging the current node as containing an object and returning the current node data;
(78) if some child nodes have been judged to be objects and the rectangles in their node data overlap, judging the current node as containing an object, creating a new child node, taking the current node as the father of the node to be inserted, taking the nodes judged to be objects whose rectangles overlap as the children of the node to be inserted, adjusting the father, sibling and child links, adjusting the node data used for statistical analysis, judging this new child node as an object, this node being the root of the subtree and the rectangle in its data being the rectangular region where the object is located, and returning the father node data;
(79) if some child nodes have been judged to be objects and the rectangles in their node data do not overlap, judging the current node as containing an object and returning the current node data.
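A rough post-order sketch of decision rules (71)-(77) and (79), with the thresholds used in this embodiment (branch number 100, child-to-branch ratio 3, aspect ratio 1), is given below. It builds on the DepthNode structure sketched earlier; the regrouping of overlapping object rectangles in case (78) is omitted, how the branch and child counts are accumulated is left to the tree construction, and which side of the rectangle counts as the length is an assumption of the sketch.

    // Post-order sketch of sub-steps (71)-(77) and (79); case (78) is omitted.
    #include <algorithm>

    void classify(DepthNode* node) {
        if (node->children.empty()) {                // (71) a leaf is never an object
            node->attr = DepthNode::Attr::NonObject;
            return;
        }

        bool anyObject = false, anyContains = false;
        for (DepthNode* child : node->children) {    // (72) visit the children first and
            classify(child);                         //      fold their data into this node
            node->minX = std::min(node->minX, child->minX);
            node->minY = std::min(node->minY, child->minY);
            node->maxX = std::max(node->maxX, child->maxX);
            node->maxY = std::max(node->maxY, child->maxY);
            anyObject   |= child->attr == DepthNode::Attr::Object;
            anyContains |= child->attr == DepthNode::Attr::ContainsObject;
        }

        double ratio  = double(node->childCount) / std::max(node->branchCount, 1);
        double aspect = double(node->maxX - node->minX + 1) /
                        double(node->maxY - node->minY + 1);

        if (node->branchCount <= 100 || ratio > 3.0) {    // (73)
            node->attr = DepthNode::Attr::NonObject;
        } else if (aspect > 1.0) {                        // (74) the bounding rectangle is
            node->attr = DepthNode::Attr::Object;         //      the region of the object
        } else if (!anyObject && !anyContains) {          // (75) + (76)
            node->attr = DepthNode::Attr::NonObject;
        } else {                                          // (77) and (79); the regrouping
            node->attr = DepthNode::Attr::ContainsObject; //      of (78) is not sketched
        }
    }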
A simulation experiment was carried out to illustrate the practicality of the present invention and the accuracy of the algorithm; the experimental results are shown in Fig. 7, Fig. 8 and Fig. 9. The simulation experiment was implemented in C/C++ on a PC test platform with an Intel Core(TM)2 Duo CPU E7500 at 2.93 GHz and 4 GB of memory; without using any multithreading, the processing speed reaches 150 fps. Fig. 7(a), Fig. 8(a) and Fig. 9(a) are the original images, and Fig. 7(b), Fig. 8(b) and Fig. 9(b) are the detection results.

Claims (7)

1. A method for detecting an object in a depth image, characterized by comprising the following steps:
(1) for an input depth image, establishing a depth tree model based on a multi-way tree data structure and a map between depth tree nodes and depth image pixels;
(2) traversing each node of the depth tree model, performing statistics and analysis on the node data, and judging the node attribute, so as to obtain the rectangular region where the object is located.
2. The method for detecting an object in a depth image according to claim 1, characterized in that said step (1) specifically includes the following sub-steps:
(11) creating and initializing the depth tree root node and the map, and saving the root node position in the map;
(12) comparing the input depth image pixel by pixel, and judging whether the depth value of the current pixel is a maximum within its half neighborhood;
(13) if the current pixel is a maximum, creating a new node; otherwise inserting the pixel into the node of one of the pixels in the half neighborhood;
(14) saving the node position in the map, and repeating step (12).
3. The method for detecting an object in a depth image according to claim 1, characterized in that in said step (12) the input depth image is compared pixel by pixel from left to right and from bottom to top; during the comparison, if the pixel is the lower-left corner of the depth image and its depth value is the minimum, it is stored directly into the root node and the node position is recorded in the map, otherwise it is judged whether the pixel is a maximum within its half neighborhood; if the pixel lies on the bottom edge of the depth image, it is compared with the left pixel to judge whether it is a maximum within the half neighborhood; if the pixel lies on the left edge of the depth image, it is compared with the lower pixel and the lower-right pixel to judge whether it is a maximum within the half neighborhood; for any other pixel of the depth image, it is compared with the left, lower-left, lower and lower-right pixels to judge whether it is a maximum within the half neighborhood.
4. The method for detecting an object in a depth image according to claim 1, characterized in that creating a new node in said step (13) includes the following sub-steps: merging, pair by pair, the subtrees containing the other pixels of the half neighborhood; confirming the node of the pixel with the larger depth value in the half neighborhood as the father of the node to be inserted, creating a new leaf node, adjusting the father, sibling and child links, saving the pixel position, initializing the data used for statistical analysis, and returning the position of the node of the pixel with the larger depth value.
5. The method for detecting an object in a depth image according to claim 1, characterized in that inserting the pixel into the node of one of the pixels of the half neighborhood in said step (13) includes the following sub-steps: merging, pair by pair, the subtrees containing the other pixels of the half neighborhood; if the depth value of the pixel is the same as that of a pixel in the half neighborhood, confirming the node of that pixel as the node to be inserted into, saving the pixel position, updating the data used for statistical analysis, and returning the node position of that pixel; if the depth value of the pixel equals the depth value of the father of the node of a pixel in the half neighborhood, confirming that father node as the node to be inserted into, saving the pixel position, updating the data used for statistical analysis, and returning the node position of that pixel; if the depth value of the pixel lies between the depth values of the father node and its child node for a pixel in the half neighborhood, confirming that father node as the father of the node to be inserted, creating a new child node, adjusting the father, sibling and child links, saving the pixel position, initializing the data used for statistical analysis, and returning the node position of that pixel.
6. The method for detecting an object in a depth image according to claim 4 or 5, characterized in that merging, pair by pair, the subtrees containing the other pixels of the half neighborhood is specifically as follows: if the two pixels belong to the same node, the merge is complete; if the two pixels belong to different nodes but their depth values are the same, the point sets of the two nodes are merged, the father, sibling and child links of the two nodes are adjusted, the data used for statistical analysis and the map are adjusted, the discarded object is deleted, and the merge is complete; if the two pixels belong to different nodes with different depth values, and the depth value of the father of the deeper node is still greater than or equal to the depth value of the shallower node, the merging step is performed on the father of the deeper node and the shallower node; if the two pixels belong to different nodes with different depth values, and the depth value of the father of the deeper node is less than the depth value of the shallower node, the father, sibling and child links of the two nodes are adjusted, and the merge is complete.
7. The method for detecting an object in a depth image according to claim 1, characterized in that said step (2) specifically includes the following sub-steps:
(21) if the current node is a leaf node, updating the current node data, judging the current node as non-object, and returning the current node data;
(22) if the current node is not a leaf node, traversing its child nodes, obtaining the child node data, updating the current node data, and performing statistical analysis on the current node data;
(23) if the branch number in the current node data is less than or equal to a threshold, or the ratio of the child number to the branch number is greater than a threshold, judging the current node as non-object and returning the current node data;
(24) if the branch number in the current node data is greater than the threshold, the ratio of the child number to the branch number is less than or equal to the threshold, and the aspect ratio of the rectangle in the current node data is greater than a threshold, judging the current node as an object, this node being the root of the subtree and the rectangle in its data being the rectangular region where the object is located, and returning the current node data;
(25) if the branch number in the current node data is greater than the threshold, the ratio of the child number to the branch number is less than or equal to the threshold, and the aspect ratio of the rectangle in the current node data is less than or equal to the threshold, searching whether any of its child nodes has been judged to be an object or to contain an object;
(26) if no child node has been judged to be an object or to contain an object, judging the current node as non-object and returning the current node data;
(27) if some child node has been judged to contain an object but no child node has been judged to be an object, judging the current node as containing an object and returning the current node data;
(28) if some child nodes have been judged to be objects and the rectangles in their node data overlap, judging the current node as containing an object, creating a new child node, taking the current node as the father of the node to be inserted, taking the nodes judged to be objects whose rectangles overlap as the children of the node to be inserted, adjusting the father, sibling and child links, adjusting the node data used for statistical analysis, judging the new child node as an object, this node being the root of the subtree and the rectangle in its data being the rectangular region where the object is located, and returning the father node data;
(29) if some child nodes have been judged to be objects and the rectangles in their node data do not overlap, judging the current node as containing an object and returning the current node data.
Application CN201610049077.7A, priority date 2016-01-25, filing date 2016-01-25, status Active, granted as CN105741271B

Priority Applications (1)

Application Number: CN201610049077.7A (granted as CN105741271B); Priority Date: 2016-01-25; Filing Date: 2016-01-25; Title: Method for detecting object in depth image

Applications Claiming Priority (1)

Application Number: CN201610049077.7A (granted as CN105741271B); Priority Date: 2016-01-25; Filing Date: 2016-01-25; Title: Method for detecting object in depth image

Publications (2)

Publication Number Publication Date
CN105741271A true CN105741271A (en) 2016-07-06
CN105741271B CN105741271B (en) 2021-11-16

Family

ID=56246532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610049077.7A Active CN105741271B (en) 2016-01-25 2016-01-25 Method for detecting object in depth image

Country Status (1)

Country Link
CN (1) CN105741271B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130106852A1 (en) * 2011-11-02 2013-05-02 Ben Woodhouse Mesh generation from depth images
CN103971380A (en) * 2014-05-05 2014-08-06 中国民航大学 Pedestrian trailing detection method based on RGB-D
CN103971103A (en) * 2014-05-23 2014-08-06 西安电子科技大学宁波信息技术研究院 People counting system
CN104915952A (en) * 2015-05-15 2015-09-16 中国科学院上海微系统与信息技术研究所 Method for extracting local salient objects in depth image based on multi-way tree

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110732138A (en) * 2019-10-17 2020-01-31 腾讯科技(深圳)有限公司 Virtual object control method and device, readable storage medium and computer equipment
CN110732138B (en) * 2019-10-17 2023-09-22 腾讯科技(深圳)有限公司 Virtual object control method, device, readable storage medium and computer equipment

Also Published As

Publication number Publication date
CN105741271B (en) 2021-11-16

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant