CN110738665B - Object contact identification method based on depth image information - Google Patents

Object contact identification method based on depth image information

Info

Publication number
CN110738665B
Authority
CN
China
Prior art keywords
detected
static
depth image
moving
depth
Prior art date
Legal status
Active
Application number
CN201910875363.2A
Other languages
Chinese (zh)
Other versions
CN110738665A (en)
Inventor
陈衍
俞云松
许杭
袁玉华
周建仓
王海萍
孙璐
姜胜男
朱飞腾
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201910875363.2A priority Critical patent/CN110738665B/en
Publication of CN110738665A publication Critical patent/CN110738665A/en
Application granted granted Critical
Publication of CN110738665B publication Critical patent/CN110738665B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/215 Motion-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an object contact identification method based on depth image information. The connected domains and depth information of different moving objects, or of a moving object and a static object, are analyzed to determine unambiguously whether a contact relation exists between the objects. The method overcomes the limitation that ordinary camera footage carries only two-dimensional information, from which the contact relation between objects cannot be judged accurately. It is suitable for contact recognition between moving and static objects and, in particular, provides a fast and practical discrimination scheme under partial and complete occlusion: at low computational cost, it can accurately judge whether a moving object and a static object are in contact even when occlusion occurs.

Description

Object contact identification method based on depth image information
Technical Field
The invention relates to methods for identifying contact between objects in images, and in particular to an object contact identification method based on depth image information.
Background
In artificial-intelligence applications for video surveillance, the core problems to be handled include object segmentation, tracking, recognition, and behavior analysis. Within behavior analysis, the most fundamental requirement is to determine the relative position and contact relationship between objects, and the contact relationship is often the central decision point. Many common intelligent applications, such as limb-contact discrimination, intrusion into restricted zones, and monitoring of objects that must not be touched, rely on contact-relationship discrimination as their basis.
Ordinary single-camera surveillance video is planar two-dimensional data without depth information. Advanced artificial-intelligence techniques can perform object segmentation and positional-relation judgment to a certain extent, but in practice the inherent two-dimensional limitation of a single camera means that, even when two objects appear fully adjacent in the image plane, it is difficult to tell whether they lie at the same depth, so it is impossible to distinguish accurately whether a contact relation actually exists between them. Moreover, when the object under examination is completely occluded, it is even harder to judge whether it is really in contact with the occluding object. Even if the problem can be partially solved by applying more complex, scene-specific algorithms, this incurs a high algorithm design cost. The root cause is the lack of depth information.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide an object contact identification method based on depth image information.
The purpose of the invention is achieved by the following technical solution:
An object contact identification method based on depth image information comprises the following steps:
(1) Depth image acquisition: acquire the original depth image information with a depth image camera.
(2) Object segmentation: divide the objects into moving objects and static objects and segment each type separately, through the following substeps:
(2.1) Moving object segmentation: extract foreground moving objects using a background removal algorithm.
(2.2) Static object segmentation: segment and extract static objects using a watershed algorithm on the depth image.
(3) Identification of the objects to be detected: identify the objects to be detected among the objects segmented in step (2) by means of artificial intelligence.
(4) Contact identification of the objects to be detected, which divides into the following two cases:
(4.1) When the object to be detected is partially occluded or not occluded, contact between it and other objects is identified directly by connected-domain analysis.
(4.2) When the object to be detected is completely occluded: for a moving object to be detected, its contact relation while fully occluded is judged from the contact identification result obtained while it was still partially occluded or unoccluded; for a static object to be detected, the depth value recorded at its position while it was unoccluded is subtracted from the depth value of the moving object occluding it, giving a depth deviation (formalized in the formula below). When the depth deviation is smaller than the occlusion judgment threshold, the static object to be detected is judged to be in contact with the moving object occluding it. The occlusion judgment threshold is about 10 times the minimum resolution unit of the depth camera.
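Written as a formula (this restatement and the symbols Δd, d_mov, d_static, T and δ_min are editorial notation, not part of the original text; the absolute value is used here because the sign of the raw difference depends on which object is nearer):

```latex
\Delta d = \left| d_{\mathrm{mov}} - d_{\mathrm{static}} \right|,
\qquad
\text{contact} \iff \Delta d < T,
\qquad
T \approx 10\,\delta_{\min}
```

where d_mov is the depth of the occluding moving object measured at the static object's position, d_static is the depth recorded for the static object while it was unoccluded, and δ_min is the minimum resolution unit of the depth camera.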
Further, in step (1), the depth image camera includes binocular cameras and structured-light cameras.
Further, in step (2.1), the background removal algorithm is a Gaussian mixture model algorithm, a kernel density estimation algorithm, the ViBe algorithm, a background subtraction method, or the like.
Further, in step (3), since the position of a static object in a static scene is fixed, the static object can also be identified and located by an active pre-marking method.
The beneficial effects of the invention are as follows: the method overcomes the limitation that ordinary camera footage carries only two-dimensional information, from which the contact relation between objects cannot be judged accurately. It is suitable for contact recognition between moving and static objects and, in particular, provides a fast and practical discrimination scheme under partial and complete occlusion: at low computational cost, it can accurately judge whether a moving object and a static object are in contact even when occlusion occurs.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a LeNet neural network architecture;
FIG. 3 is a flow chart of the static object occlusion determination.
Detailed Description
The invention is described in further detail below with reference to the figures and examples.
As shown in FIG. 1, an object contact identification method based on depth image information comprises the following steps:
(1) Depth image acquisition: acquire the original depth image information with a depth image camera.
(2) Object segmentation: divide the objects into moving objects and static objects and segment each type separately, through the following substeps:
(2.1) Moving object segmentation: extract foreground moving objects using a background removal algorithm.
(2.2) Static object segmentation: segment and extract static objects using a watershed algorithm on the depth image (an illustrative sketch of both substeps follows this step).
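A minimal sketch of the two segmentation substeps, assuming OpenCV is used: the text names only the algorithm families (a background removal algorithm and a watershed on the depth image), so the choice of the MOG2 subtractor, the morphological clean-up, the marker-seeded cv2.watershed call, and all parameter values are illustrative, not prescribed by the method.

```python
# Illustrative sketch of step (2); MOG2 and cv2.watershed are assumed choices,
# the patent only specifies "background removal" and "watershed on the depth image".
import cv2
import numpy as np

# (2.1) Moving-object segmentation: background removal on the depth stream.
bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)  # history value is illustrative

def segment_moving(depth_frame_8u):
    """depth_frame_8u: single-channel depth image scaled to 8 bits for the subtractor."""
    fg_mask = bg_subtractor.apply(depth_frame_8u)
    # remove small speckle noise so each foreground blob is one moving object
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return fg_mask  # non-zero pixels belong to foreground (moving) objects

# (2.2) Static-object segmentation: watershed driven by the depth image.
def segment_static(depth_frame_8u, marker_seeds):
    """marker_seeds: int32 label image with one positive label per seed region."""
    depth_bgr = cv2.cvtColor(depth_frame_8u, cv2.COLOR_GRAY2BGR)  # watershed expects a 3-channel image
    labels = cv2.watershed(depth_bgr, marker_seeds.copy())
    return labels  # -1 marks watershed boundaries, positive values are object regions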
(3) Identification of the objects to be detected: step (2) segments the objects in the scene one by one, but they are not necessarily the objects we need to examine, so each segmented object must be identified. Identification of the objects to be detected is typically accomplished with artificial intelligence, and current mainstream object recognition algorithms are built around convolutional neural networks (CNNs). Taking the classic LeNet as an example, as shown in FIG. 2, the network consists of convolutional layers, pooling layers, and fully connected layers. In an image, nearby pixels are strongly correlated while distant pixels are only weakly correlated, so each neuron need not perceive the whole image: it only needs to perceive local information, which higher layers then aggregate into global information. The convolutional layers implement these local receptive fields, and because convolution shares weights across positions, the number of parameters is reduced. The pooling layers shrink the input maps, discarding redundant pixel information and keeping only the important information, mainly to reduce the amount of computation; the common variants are max pooling and mean pooling. The fully connected layers act as the classifier of the whole network and produce the final object classification result. Each layer introduces non-linearity through an activation function; common activation functions are sigmoid, tanh, and ReLU, with sigmoid and tanh typically used in fully connected layers and ReLU in convolutional layers.
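A compact LeNet-style network matching this description (convolution, pooling, fully connected layers, ReLU in the convolutional stages), sketched here in PyTorch; the 1x32x32 input size and the class count are assumptions for illustration and are not taken from the patent.

```python
# LeNet-style sketch; input size and number of classes are illustrative assumptions.
import torch
import torch.nn as nn

class LeNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # local receptive fields with shared weights
            nn.ReLU(),
            nn.MaxPool2d(2),                  # pooling shrinks the map, keeps salient info
            nn.Conv2d(6, 16, kernel_size=5),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(      # fully connected layers act as the classifier
            nn.Linear(16 * 5 * 5, 120),
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                  # expects N x 1 x 32 x 32 input
        return self.classifier(torch.flatten(x, 1))

logits = LeNet()(torch.randn(1, 1, 32, 32))   # -> shape (1, 10)
```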
(4) Contact identification of the objects to be detected, which divides into the following two cases (a sketch of both tests follows this step):
(4.1) When the object to be detected is partially occluded or not occluded: by the nature of depth image information, two objects that are in contact and only partially occluded (or not occluded) necessarily share a point at which their depth values are connected, so contact between the object to be detected and other objects can be identified directly by connected-domain analysis.
(4.2) When the object to be detected is completely occluded, it no longer appears in the scene. For a moving object to be detected, its contact relation while fully occluded is judged from the contact identification result obtained while it was still partially occluded or unoccluded. For a static object to be detected, its depth information is in theory fixed, so when it is completely occluded, the depth value recorded at its position while it was unoccluded can be subtracted from the depth value of the moving object occluding it to obtain a depth deviation, as shown in FIG. 3. When the depth deviation is smaller than the occlusion judgment threshold, the static object to be detected is judged to be in contact with the moving object occluding it. The occlusion judgment threshold is a preset small depth deviation; it is set according to the judgment scale of the actual detection scene, and is generally about 10 times the minimum resolution unit of the depth camera.
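The two contact tests of step (4) can be sketched as follows. The helper names (touching_when_visible, touching_when_fully_occluded), the use of SciPy's connected-component labelling, the median depth estimate, and the absolute-value comparison are illustrative choices; only the 10x-minimum-resolution threshold comes from the text above.

```python
# Hedged sketch of step (4): contact tests for the visible and the fully occluded case.
import numpy as np
from scipy import ndimage

def touching_when_visible(mask_a: np.ndarray, mask_b: np.ndarray) -> bool:
    """Partial/no occlusion: the two objects touch if their pixels fall into one
    connected domain of the combined mask (8-connectivity)."""
    combined = (mask_a | mask_b).astype(np.uint8)
    labels, _ = ndimage.label(combined, structure=np.ones((3, 3)))
    labels_a = set(np.unique(labels[mask_a])) - {0}
    labels_b = set(np.unique(labels[mask_b])) - {0}
    return bool(labels_a & labels_b)

def touching_when_fully_occluded(depth_map, static_mask, static_depth_unoccluded,
                                 depth_resolution) -> bool:
    """Full occlusion of a static object: compare the occluder's depth at the static
    object's (known, fixed) position with the static object's unoccluded depth."""
    occluder_depth = float(np.median(depth_map[static_mask]))   # median is an illustrative estimator
    deviation = abs(occluder_depth - static_depth_unoccluded)
    return deviation < 10 * depth_resolution                    # threshold stated in the method
```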
In addition, in step (1), the depth image camera includes binocular cameras and structured-light cameras. In step (2.1), the background removal algorithm may be a Gaussian mixture model algorithm, a kernel density estimation algorithm, the ViBe algorithm, a background subtraction method, or the like.
Further, in step (3), since the position of a static object in a static scene is fixed, the static object can also be identified and located by an active pre-marking method.
The method overcomes the limitation that ordinary camera footage carries only two-dimensional information, from which the contact relation between objects cannot be judged accurately. It is suitable for contact recognition between moving and static objects and, in particular, provides a fast and practical discrimination scheme under partial and complete occlusion: at low computational cost, it can accurately judge whether a moving object and a static object are in contact even when occlusion occurs.

Claims (4)

1. An object contact identification method based on depth image information is characterized by comprising the following steps:
(1) completing depth image information acquisition: acquiring original depth image information by a depth image camera;
(2) object segmentation: dividing the objects into moving objects and static objects and segmenting each type respectively, comprising the following substeps:
(2.1) moving object segmentation: extracting foreground moving objects by using a background removal algorithm;
(2.2) static object segmentation: segmenting and extracting static objects by using a watershed algorithm on the depth image;
(3) identifying the objects to be detected: identifying the objects to be detected among the objects segmented in step (2) through artificial intelligence;
(4) contact identification of the objects to be detected, divided into the following two cases:
(4.1) when the object to be detected is partially occluded or not occluded, performing contact identification between the object to be detected and other objects directly by a connected-domain analysis method;
(4.2) when the object to be detected is completely occluded: for a moving object to be detected, judging its contact relation while completely occluded from the contact identification result obtained while it was partially occluded or not occluded; for a static object to be detected, subtracting the depth value of its position when it was not occluded from the depth value of the moving object occluding it to obtain a depth deviation; when the depth deviation is smaller than the occlusion judgment threshold, judging that the static object to be detected is in contact with the moving object occluding it; the occlusion judgment threshold is 10 times the minimum resolution unit of the depth image camera.
2. The method of claim 1, wherein in step (1), the depth image camera comprises a binocular camera or a structured-light camera.
3. The method according to claim 1, wherein in step (2.1), the background removal algorithm is a Gaussian mixture model algorithm, a kernel density estimation algorithm, a ViBe algorithm or a background subtraction method.
4. The method according to claim 1, wherein in step (3), the static object is further identified and located by an active pre-marking method.
CN201910875363.2A 2019-09-17 2019-09-17 Object contact identification method based on depth image information Active CN110738665B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910875363.2A CN110738665B (en) 2019-09-17 2019-09-17 Object contact identification method based on depth image information

Publications (2)

Publication Number Publication Date
CN110738665A CN110738665A (en) 2020-01-31
CN110738665B true CN110738665B (en) 2021-10-29

Family

ID=69267984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910875363.2A Active CN110738665B (en) 2019-09-17 2019-09-17 Object contact identification method based on depth image information

Country Status (1)

Country Link
CN (1) CN110738665B (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102347248B1 (en) * 2014-11-26 2022-01-04 삼성전자주식회사 Method and apparatus for recognizing touch gesture
CN107357414B (en) * 2016-05-09 2020-01-14 株式会社理光 Click action recognition method and device
CN106096554A (en) * 2016-06-13 2016-11-09 北京精英智通科技股份有限公司 Decision method and system are blocked in a kind of parking stall
CN108629782B (en) * 2018-04-28 2021-09-28 合肥工业大学 Road target depth estimation method based on ground clue propagation
CN108898676B (en) * 2018-06-19 2022-05-13 青岛理工大学 Method and system for detecting collision and shielding between virtual and real objects
CN109584347B (en) * 2018-12-18 2023-02-21 重庆邮电大学 Augmented reality virtual and real occlusion processing method based on active appearance model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant