WO2017181892A1 - Method and device for foreground segmentation - Google Patents

Method and device for foreground segmentation

Info

Publication number
WO2017181892A1
Authority
WO
WIPO (PCT)
Prior art keywords
points
point
matching
image
foreground
Prior art date
Application number
PCT/CN2017/080274
Other languages
English (en)
Chinese (zh)
Inventor
邓硕
马华东
罗圣美
傅慧源
刘培业
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2017181892A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24147 Distances to closest patterns, e.g. nearest neighbour classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence

Definitions

  • This document relates to, but is not limited to, image processing technology, and in particular to a foreground segmentation method and apparatus.
  • Foreground extraction refers to extracting foreground objects of arbitrary shape from a static image or a sequence of video images.
  • Foreground extraction techniques in the related art require the user to mark foreground pixels or regions; by analyzing the pixels of those regions, a rough outline of the target in the image is obtained.
  • By extracting a saliency map model of the image from global features such as color, brightness, and orientation, the regions most likely to attract user interest and best represent the image content can be identified.
  • The problem of salient feature detection arises from the computer simulation of human vision, aiming to reproduce the human eye's ability to select objects.
  • Low-level visual features such as color, orientation, brightness, texture, and edges play an important role in saliency detection models.
  • The human eye is more sensitive to the color information of an image than to other visual features, so statistics of color features are especially important in computer vision.
  • Two methods of color feature computation are widely used in saliency detection: the first builds color histograms and compares the differences between them; the second divides the image into blocks and compares each block's mean color with that of the other blocks to obtain color saliency. Brightness is likewise one of the most basic visual features of an image.
  • When the luminance feature is computed, the luminance component of a local feature region is extracted.
  • Statistical values represent the overall brightness of the region, and the brightness saliency of the image is then obtained by comparison with other regions. Orientation features reflect essential properties of object surfaces; in image saliency detection they are mainly computed with the Gabor energy method, which simulates well the multi-channel, multi-resolution characteristics of the human visual system.
  • Salient features are based on the global features of the image and can simulate well the regions the human eye attends to, but they have the following disadvantages. First, the selection of salient regions is highly subjective: because different users have different needs, the regions of interest in the same image may differ greatly. Second, global image features are not robust to local changes in the target. Moreover, in practice the method requires manual intervention to mark the global feature blocks of the target region. For processing small numbers of images this is workable, but with the development of search engines and networks, data volumes have exploded; methods that process only a few images at a time fall far short of users' urgent needs, and manual intervention makes it difficult to produce acceptable results on huge image databases.
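The block-based colour comparison described above can be sketched in a few lines. This is an illustrative sketch only, not the method of the patent or of any cited work: the bin count, the L1 histogram distance, and the function names are my own choices.

```python
import numpy as np

def color_histogram(block, bins=8):
    """Per-channel histogram of an RGB block, normalised to sum to 1."""
    h = np.concatenate([np.histogram(block[..., c], bins=bins, range=(0, 256))[0]
                        for c in range(3)]).astype(float)
    return h / h.sum()

def block_saliency(blocks):
    """Colour saliency of each block: mean histogram distance to all other blocks."""
    hists = [color_histogram(b) for b in blocks]
    n = len(hists)
    return [np.mean([np.abs(hists[i] - hists[j]).sum() for j in range(n) if j != i])
            for i in range(n)]
```

A block whose colour distribution differs most from the others receives the highest saliency score, matching the intuition that an unusual region attracts attention.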
  • Motion regions in an image are typically extracted using the difference between adjacent frame images in an image sequence.
  • The images of two adjacent frames are converted to grayscale and then registered in the same coordinate system.
  • When the difference operation is performed, the background portion whose grayscale does not change is clipped off. Since the region of interest is usually a moving target, the contour of the region where the grayscale changes, i.e., the approximate contour of the region of interest, can be obtained through the difference operation, thereby determining the foreground image.
  • The adjacent frame difference method solves the foreground extraction problem well for simple-scene video sequences, but because it requires consecutive adjacent video frames as input, it is difficult to apply to still images. Moreover, for complex or changing backgrounds, the frame difference method is less robust.
  • The method proposed for still images, which acquires an approximate foreground region from salient image features, uses global image features, cannot account for local details of the image, and is not robust. Due to background complexity, image similarity, and other factors, the foreground contour of the object may contain small flaws, so the accuracy of the algorithm still needs improvement.
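The adjacent frame difference method described above can be sketched as follows. This is a minimal illustration, assuming two grayscale frames of equal size that are already registered; the threshold value is my own choice.

```python
import numpy as np

def frame_difference(frame_a, frame_b, threshold=25):
    """Adjacent-frame difference: two grayscale frames in, binary motion mask out.
    Pixels whose grayscale changed by more than `threshold` are marked 1."""
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return (diff > threshold).astype(np.uint8)
```

The unchanged background is clipped to 0, while the moving region survives in the mask, giving the approximate contour of the region of interest.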
  • the embodiment of the invention provides a foreground segmentation method and device, which can improve the accuracy of automatic foreground segmentation of image matching.
  • the embodiment of the invention discloses a foreground segmentation method, which comprises:
  • the image segmentation algorithm is used to obtain the foreground target in the image.
  • extracting local feature information of the two input images includes:
  • The two images input by the user are converted to grayscale, and the local feature information of each image is extracted with the Speeded-Up Robust Features (SURF) algorithm.
  • performing key point matching according to the extracted local feature information includes:
  • Filtering incorrect matching points out of the obtained key point matches to obtain all correct matching points includes:
  • The ratio of s_n to s_n′ is the ratio of the scale of a key point in the first input image to the scale of its matching point in the second input image; taking the logarithm gives the scale ratio. θ_n and θ_n′ are the orientations of the key point in the first input image and of its matching point in the second input image; their difference gives the orientation difference.
  • using cluster analysis to derive feature point groups on the foreground target from all the correct matching points includes:
  • The following algorithm is used: the cluster centroids of the k clusters are randomly selected as seed points.
  • For each point, the distance to the k seed points is calculated, and the point closest to seed point μ_n is assigned to the μ_n point group, where the Euclidean distance in the 128-dimensional SIFT feature space is computed according to the following formula:
  • S_i represents the i-th dimension of the SIFT feature.
  • R_n indicates that the selected centroids of the k clusters all belong to clusters formed by randomly drawing n points from the point set.
  • The embodiment of the invention further discloses a foreground segmentation device, comprising:
  • a first unit configured to separately extract local feature information of two input images and to match key points according to the extracted local feature information;
  • a second unit configured to filter incorrect matching points out of the obtained key point matches to obtain all correct matching points;
  • a third unit configured to use cluster analysis to derive, from all the correct matching points, the feature point group on the foreground target;
  • a fourth unit configured to obtain the foreground target in the image with an image segmentation algorithm according to the obtained feature point group.
  • The first unit being configured to extract the local feature information of the two input images includes:
  • the two images input by the user are converted to grayscale, and the local feature information of each image is extracted with the Speeded-Up Robust Features (SURF) algorithm.
  • The first unit being configured to match key points according to the extracted local feature information includes:
  • based on the extracted local feature information, a nearest-neighbor algorithm is used to determine, for each key point in the first of the two input images, the corresponding matching point in the second input image.
  • The second unit being configured to filter incorrect matching points out of the obtained key point matches to obtain all correct matching points includes:
  • The third unit being configured to use cluster analysis to derive the feature point group on the foreground target from all the correct matching points includes:
  • the following algorithm is used: the cluster centroids of the k clusters are randomly selected as seed points;
  • for each point, the distance to the k seed points is calculated, and the point closest to seed point μ_n is assigned to the μ_n point group, where the Euclidean distance in the 128-dimensional SIFT feature space is computed according to the following formula:
  • each seed point μ_n is recalculated repeatedly until the center of each class stabilizes, yielding a foreground seed point group and a background seed point group, which are taken as the feature point groups.
  • The technical solution provided by the embodiments of the present invention includes: extracting local feature information from each of two input images and matching key points according to the extracted local feature information; filtering incorrect matching points out of the obtained key point matches to obtain all correct matching points; using cluster analysis to derive, from all the correct matching points, the feature point group on the foreground target; and, according to the obtained feature point group, using an image segmentation algorithm to obtain the foreground target in the image.
  • The embodiment of the invention improves the accuracy of the foreground object extracted from the image, reduces foreground processing time, and improves image processing efficiency.
  • The technical solution of the embodiment can obtain the foreground target in the image objectively, so the result is more accurate and intuitive; it can replace the traditional human-computer interaction method, reduces overall time, improves efficiency, and achieves good results on the experimental data set.
  • The problem of losing local feature information in the image is solved, and the robustness of the method is improved.
  • The accuracy of the foreground segmentation contour is improved.
  • FIG. 1 is a flowchart of a foreground segmentation method according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of extracting local feature information of an image according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of an image obtained after cluster analysis processing according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of the separation of a foreground object and the background according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of test images and foreground segmentation results using an embodiment of the present invention.
  • FIG. 6 is a structural block diagram of a foreground segmentation apparatus according to an embodiment of the present invention.
  • The embodiment of the present invention proposes a dual-image joint automatic foreground extraction method: the local features of the images are extracted, the foreground region is obtained through feature point matching and cluster analysis, and an image segmentation algorithm then segments the foreground image.
  • Local features are features that appear locally in an image, are stably present, and are well distinguishable. Unlike global features such as variance and color, local features better summarize the information carried by the image, reduce the amount of computation, and improve the algorithm's robustness to interference.
  • This embodiment provides a foreground segmentation method, as shown in FIG. 1, including:
  • Step 100: Extract local feature information from each of the two input images, and match key points according to the extracted local feature information.
  • extracting local feature information of the two input images includes:
  • The two images input by the user are converted to grayscale, and the local feature information of each image is extracted using the Speeded-Up Robust Features (SURF) algorithm.
  • SURF: Speeded-Up Robust Features.
  • performing key point matching according to the extracted local feature information includes:
  • Step 200: Filter incorrect matching points out of the obtained key point matches to obtain all correct matching points.
  • Filtering incorrect matching points out of the obtained key point matches to obtain all correct matching points includes:
  • The ratio of s_n to s_n′ is the ratio of the scale of a key point in the first input image to the scale of its matching point in the second input image; taking the logarithm gives the scale ratio. θ_n and θ_n′ are the orientations of the key point in the first input image and of its matching point in the second input image; their difference gives the orientation difference.
  • Step 300: Use cluster analysis to derive the feature point group on the foreground target from all correct matching points.
  • using cluster analysis to derive the feature point groups on the foreground target from all the correct matching points includes:
  • S_i represents the i-th dimension of the SIFT feature.
  • R_n indicates that the selected centroids of the k clusters all belong to clusters formed by randomly drawing n points from the large point set.
  • For each point, the distance to the k seed points is calculated, and the point closest to seed point μ_n is assigned to the μ_n point group, where the Euclidean distance in the 128-dimensional Scale-Invariant Feature Transform (SIFT) feature space is computed according to the following formula:
  • Each seed point μ_n is recalculated repeatedly until the center of each class stabilizes, yielding a foreground seed point group and a background seed point group, which are taken as the feature point groups.
  • Step 400: Obtain the foreground target in the image with an image segmentation algorithm according to the obtained feature point group.
  • Step 100 can be summarized as feature matching and comprises two operations: (1) extracting local feature information from each of the two input images; (2) matching key points according to the extracted local features.
  • The local features of the image involved in this embodiment differ from the global features of the image; they are features that appear locally.
  • Feature points that remain stable can easily and accurately describe the features of the image; examples include Harris, SIFT, SURF, and FAST (image matching methods existing in the related art).
  • Step 200 can be summarized as matching point screening.
  • Incorrect matching points can be filtered out according to the scale ratio and the orientation difference of the matching points.
  • Step 300 can be summarized as foreground image extraction.
  • Cluster analysis is first used to obtain the group of feature points on the foreground target.
  • Then an image segmentation algorithm is used to obtain the foreground target in the image.
  • Cluster analysis, as used in this embodiment, refers to the process of grouping data into classes or clusters such that data within the same cluster are highly similar while data in different clusters differ greatly. It is unsupervised learning that does not rely on predefined classes or labeled training examples; k-means (existing in the related art) is one example.
  • Image segmentation, as used in this embodiment, is the technique and process of dividing an image into several specific regions with unique properties and extracting the objects of interest, e.g., threshold-based segmentation, region-based segmentation, edge-based segmentation, and segmentation based on specific theories.
  • Cluster analysis is used in this embodiment because it converts abstract key point information into a foreground region, providing support for the subsequent image segmentation; the joint application of image matching and segmentation improves on the traditional interactive approach to image segmentation.
  • A clustering algorithm is run on the local features of the original input images to obtain a proposed region for the foreground object, and the overall foreground segmentation of the image is finally performed with the graph cut method.
  • Step 1: Input image feature matching, which includes:
  • FIG. 2 is a schematic diagram of extracting the local feature information of an image according to an embodiment of the present invention. As shown in FIG. 2, the images input by the user are converted to grayscale, and SURF feature extraction obtains the local feature information of each image.
  • KNN: K-Nearest Neighbor.
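The nearest-neighbour matching in step 1 can be illustrated with a small sketch. It assumes the descriptors (e.g. SURF vectors) have already been extracted, and it filters candidates with Lowe's ratio test, a common companion to KNN matching that the patent text does not itself specify; the function name and the ratio threshold are my own assumptions.

```python
import numpy as np

def knn_match(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbour matching with Lowe's ratio test.
    desc_a: (N, D) descriptors of image A; desc_b: (M, D) descriptors of image B.
    Returns (index_in_a, index_in_b) pairs whose best match is clearly better
    than the second-best."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distance to every B descriptor
        j1, j2 = np.argsort(dists)[:2]               # two nearest neighbours
        if dists[j1] < ratio * dists[j2]:            # ratio test rejects ambiguous matches
            matches.append((i, j1))
    return matches
```

A descriptor with two similarly close neighbours is discarded as ambiguous, which is what makes the ratio test useful before any further screening.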
  • Step 2: Screening of matching points.
  • The embodiment of the present invention screens the results obtained in the first step to obtain a better foreground target matching point region.
  • The matching points are filtered using the constructed two-dimensional data.
  • The region where the two-dimensional array is densely distributed can then be obtained.
  • The interference points of the background are removed in this way.
  • Step 3: Foreground image extraction.
  • This step is a core step of the embodiment of the present invention.
  • The method of the embodiment applies data cluster analysis to the homogeneity analysis of the key points, so that the image feature matching method and the image segmentation method can be organically combined.
  • The set c(i) of matching key points in image A is obtained. Due to the complexity of the image background, the matching key points very likely contain interference matching points similar to the foreground target key points.
  • Therefore, the embodiment of the present invention uses the K-means cluster analysis algorithm to group the key points obtained in the previous step, obtaining the key points of the foreground target and improving the accuracy of image segmentation.
  • The clustering method of the embodiment does not group key points by their positional distance; instead, it uses the 128-dimensional SIFT feature of each key point and analyzes the Euclidean distance between key points in the SIFT feature space.
  • The method of the embodiment can better analyze the shared attributes of the feature points, thereby obtaining a more accurate foreground proposal region.
  • The K-means algorithm clusters the samples x(i) into k clusters; it belongs to unsupervised learning, so the user does not need to provide category labels for the samples.
  • the algorithm is described as follows:
  • For each point, the distance to the k seed points is calculated. If point c(n) is closest to seed point μ_n, then c(n) belongs to the μ_n point group. In the present invention, the Euclidean distance is computed in the 128-dimensional SIFT feature space:
  • S_n is the scale information of the matching point.
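The K-means grouping described above can be sketched as follows, assuming the 128-dimensional SIFT descriptors of the matched key points are stacked in a NumPy array; with k = 2, one cluster plays the role of the foreground group and the other the background group. The initialization scheme, iteration limit, and empty-cluster handling are my own choices, not taken from the patent.

```python
import numpy as np

def kmeans(points, k=2, iters=50, seed=0):
    """Plain K-means in descriptor space (e.g. 128-dim SIFT).
    Returns (labels, centroids); labels[i] is the cluster index of points[i]."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]  # random seeds
    for _ in range(iters):
        # Euclidean distance of every point to every centroid
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)                   # assign to nearest seed point
        new = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j]           # keep old centroid if cluster empties
                        for j in range(k)])
        if np.allclose(new, centroids):             # centers have stabilised
            break
        centroids = new
    return labels, centroids
```

Repeating the assignment and recomputation until the centers stabilise is exactly the "recalculate each seed point until each class center stabilises" loop described in the text.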
  • FIG. 3 is a schematic diagram of an image obtained after cluster analysis processing according to an embodiment of the present invention. As shown in FIG. 3, the foreground seed point group and the background seed point group mark the foreground region and the background region of the image, respectively.
  • Finally, the embodiment of the present invention performs foreground extraction.
  • An image segmentation algorithm in the related art is used to cut the image, whose foreground and background regions have been marked, and extract the target contour.
  • V and E are the sets of vertices and edges, respectively.
  • There are two types of vertices and edges: the first type is an ordinary vertex, one for each pixel in the image; the connection between the vertices of every two neighboring pixels (corresponding to two neighboring pixels in the figure) is an edge, called an n-link.
  • S: the source vertex.
  • T: the sink vertex.
  • These terminal vertices have connections to each ordinary vertex; such edges are called t-links.
  • E(L) = R(L) + B(L), where R(L) is the region term and B(L) is the boundary term.
  • E(L) represents the total weight and is also called the energy function.
  • The goal of image segmentation is to minimize the energy function.
  • The weights of the region term are as follows:
  • The region term weight represents the weight of the t-link edges: the higher the probability that a point belongs to S or T, the greater its weight, and vice versa.
  • The boundary term represents the weight of the n-link edges: the higher the similarity of two adjacent pixels, the greater the weight of the edge connecting them.
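The energy function and its minimisation can be illustrated with a toy example. Real graph cut implementations minimise E(L) with a max-flow/min-cut algorithm; purely for illustration, the sketch below evaluates E(L) = R(L) + B(L) (with an optional weighting λ on the region term, my own addition) by brute force over all labelings of a handful of pixels. All cost values in the example are invented.

```python
from itertools import product

def energy(labels, unary, pairs, lam=1.0):
    """E(L) = lam * R(L) + B(L).
    unary[p] = (cost of labeling pixel p background, cost of labeling it foreground);
    pairs = (p, q, w) triples: boundary weight w is paid only when labels differ."""
    R = sum(unary[p][l] for p, l in enumerate(labels))
    B = sum(w for p, q, w in pairs if labels[p] != labels[q])
    return lam * R + B

def min_cut_bruteforce(unary, pairs, lam=1.0):
    """Exhaustively search all 0/1 labelings for the minimum-energy one.
    Feasible only for tiny graphs; stands in for max-flow here."""
    n = len(unary)
    best = min(product([0, 1], repeat=n),
               key=lambda L: energy(L, unary, pairs, lam))
    return list(best)
```

In a 4-pixel chain where the ends are strongly foreground and strongly background, the minimum-energy labeling places the cut on the weakest boundary edge, which is exactly the "find the smallest edges and break them" behaviour described for FIG. 4.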
  • FIG. 4 is a schematic diagram of the foreground target separated from the background according to an embodiment of the present invention. As shown in FIG. 4, after weights are assigned to each edge, the cut with the smallest total weight is found and those edges are broken, so that the target and the background are separated.
  • Paired images can be randomly selected from the CMU-Cornell dataset as the test set for the method; because the dataset of targets contained in the images is open-sourced by CMU-Cornell, ground-truth contour maps are available as a test set for measuring the method's accuracy.
  • Experimental results are shown in FIG. 5: with the embodiment of the present invention, the foreground target is segmented from the background, and the approximate outline of the foreground image is obtained.
  • This embodiment provides a foreground segmentation device, as shown in FIG. 6, including:
  • the first unit is configured to separately extract local feature information of the two input images, and perform matching of the key points according to the extracted local feature information;
  • The first unit being configured to extract the local feature information of the two input images includes:
  • the two images input by the user are converted to grayscale, and the local feature information of each image is extracted with the Speeded-Up Robust Features (SURF) algorithm.
  • The first unit being configured to match key points according to the extracted local feature information includes:
  • The second unit is configured to filter incorrect matching points out of the obtained key point matches to obtain all correct matching points;
  • the second unit being configured to filter incorrect matching points out of the obtained key point matches to obtain all correct matching points includes:
  • The third unit is configured to use cluster analysis to derive, from all the correct matching points, the feature point group on the foreground target;
  • the third unit using cluster analysis to derive the feature point group on the foreground target from all the correct matching points includes:
  • the following algorithm is used: the cluster centroids of the k clusters are randomly selected as seed points;
  • for each point, the distance to the k seed points is calculated, and the point closest to seed point μ_n is assigned to the μ_n point group, where the Euclidean distance in the 128-dimensional SIFT feature space is computed according to the following formula:
  • each seed point μ_n is recalculated repeatedly until the center of each class stabilizes, yielding a foreground seed point group and a background seed point group, which are taken as the feature point groups.
  • The fourth unit is configured to obtain the foreground target in the image with an image segmentation algorithm according to the obtained feature point group.
  • The device of this embodiment can implement the method of Embodiment 1 above.
  • For the other operations of the foregoing device, refer to the corresponding content of Embodiment 1; details are not described here again.
  • The technical solution of the present application utilizes image features and addresses the core problem of automatic foreground extraction from still images.
  • First, the feature points of the two images are extracted.
  • The contour of the region of interest is obtained through cluster analysis.
  • Then an image segmentation algorithm is used to extract the foreground target of the still image automatically. The method is especially suitable for still image data and achieves high accuracy.
  • The embodiment of the invention further provides a computer storage medium storing computer-executable instructions, which are used to execute the foreground segmentation method described above.
  • An embodiment of the present invention further provides a foreground segmentation apparatus, including a memory and a processor, wherein
  • the processor is configured to execute the program instructions in the memory to perform the foreground segmentation method described above,
  • in which an image segmentation algorithm is used to obtain the foreground target in the image.
  • Each module/unit in the foregoing embodiments may be implemented in hardware, for example by an integrated circuit implementing its corresponding function, or in the form of a software function module, for example a program/instructions stored in a memory and executed by a processor to implement its corresponding function.
  • The invention is not limited to any specific form of combination of hardware and software.
  • The above technical solution improves the accuracy of the foreground target extracted from the image, reduces foreground processing time, and improves image processing efficiency.

Abstract

The invention relates to a foreground segmentation method and device, the method comprising: extracting local feature information from each of two input images, and matching key points according to the extracted local feature information; removing mismatched points from the matching points among the obtained key points to obtain all correct matching points; performing cluster analysis to obtain, from all the correct matching points, a feature point group on a foreground object; and, according to the obtained feature point group, using an image segmentation method to obtain the foreground object in the image. Embodiments of the present invention improve the accuracy of a foreground object in an image, reduce foreground processing time, and make image processing more efficient.
PCT/CN2017/080274 2016-04-19 2017-04-12 Method and device for foreground segmentation WO2017181892A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610244730.5A CN107305691A (zh) Foreground segmentation method and device based on image matching
CN201610244730.5 2016-04-19

Publications (1)

Publication Number Publication Date
WO2017181892A1 true WO2017181892A1 (fr) 2017-10-26

Family

ID=60115618

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/080274 WO2017181892A1 (fr) Method and device for foreground segmentation

Country Status (2)

Country Link
CN (1) CN107305691A (fr)
WO (1) WO2017181892A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109977809A (zh) * An adaptive crowd grouping detection method
CN110555444A (zh) * A feature matching screening algorithm based on local clustering
CN112601029A (zh) * A video segmentation method with known background prior information, terminal and storage medium
CN117692649A (zh) * An efficient transmission method for ship remote monitoring video based on image feature matching

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112862829B (zh) * 2019-11-27 2024-03-12 武汉Tcl集团工业研究院有限公司 Label image segmentation method, device and storage medium
CN111612824A (zh) * 2020-05-26 2020-09-01 天津市微卡科技有限公司 A consciousness tracking and recognition algorithm for robot control
CN112001939B (zh) * 2020-08-10 2021-03-16 浙江大学 Image foreground segmentation algorithm based on edge knowledge transfer
CN112150512B (zh) * 2020-09-30 2023-12-15 中国科学院上海微系统与信息技术研究所 Impact point localization method fusing the background difference method and clustering
CN112287193B (zh) * 2020-10-30 2022-10-04 腾讯科技(深圳)有限公司 Image segmentation method and apparatus, computer device, and storage medium
CN112347899B (zh) * 2020-11-03 2023-09-19 广州杰赛科技股份有限公司 Moving target image extraction method, apparatus, device, and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100098331A1 (en) * 2008-09-26 2010-04-22 Sony Corporation System and method for segmenting foreground and background in a video
CN101859436A (zh) * 2010-06-09 2010-10-13 王巍 Intelligent analysis and control system for backgrounds with large-scale regular motion
CN102184550A (zh) * 2011-05-04 2011-09-14 华中科技大学 Ground moving target detection method for a mobile platform
CN102663776A (zh) * 2012-03-31 2012-09-12 北京智安邦科技有限公司 Method and device for detecting violent motion based on feature point analysis
CN102708370A (zh) * 2012-05-17 2012-10-03 北京交通大学 Method and device for extracting foreground objects from multi-view images
CN102819835A (zh) * 2012-07-26 2012-12-12 中国航天科工集团第三研究院第八三五七研究所 Method for screening feature point matching pairs in image stitching
CN103714544A (zh) * 2013-12-27 2014-04-09 苏州盛景空间信息技术有限公司 Optimization method based on SIFT feature point matching


Also Published As

Publication number Publication date
CN107305691A (zh) 2017-10-31

Similar Documents

Publication Publication Date Title
WO2017181892A1 (fr) Foreground segmentation method and device
US11681418B2 (en) Multi-sample whole slide image processing in digital pathology via multi-resolution registration and machine learning
WO2019218824A1 (fr) Motion track acquisition method and related device, storage medium and terminal
Jia et al. Category-independent object-level saliency detection
Peng et al. RGBD salient object detection: A benchmark and algorithms
Cheng et al. Efficient salient region detection with soft image abstraction
US20160026899A1 (en) Text line detection in images
US9519660B2 (en) Information processing apparatus, clustering method, and recording medium storing clustering program
US9626585B2 (en) Composition modeling for photo retrieval through geometric image segmentation
Tian et al. Learning complementary saliency priors for foreground object segmentation in complex scenes
AU2014277853A1 (en) Object re-identification using self-dissimilarity
Manfredi et al. A complete system for garment segmentation and color classification
WO2019197021A1 (fr) Device and method for instance-level segmentation of an image
Wu et al. Scene text detection using adaptive color reduction, adjacent character model and hybrid verification strategy
Zhang et al. Salient object detection via compactness and objectness cues
Bai et al. Principal pixel analysis and SVM for automatic image segmentation
CN107610136B (zh) Salient object detection method based on sorting query points at convex-hull structure centers
Chen et al. Visual saliency detection based on homology similarity and an experimental evaluation
Lecca et al. Comprehensive evaluation of image enhancement for unsupervised image description and matching
CN108664968B (zh) Unsupervised text localization method based on a text selection model
Zeeshan et al. A newly developed ground truth dataset for visual saliency in videos
Lu et al. Spectral segmentation via midlevel cues integrating geodesic and intensity
Hati et al. An image texture insensitive method for saliency detection
Zhou et al. Modeling perspective effects in photographic composition
Liu et al. Color topographical map segmentation algorithm based on linear element features

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 17785377

Country of ref document: EP

Kind code of ref document: A1

122 Ep: PCT application non-entry in European phase

Ref document number: 17785377

Country of ref document: EP

Kind code of ref document: A1