WO2019128460A1 - Image saliency detection method and apparatus - Google Patents
Image saliency detection method and apparatus
- Publication number
- WO2019128460A1 (PCT/CN2018/113429)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- saliency
- foreground
- background
- initial
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Definitions
- the present invention relates to the field of image processing, and in particular to an image saliency detection method and apparatus.
- the task of saliency detection is to determine the most important and informative part of a scene. It can be applied to a wide range of computer vision applications, including image retrieval, image compression, content-aware image editing, and object recognition.
- saliency detection methods can generally be divided into bottom-up and top-down models: bottom-up methods are data-driven and require no pre-training, while top-down methods are task-driven and are usually pre-trained on annotated data.
- the purpose of the salient object detection model is to highlight prominent objects with sharp boundaries, which is useful for many high-level visual tasks.
- applying a foreground prior can explicitly extract the salient objects in an image. This prior has been widely used in research over the past few years, but relying on it alone cannot highlight the entire salient object.
- Another effective salient object detection model uses a background prior in the image to implicitly detect salient objects. By assuming that the narrow border region of an image is mostly background, the background prior can be used to compute a saliency map. This too causes problems, however, because image elements that differ from the border region are not always salient objects.
- existing salient object detection methods are not accurate enough when detecting salient objects and are not sufficiently robust, easily producing false detections and missed detections, so it is difficult to obtain an accurate image saliency detection result. This not only causes the salient object itself to be misdetected, but also introduces errors into applications that use the saliency detection result.
- Embodiments of the present invention provide an image saliency detection method and apparatus to at least solve the technical problem that the saliency detection result of an image in the prior art is not accurate enough.
- an image saliency detection method including: performing foreground-prior saliency calculation on an initial image to obtain a foreground saliency image; performing background-prior saliency calculation on the initial image to obtain a background saliency image; and fusing the foreground saliency image and the background saliency image to obtain an initial saliency image.
- an image saliency detecting apparatus including: a first calculation module configured to perform foreground-prior saliency calculation on the initial image to obtain a foreground saliency image and to perform background-prior saliency calculation on the initial image to obtain a background saliency image; and a fusion module configured to fuse the foreground saliency image and the background saliency image to obtain an initial saliency image.
- a storage medium comprising a stored program, wherein when the program runs, the device on which the storage medium resides is controlled to execute the image saliency detection method described above.
- a computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the image saliency detection method described above when executing the program.
- a foreground saliency image is obtained by performing foreground-prior saliency calculation on the initial image; a background saliency image is obtained by performing background-prior saliency calculation on the initial image; and the foreground saliency image and the background saliency image are fused to obtain an initial saliency image.
- the invention uses the foreground prior and the background prior simultaneously for salient object detection, thereby improving the accuracy of salient object detection, enhancing the robustness of saliency detection, and making the salient regions of an image appear more precisely, providing accurate and useful information for later applications such as object recognition and classification. It is applicable to more complex scenes and has a wider range of use, thus solving the technical problem in the prior art that the saliency detection result of an image is not accurate enough.
- FIG. 1 is a schematic diagram of an image saliency detecting method according to an embodiment of the present invention.
- FIG. 2 is a schematic diagram of an image saliency detecting apparatus according to an embodiment of the present invention.
- a method embodiment of an image saliency detection method is provided, it being noted that the steps illustrated in the flowchart of the accompanying drawings may be performed in a computer system such as a set of computer executable instructions. Also, although logical sequences are shown in the flowcharts, in some cases the steps shown or described may be performed in a different order than the ones described herein.
- FIG. 1 illustrates an image saliency detection method according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
- Step S102, performing foreground-prior saliency calculation on the initial image to obtain a foreground saliency image, and performing background-prior saliency calculation on the initial image to obtain a background saliency image;
- Step S104, fusing the foreground saliency image and the background saliency image to obtain an initial saliency image.
- two saliency images, one based on the background prior and one based on the foreground prior, are obtained by computing saliency values, and the two are then fused.
- the image salient object detection algorithm used in this embodiment, based on fusing the foreground prior and the background prior, detects salient objects more accurately and more robustly. It should be noted that the foreground-prior saliency calculation and the background-prior saliency calculation on the initial image may be performed synchronously or asynchronously; when performed asynchronously, the order is not limited.
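The two-branch flow of steps S102–S104 can be sketched as a skeleton in which each stage is a pluggable callable; the three lambdas below are toy stand-ins only, not the embodiment's actual per-branch computations:

```python
import numpy as np

def detect_saliency(image, fg_prior_saliency, bg_prior_saliency, fuse):
    """Skeleton of steps S102-S104: run the foreground-prior and
    background-prior branches independently (the text notes they may run
    synchronously or asynchronously, in either order), then fuse the two
    maps. The callables are placeholders for the stages detailed later."""
    fg_map = fg_prior_saliency(image)   # branch 1: foreground prior
    bg_map = bg_prior_saliency(image)   # branch 2: background prior
    return fuse(fg_map, bg_map)         # step S104: fusion

# toy stand-ins, purely to exercise the skeleton
img = np.array([[0.0, 1.0], [0.0, 1.0]])
initial = detect_saliency(img,
                          fg_prior_saliency=lambda im: im,
                          bg_prior_saliency=lambda im: 1.0 - im,
                          fuse=lambda a, b: (a + b) / 2.0)
```

Because the branches are independent, they could just as well run concurrently, consistent with the synchronous/asynchronous note above.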
- a foreground saliency image is obtained by performing foreground-prior saliency calculation on the initial image; a background saliency image is obtained by performing background-prior saliency calculation on the initial image; and the foreground saliency image and the background saliency image are fused to obtain an initial saliency image.
- the invention uses the foreground prior and the background prior simultaneously for salient object detection, thereby improving the accuracy of salient object detection, enhancing the robustness of saliency detection, and making the salient regions of an image appear more precisely, providing accurate and useful information for later applications such as object recognition and classification. It is applicable to more complex scenes and has a wider range of use, thus solving the technical problem in the prior art that the saliency detection result of an image is not accurate enough.
- before the foreground-prior saliency calculation is performed on the initial image in step S102, the method further includes: step S202, performing superpixel decomposition on the initial image to obtain a decomposed image. Performing the foreground-prior saliency calculation on the initial image in step S102 to obtain a foreground saliency image includes: step S302, performing foreground-prior saliency calculation on the decomposed image to obtain the foreground saliency image. Performing the background-prior saliency calculation on the initial image in step S102 to obtain a background saliency image includes: step S402, performing background-prior saliency calculation on the decomposed image to obtain the background saliency image.
- in order to better exploit structural information and abstract away small noise, the initial image may be superpixel-decomposed into a set of superpixels before the foreground-prior and background-prior saliency calculations are performed; both calculations are then carried out at the superpixel level.
- performing superpixel decomposition on the initial image in step S202 includes: step S502, performing superpixel decomposition on the initial image using simple linear iterative clustering.
- the SLIC (simple linear iterative clustering) algorithm may be used to decompose the initial image into superpixels.
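A minimal sketch of the SLIC idea — k-means in joint (color, x, y) space with grid-seeded centers — is given below for a grayscale image. Production implementations (e.g. `skimage.segmentation.slic`) additionally restrict the search to a local window and work in LAB color; this simplified version is only illustrative:

```python
import numpy as np

def simple_slic(image, n_segments=4, n_iter=5, m=10.0):
    """Simplified SLIC: cluster pixels by joint color/spatial distance,
    starting from centers placed on a regular grid. `m` trades off color
    versus spatial proximity, as in the real algorithm."""
    h, w = image.shape[:2]
    step = int(np.sqrt(h * w / n_segments))
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    centers = np.stack([image[ys.ravel(), xs.ravel()],
                        ys.ravel().astype(float),
                        xs.ravel().astype(float)], axis=1)
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.stack([image.ravel(),
                      yy.ravel().astype(float),
                      xx.ravel().astype(float)], axis=1)
    for _ in range(n_iter):
        # joint distance: squared color difference plus scaled spatial distance
        d = ((feats[:, None, 0] - centers[None, :, 0]) ** 2
             + (m / step) ** 2 * ((feats[:, None, 1] - centers[None, :, 1]) ** 2
                                  + (feats[:, None, 2] - centers[None, :, 2]) ** 2))
        labels = d.argmin(axis=1)
        for k in range(len(centers)):      # recompute cluster centers
            if np.any(labels == k):
                centers[k] = feats[labels == k].mean(axis=0)
    return labels.reshape(h, w)

img = np.zeros((8, 8))
img[:, 4:] = 1.0                           # two flat color regions
labels = simple_slic(img, n_segments=4)
```

The resulting label map assigns each pixel to a superpixel whose boundaries tend to follow the color edge, which is exactly the property the later per-superpixel calculations rely on.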
- performing the foreground-prior saliency calculation on the decomposed image in step S302 to obtain a foreground saliency image includes:
- Step S602, calculating an enclosure value for each superpixel in the decomposed image;
- Step S604, defining a foreground seed set according to the enclosure value of each superpixel, where the foreground seed set includes a strong foreground seed set and a weak foreground seed set;
- Step S606, ranking each image element according to its correlation with the foreground seed set in the initial image to obtain a first ranking result, wherein the initial image is represented by an image matrix composed of image elements;
- Step S608, obtaining the foreground saliency image according to the first ranking result.
- when performing the foreground-prior saliency calculation on the decomposed image in step S302, the calculation may be based on foreground seeds, and enclosure cues may be used to mine foreground information.
- a binary segmentation technique may be used to fully exploit the enclosure cue in the decomposed image, serving both for the initial localization of foreground seeds and for the subsequent calculation of image saliency values.
- the BMS (Boolean Map based Saliency) algorithm may be used to generate an enclosure map, in which pixel values represent the degree of enclosure. The enclosure value of each superpixel is defined by averaging the values of all pixels inside it; in step S602, the enclosure value of each superpixel in the decomposed image is computed in this way and may be denoted S_p(i), where i = 1, 2, ..., N and N is the total number of superpixels.
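Averaging a per-pixel enclosure map over each superpixel, as step S602 describes, can be done with a weighted bincount; the `boundary_map` input stands in for the BMS output, which is not reimplemented here:

```python
import numpy as np

def superpixel_enclosure(boundary_map, labels):
    """Compute one enclosure value S_p(i) per superpixel by averaging the
    per-pixel enclosure map over the pixels of each superpixel."""
    n = labels.max() + 1
    sums = np.bincount(labels.ravel(), weights=boundary_map.ravel(), minlength=n)
    counts = np.bincount(labels.ravel(), minlength=n)
    return sums / counts

bmap = np.array([[1.0, 1.0, 0.0, 0.0],
                 [1.0, 1.0, 0.0, 0.0]])    # stand-in BMS enclosure map
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])          # two superpixels
sp = superpixel_enclosure(bmap, labels)    # S_p(0) = 1.0, S_p(1) = 0.0
```

The bincount formulation avoids a Python loop over superpixels and scales to the thousands of superpixels typical of SLIC output.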
- in step S604, the foreground seed set is defined according to the enclosure value of each superpixel. Two kinds of seed elements are defined, strong foreground seeds and weak foreground seeds; the strong foreground seeds form the strong foreground seed set, and the weak foreground seeds form the weak foreground seed set.
- the probability that a strong foreground seed belongs to the foreground is high, while the probability that a weak foreground seed belongs to the foreground is relatively low.
- the foreground seeds can be selected by the following Equations 1 and 2:
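Equations 1 and 2 themselves are not reproduced in this text, so the mean-based thresholds below are an assumed illustration of the stated idea that highly enclosed elements are more likely to be selected as strong seeds:

```python
import numpy as np

def foreground_seeds(sp, strong_factor=1.5):
    """Split superpixels into strong / weak foreground seed sets from
    their enclosure values S_p. The thresholds (mean(S_p) for weak,
    strong_factor * mean(S_p) for strong) are an assumption, standing in
    for the unreproduced Equations 1 and 2."""
    mean_sp = sp.mean()
    strong = np.where(sp > strong_factor * mean_sp)[0]
    weak = np.where((sp > mean_sp) & (sp <= strong_factor * mean_sp))[0]
    return strong, weak

sp = np.array([0.9, 0.5, 0.1, 0.1])        # enclosure value per superpixel
strong, weak = foreground_seeds(sp)
```

Whatever the exact thresholds, the key property preserved here is that the strong set has higher enclosure (hence higher foreground probability) than the weak set.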
- in step S606, each image element is ranked according to its correlation with the foreground seed set in the initial image to obtain the first ranking result, wherein the initial image is represented by an image matrix composed of image elements. That is,
- for saliency calculation given a set of seeds, a graph-labeling ranking method that exploits the intrinsic manifold structure of the data can be used to rank each image element by its correlation with the given seed set.
- the nodes in the given graph are the superpixels generated by the SLIC algorithm, and the weight of each edge in E is determined by a similarity matrix;
- the parameter α controlling the weight magnitude can be set to 0.3;
- the weight between two nodes can be defined as shown in Equation 4 below:
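The graph ranking described here solves g* = (D − αW)^(−1) y over the superpixel graph (the ranking function appearing later in the description as Equation 3). Equation 4's exact edge-weight form is not shown in this text, so the Gaussian affinity on mean colors used below is an assumption, albeit the usual choice in manifold-ranking saliency; a full implementation would also restrict W to graph neighbors:

```python
import numpy as np

def manifold_rank(colors, seed_idx, weak_idx=(), alpha=0.3, sigma=1.0):
    """Rank elements by relevance to seed queries via g* = (D - aW)^-1 y.
    `colors` are per-superpixel mean colors; the Gaussian affinity and the
    dense (fully connected) W are simplifying assumptions."""
    c = np.asarray(colors, dtype=float)
    diff = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2)
    W = np.exp(-diff / (sigma ** 2))        # assumed affinity (Equation 4 stand-in)
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))              # degree matrix, d_ii = sum_j w_ij
    y = np.zeros(len(c))
    y[list(seed_idx)] = 1.0                 # strong queries
    y[list(weak_idx)] = 0.5                 # weak queries
    return np.linalg.solve(D - alpha * W, y)

# three superpixels: two similar reds (one a strong seed) and one blue
colors = [[0.9, 0.1, 0.1], [0.85, 0.15, 0.1], [0.1, 0.1, 0.9]]
scores = manifold_rank(colors, seed_idx=[0])
```

Elements similar to the seed inherit a high score through the graph, which is exactly how the first ranking result turns seeds into a full saliency map.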
- performing the background-prior saliency calculation on the decomposed image in step S402 to obtain a background saliency image includes:
- Step S702, calculating the Euclidean distance between each feature vector in the initial image and the average feature vector, wherein the initial image is represented by an image matrix composed of image elements, the feature vectors are the feature vectors of the set of image elements located on the boundary, and the average feature vector is the feature vector of the average of all image elements located on the boundary;
- Step S704, defining a background seed set according to the Euclidean distances, where the background seed set includes a strong background seed set and a weak background seed set;
- Step S706, ranking each image element according to its correlation with the background seed set in the initial image to obtain a second ranking result;
- Step S708, obtaining the background saliency image according to the second ranking result.
- when performing the background-prior saliency calculation in step S402, the calculation may be based on background seeds, and the background prior may be extracted from the boundary region.
- concretely, the Euclidean distance between each feature vector in the initial image and the average feature vector may be calculated,
- where the initial image is represented by an image matrix composed of image elements, the feature vectors are those of the set of image elements located on the boundary, and the average feature vector is that of the average of all boundary image elements. Denoting the i-th feature vector by c_i and the average feature vector by c̄, the Euclidean distance between the i-th feature vector and the average feature vector can be expressed as ||c_i − c̄||.
- in step S704, the background seed set is defined according to the Euclidean distances. Two kinds of seed elements are defined, strong background seeds and weak background seeds; the strong background seeds form the strong background seed set, and the weak background seeds form the weak background seed set. The probability that a strong background seed belongs to the background
- is very high, while the probability that a weak background seed belongs to the background is relatively low.
- the background seeds can be selected by the following Equations 5 and 6:
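Equations 5 and 6 are not reproduced in this text, so the quantile threshold below is an assumed illustration of the stated idea (boundary elements close to the boundary average are more likely true background, hence strong seeds):

```python
import numpy as np

def background_seeds(boundary_feats, strong_quantile=0.5):
    """Per steps S702-S704: compute each boundary feature vector's
    Euclidean distance to the mean boundary feature, then split into
    strong / weak background seeds. The quantile threshold stands in for
    the unreproduced Equations 5 and 6."""
    feats = np.asarray(boundary_feats, dtype=float)
    mean_feat = feats.mean(axis=0)                      # average feature vector
    dists = np.linalg.norm(feats - mean_feat, axis=1)   # Euclidean distances
    thresh = np.quantile(dists, strong_quantile)
    strong = np.where(dists <= thresh)[0]               # near the boundary mean
    weak = np.where(dists > thresh)[0]
    return strong, weak

# four boundary superpixels; the last differs strongly (object touching border)
feats = [[0.2, 0.2], [0.25, 0.2], [0.2, 0.25], [0.9, 0.9]]
strong, weak = background_seeds(feats)
```

Note how the outlier boundary element ends up only a weak seed, which addresses the caveat above that elements differing from the border are not always background.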
- in step S104, the foreground saliency image and the background saliency image are fused to obtain the initial saliency image.
- the two saliency images may be merged into one. The fusion may proceed as follows: in the foreground saliency image and the background saliency image respectively, select the image elements whose values are larger than the mean of that saliency image,
- use these image elements as salient elements, and combine them into one set.
- the image elements in the set are then used as seeds for re-ranking, yielding the initial saliency image, which can be denoted S_com.
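The above-mean selection and merging can be sketched directly; the subsequent re-ranking of the combined set (via the graph ranking described earlier) is omitted here:

```python
import numpy as np

def fuse_seed_sets(fg_sal, bg_sal):
    """Fusion step S104 as described above: in each of the two saliency
    images, keep the elements whose value exceeds that image's mean, and
    merge them into a single salient-element set. Re-ranking this set
    would then yield the initial saliency image S_com."""
    fg = np.asarray(fg_sal, dtype=float)
    bg = np.asarray(bg_sal, dtype=float)
    fg_sel = set(np.where(fg > fg.mean())[0])
    bg_sel = set(np.where(bg > bg.mean())[0])
    return sorted(fg_sel | bg_sel)          # combined seed set for re-ranking

fg_sal = [0.9, 0.2, 0.1, 0.2]      # per-superpixel foreground-prior saliency
bg_sal = [0.3, 0.8, 0.1, 0.2]      # per-superpixel background-prior saliency
seeds = fuse_seed_sets(fg_sal, bg_sal)
```

Taking the union means an element strongly supported by either prior survives into the fused seed set, which is what lets the two priors compensate for each other's blind spots.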
- after obtaining the initial saliency image, the method further includes: step S106, adjusting the weights of every two superpixels according to the geodesic distance between them in the initial saliency image, to obtain the final saliency map.
- the weight of a superpixel in the image is sensitive to the geodesic distance, so the initial saliency image can be optimized using geodesic distances.
- the posterior probability can be expressed as S_com(j); therefore, the saliency value of the q-th superpixel is expressed using geodesic distances as shown in the following Equation 8:
- N is the total number of superpixels
- ⁇ qj is the weight based on the geodesic distance between the qth and jth superpixels
- the given graph G is constructed in the same way as in the foreground-prior part
- the geodesic distance d_g(q, j) between the q-th and j-th superpixels can be defined as the accumulation of edge weights along the shortest path between the q-th and j-th superpixels in the graph G
- the accumulated edge-weight value d_g(q, j) is computed as shown in the following Equation 9:
- a_k, ..., a_{k+1} represent the positions of successive pixels along the path on the image
- d_c(a_k, a_{k+1}) represents the Euclidean distance between the two pixels
- σ_c is a deviation of all Euclidean distances d_c.
- the fused image is optimized by a refinement operation based on geodesic distance, so that the salient object is highlighted more uniformly, making the result more accurate and robust.
- FIG. 2 shows an image saliency detecting apparatus according to an embodiment of the present invention.
- the apparatus includes a first calculation module and a fusion module,
- wherein the first calculation module is configured to perform foreground-prior saliency calculation on the initial image to obtain a foreground saliency image and to perform background-prior saliency calculation on the initial image to obtain a background saliency image, and the fusion module is configured to fuse the foreground saliency image and the background saliency image to obtain the initial saliency image.
- the first calculation module performs the foreground-prior saliency calculation on the initial image to obtain the foreground saliency image and the background-prior saliency calculation to obtain the background saliency image; the fusion module fuses the two to obtain the initial saliency image.
- the present invention uses foreground and background priors simultaneously for salient object detection, thereby improving the accuracy of salient object detection and enhancing the robustness of saliency detection.
- the foregoing first calculation module and fusion module correspond to steps S102 to S104 in Embodiment 1; the modules are the same as the corresponding steps in the examples and application scenarios they implement, but are not limited to the content disclosed in Embodiment 1 above.
- the above modules may run as part of the apparatus in a computer system such as a set of computer-executable instructions.
- the apparatus further includes: a first decomposition module configured to perform superpixel decomposition on the initial image to obtain a decomposed image before the first calculation module performs the foreground-prior saliency calculation on the initial image.
- the first calculation module further includes a second calculation module and a third calculation module, wherein the second calculation module is configured to perform foreground-prior saliency calculation on the decomposed image to obtain the foreground saliency image, and the third calculation module is configured to perform background-prior saliency calculation on the decomposed image to obtain the background saliency image.
- the first decomposition module, the second calculation module, and the third calculation module correspond to steps S202, S302, and S402 in Embodiment 1; the modules are the same as the corresponding steps in the examples and application scenarios they implement, but are not limited to the content disclosed in Embodiment 1 above.
- the above modules may be implemented as part of a device in a computer system such as a set of computer executable instructions.
- the first decomposition module includes: a second decomposition module, configured to perform superpixel decomposition on the initial image using a method of simple linear iterative clustering.
- the foregoing second decomposition module corresponds to step S502 in Embodiment 1, and the foregoing modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the contents disclosed in Embodiment 1 above. It should be noted that the above modules may be implemented as part of a device in a computer system such as a set of computer executable instructions.
- the second calculation module includes a fourth calculation module, a first definition module, a first ranking module, and a first generation module, wherein the fourth calculation module is configured to calculate the enclosure value of each superpixel in the decomposed image; the first definition module is configured to define a foreground seed set according to the enclosure value of each superpixel, wherein the foreground seed set includes a strong foreground seed set and a weak foreground seed set; the first ranking module is configured to rank each image element according to its correlation with the foreground seed set in the initial image to obtain a first ranking result, wherein the initial image is represented by an image matrix composed of image elements; and the first generation module is configured to obtain the foreground saliency image according to the first ranking result.
- the foregoing fourth calculation module, first definition module, first ranking module, and first generation module correspond to steps S602 to S608 in Embodiment 1; the modules are the same as the corresponding steps in the examples and application scenarios they implement, but are not limited to the content disclosed in Embodiment 1 above. It should be noted that the above modules may run as part of the apparatus in a computer system such as a set of computer-executable instructions.
- the third calculation module includes a fifth calculation module, a second definition module, a second ranking module, and a second generation module, wherein the fifth calculation module is configured to calculate
- the Euclidean distance between each feature vector in the initial image and the average feature vector, wherein the initial image is represented by an image matrix composed of image elements, the feature vectors are those of the set of image elements located on the boundary, and the average feature vector is that of the average of all boundary image elements; the second definition module is configured to define a background seed set according to the Euclidean distances, wherein the background seed set includes a strong background seed set and a weak background seed set; the second ranking module is configured to rank
- each image element according to its correlation with the background seed set in the initial image to obtain a second ranking result; and the second generation module is configured to obtain the background saliency image according to the second ranking result.
- the foregoing fifth calculation module, second definition module, second ranking module, and second generation module correspond to steps S702 to S708 in Embodiment 1; the modules are the same as the corresponding steps in the examples and application scenarios they implement, but are not limited to the content disclosed in Embodiment 1 above. It should be noted that the above modules may run as part of the apparatus in a computer system such as a set of computer-executable instructions.
- the apparatus further includes an adjustment module configured, after the fusion module obtains the initial saliency image, to adjust the weights of every two superpixels according to the geodesic distance between them in the initial saliency image, to obtain the final saliency map.
- the foregoing adjustment module corresponds to step S106 in Embodiment 1, and the foregoing modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the contents disclosed in Embodiment 1 above. It should be noted that the above modules may be implemented as part of a device in a computer system such as a set of computer executable instructions.
- a product embodiment of a storage medium comprising a stored program, wherein the device in which the storage medium is located is controlled to execute the image saliency detection method when the program is running.
- a product embodiment of a processor for running a program wherein the image saliency detecting method is executed while the program is running.
- a product embodiment of a computer device comprising: a memory, a processor, and a computer program stored on the memory and operable on the processor, the processor performing the image saliency detection method.
- a product embodiment of a terminal, where the terminal includes a first calculation module, a fusion module, and a processor, the first calculation module being configured to perform foreground-prior saliency calculation on the initial image.
- the processor runs a program, wherein the program, when running, performs the image saliency detection method described above on data output from the first calculation module and the fusion module.
- a product embodiment of a terminal, where the terminal includes a first calculation module, a fusion module, and a storage medium, the first calculation module being configured to perform foreground-prior saliency calculation on the initial image.
- the storage medium stores a program, wherein the program, when running, performs the image saliency detection method described above on data output from the first calculation module and the fusion module.
- the disclosed technical contents may be implemented in other manners.
- the device embodiments described above are only schematic.
- the division of units is only a logical functional division; in actual implementation,
- there may be another division manner; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
- the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, unit or module, and may be electrical or otherwise.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
- the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
- the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
- the part of the technical solution of the present invention that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
- a number of instructions are included to cause a computer device (which may be a personal computer, server or network device, etc.) to perform all or part of the steps of the methods described in various embodiments of the present invention.
- the foregoing storage medium includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and the like.
Landscapes
- Engineering & Computer Science (AREA)
- Quality & Reliability (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
An image saliency detection method and apparatus. The method includes: performing foreground-prior saliency calculation on an initial image to obtain a foreground saliency image; performing background-prior saliency calculation on the initial image to obtain a background saliency image (S102); and fusing the foreground saliency image and the background saliency image to obtain an initial saliency image (S104). The method and apparatus solve the technical problem in the prior art that the saliency detection result of an image is not accurate enough.
Description
The present invention relates to the field of image processing, and in particular to an image saliency detection method and apparatus.
When facing a complex scene, human visual attention quickly concentrates on a few salient visual objects and processes those objects preferentially; this process is called visual saliency. Saliency detection exploits this biological mechanism of human vision, using mathematical computation to simulate how the human eye appropriately processes an image so as to obtain the salient objects of a picture. Because salient regions allow the computational resources needed for image analysis and synthesis to be allocated preferentially, detecting the salient regions of an image by computation is of great significance.
The task of saliency detection is to determine the most important and informative part of a scene. It can be applied to numerous computer vision applications, including image retrieval, image compression, content-aware image editing, and object recognition. Saliency detection methods can generally be divided into bottom-up and top-down models: bottom-up methods are data-driven and require no pre-training, while top-down methods are task-driven and are usually pre-trained on annotated data.
Unlike eye-fixation prediction models for natural images, salient object detection models aim to highlight salient objects with clear boundaries, which is useful for many high-level visual tasks. Applying a foreground prior can explicitly extract the salient objects in an image; this prior has been widely used in research over the past few years, but relying on it alone cannot highlight the entire salient object. Another effective salient object detection model uses the background prior in an image to implicitly detect salient objects from it. By assuming that the narrow border of most of the image is a background region, the background prior can be used to compute a saliency map. This too causes problems, however, because image elements that differ from the border region do not always belong to salient objects.
Overall, existing image salient object detection methods are not accurate enough when detecting salient objects and are not sufficiently robust, easily producing false detections and missed detections, so it is difficult to obtain an accurate image saliency detection result. This not only causes the salient object itself to be misdetected, but also introduces errors into applications that use the saliency detection result.
No effective solution has yet been proposed for the above problem in the prior art that the saliency detection result of an image is not accurate enough.
Summary of the Invention
Embodiments of the present invention provide an image saliency detection method and apparatus, so as to at least solve the technical problem in the prior art that the saliency detection result of an image is not accurate enough.
According to one aspect of the embodiments of the present invention, an image saliency detection method is provided, including: performing foreground-prior saliency calculation on an initial image to obtain a foreground saliency image; performing background-prior saliency calculation on the initial image to obtain a background saliency image; and fusing the foreground saliency image and the background saliency image to obtain an initial saliency image.
According to another aspect of the embodiments of the present invention, an image saliency detection apparatus is further provided, including: a first calculation module configured to perform foreground-prior saliency calculation on an initial image to obtain a foreground saliency image and to perform background-prior saliency calculation on the initial image to obtain a background saliency image; and a fusion module configured to fuse the foreground saliency image and the background saliency image to obtain an initial saliency image.
According to another aspect of the embodiments of the present invention, a storage medium is further provided, the storage medium including a stored program, wherein when the program runs, the device on which the storage medium resides is controlled to execute the above image saliency detection method.
According to another aspect of the embodiments of the present invention, a computer device is further provided, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the above image saliency detection method when executing the program.
In the embodiments of the present invention, a foreground saliency image is obtained by performing foreground-prior saliency calculation on the initial image; a background saliency image is obtained by performing background-prior saliency calculation on the initial image; and the foreground saliency image and the background saliency image are fused to obtain an initial saliency image. The present invention uses the foreground prior and the background prior simultaneously for salient object detection, thereby improving the accuracy of salient object detection, enhancing the robustness of saliency detection, making the salient regions of an image appear more precisely, providing accurate and useful information for later applications such as object recognition and classification, and being applicable to more complex scenes with a wider range of use, thus solving the technical problem in the prior art that the saliency detection result of an image is not accurate enough.
The accompanying drawings described here are provided for further understanding of the present invention and constitute a part of this application. The illustrative embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
FIG. 1 is a schematic diagram of an image saliency detection method according to an embodiment of the present invention; and
FIG. 2 is a schematic diagram of an image saliency detection apparatus according to an embodiment of the present invention.
It should be noted that the embodiments in this application and the features in the embodiments may be combined with one another without conflict. The present invention will be described in detail below with reference to the drawings and in combination with the embodiments.
To enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", and the like in the specification, claims, and above drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present invention described here can be implemented in orders other than those illustrated or described here. In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units that are not explicitly listed or that are inherent to the process, method, product, or device.
Embodiment 1
According to an embodiment of the present invention, a method embodiment of an image saliency detection method is provided. It should be noted that the steps shown in the flowchart of the drawings may be executed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from that here.
FIG. 1 shows an image saliency detection method according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
Step S102, performing foreground-prior saliency calculation on an initial image to obtain a foreground saliency image, and performing background-prior saliency calculation on the initial image to obtain a background saliency image;
Step S104, fusing the foreground saliency image and the background saliency image to obtain an initial saliency image.
Specifically, two saliency images, one based on the background prior and one based on the foreground prior, are obtained by computing saliency values, and the two are then fused. The image salient object detection algorithm used in this embodiment, based on fusing the foreground prior and the background prior, can detect salient objects more accurately and more robustly. It should be noted that the foreground-prior saliency calculation and the background-prior saliency calculation on the initial image in step S102 may be performed synchronously or asynchronously; when performed asynchronously, the order is not limited.
In the embodiments of the present invention, a foreground saliency image is obtained by performing foreground-prior saliency calculation on the initial image; a background saliency image is obtained by performing background-prior saliency calculation on the initial image; and the foreground saliency image and the background saliency image are fused to obtain an initial saliency image. The present invention uses the foreground prior and the background prior simultaneously for salient object detection, thereby improving the accuracy of salient object detection, enhancing the robustness of saliency detection, making the salient regions of an image appear more precisely, providing accurate and useful information for later applications such as object recognition and classification, and being applicable to more complex scenes with a wider range of use, thus solving the technical problem in the prior art that the saliency detection result of an image is not accurate enough.
In an optional embodiment, before the foreground-prior saliency calculation is performed on the initial image in step S102, the method further includes: step S202, performing superpixel decomposition on the initial image to obtain a decomposed image. Performing the foreground-prior saliency calculation on the initial image in step S102 to obtain a foreground saliency image includes: step S302, performing foreground-prior saliency calculation on the decomposed image to obtain the foreground saliency image. Performing the background-prior saliency calculation on the initial image in step S102 to obtain a background saliency image includes: step S402, performing background-prior saliency calculation on the decomposed image to obtain the background saliency image.
Specifically, in order to better exploit structural information and abstract away small noise, the initial image may be decomposed into a set of superpixels before the foreground-prior and background-prior saliency calculations are performed; the subsequent foreground-prior and background-prior saliency calculations are both carried out at the superpixel level.
In an optional embodiment, performing superpixel decomposition on the initial image in step S202 includes: step S502, performing superpixel decomposition on the initial image using simple linear iterative clustering.
Specifically, when performing superpixel decomposition on the initial image, the SLIC (simple linear iterative clustering) algorithm may be used to decompose the initial image into superpixels.
In an optional embodiment, performing the foreground-prior saliency calculation on the decomposed image in step S302 to obtain a foreground saliency image includes:
Step S602, calculating the enclosure value of each superpixel in the decomposed image;
Step S604, defining a foreground seed set according to the enclosure value of each superpixel, wherein the foreground seed set includes a strong foreground seed set and a weak foreground seed set;
Step S606, ranking each image element according to its correlation with the foreground seed set in the initial image to obtain a first ranking result, wherein the initial image is represented by an image matrix composed of image elements;
Step S608, obtaining the foreground saliency image according to the first ranking result.
Specifically, the foreground-prior saliency computation on the decomposed image in step S302 may proceed from foreground seeds, using the enclosure (surroundedness) cue to mine foreground information. A binary segmentation technique can fully exploit the enclosure cue in the decomposed image, using it both for the initial localization of foreground seeds and for the subsequent computation of image saliency values. The enclosure map may be generated with the BMS (Boolean Map based Saliency) algorithm; pixel values in this map express the degree of enclosure. The enclosure value of each superpixel is defined by averaging the values of all pixels inside it, so in step S602 the enclosure value of each superpixel in the decomposed image is obtained by computing the mean of all of its interior pixel values. The superpixel enclosure values are denoted S_p(i), where i = 1, 2, ..., N and N is the total number of superpixels.
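The per-superpixel averaging in step S602 can be sketched as follows; the per-pixel enclosure map is taken as given (producing it with a BMS-style Boolean-map model is out of scope here):

```python
import numpy as np

def superpixel_enclosure_values(enclosure_map, labels):
    """Average the per-pixel enclosure values inside each superpixel.

    enclosure_map : (H, W) float array, e.g. produced by a BMS-style model.
    labels        : (H, W) int array of superpixel ids 0..N-1.
    Returns S_p, a length-N vector with S_p[i] = mean enclosure of superpixel i.
    """
    n = labels.max() + 1
    sums = np.bincount(labels.ravel(), weights=enclosure_map.ravel(), minlength=n)
    counts = np.bincount(labels.ravel(), minlength=n)
    return sums / counts

# Toy example: 4 superpixels laid out as 2x2 blocks of a 4x4 image.
labels = np.repeat(np.repeat(np.arange(4).reshape(2, 2), 2, axis=0), 2, axis=1)
emap = np.zeros((4, 4))
emap[:2, :2] = 1.0                       # superpixel 0 is fully enclosed
S_p = superpixel_enclosure_values(emap, labels)
print(S_p)  # [1. 0. 0. 0.]
```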
When defining the foreground seed set from each superpixel's enclosure value in step S604, two kinds of seed elements are defined: strong foreground seeds, which form the strong foreground seed set, and weak foreground seeds, which form the weak foreground seed set. A strong foreground seed has a high probability of belonging to the foreground, while a weak foreground seed's probability is comparatively low. Foreground seeds are selected by Equations 1 and 2, whose left-hand sides denote the strong foreground seed set and the weak foreground seed set, respectively; in these equations, i indexes the i-th superpixel, mean() is the averaging function, S_p(i) is the enclosure value of the i-th superpixel, and S_p is the enclosure map of the whole decomposed image. As Equations 1 and 2 show, highly enclosed elements are more likely to be selected as strong foreground seeds.
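A minimal sketch of this seed selection follows. The exact thresholds of Equations 1 and 2 are not reproduced in the text, so the use of `mean(S_p)` as the strong-seed cutoff and the `weak_ratio` factor below are assumptions:

```python
import numpy as np

def select_foreground_seeds(S_p, weak_ratio=0.5):
    """Split superpixels into strong / weak foreground seed sets by
    thresholding enclosure values against the mean enclosure, in the
    spirit of Equations 1-2.

    weak_ratio is a hypothetical parameter: elements between
    weak_ratio * mean(S_p) and mean(S_p) are treated as weak seeds.
    """
    m = S_p.mean()
    strong = np.where(S_p > m)[0]                          # highly enclosed
    weak = np.where((S_p > weak_ratio * m) & (S_p <= m))[0]
    return strong, weak

S_p = np.array([0.9, 0.8, 0.3, 0.1, 0.05])
strong, weak = select_foreground_seeds(S_p)
print(strong, weak)
```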
In step S606, every image element of the initial image is ranked by its relevance to the foreground seed set to obtain the first ranking result, the initial image being represented by an image matrix made up of image elements. That is, for saliency computation from given seeds, a graph-labelling ranking method that exploits the intrinsic manifold structure of the data can first rank each image element by its relevance to the given seed set. Concretely, a graph representing the whole decomposed image is constructed: for a given graph G = (V, E), the nodes V are the superpixels generated by the SLIC algorithm and the weight of each edge in E is determined by the affinity matrix W = [w_ij]_{n×n}. Defining the diagonal degree matrix D = diag{d_11, ..., d_nn} with d_ii = Σ_j w_ij, the ranking function is given by Equation 3:

g* = (D − αW)^(−1) y    (3)

where g* is the result vector storing each element's ranking score, y = [y_1, y_2, ..., y_n]^T is the indicator vector of the seed queries, and α is a parameter controlling the magnitude of the weights; a concrete choice is α = 0.3. The weight between two nodes is given by Equation 4:

w_ij = exp(−‖c_i − c_j‖ / σ²)    (4)

where c_i and c_j are the mean values, in the CIE LAB color space, of the superpixels corresponding to the two nodes, and σ is a constant controlling the strength of the weights. Each entry y_i of the indicator vector is defined by the strength of the query: y_i = 1 if i is a strong query, y_i = 0.5 if i is a weak query, and y_i = 0 otherwise. For the ranking based on foreground seeds, combining Equations 1, 2, 3, and 4, Equation 3 ranks all image elements of the initial image, finally yielding the foreground-prior saliency map, i.e., the foreground saliency image.
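The ranking of Equations 3 and 4 can be sketched with NumPy as below. The fully connected affinity over mean colors is a simplification of the superpixel graph described above, and the sample `alpha` and `sigma` values are illustrative:

```python
import numpy as np

def manifold_rank(colors, seeds_strong, seeds_weak, alpha=0.3, sigma=0.1):
    """Graph-based ranking g* = (D - alpha*W)^{-1} y (Equation 3).

    colors : (n, 3) mean CIE LAB color of each superpixel node.
    """
    n = len(colors)
    dist = np.linalg.norm(colors[:, None, :] - colors[None, :, :], axis=2)
    W = np.exp(-dist / sigma**2)            # Equation 4 style affinity
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))              # degree matrix, d_ii = sum_j w_ij
    y = np.zeros(n)
    y[seeds_strong] = 1.0                   # strong queries
    y[seeds_weak] = 0.5                     # weak queries
    g = np.linalg.solve(D - alpha * W, y)   # result vector g*
    return g

colors = np.array([[0.0, 0, 0], [0.05, 0, 0], [1.0, 0, 0], [0.95, 0, 0]])
g = manifold_rank(colors, seeds_strong=[0], seeds_weak=[1])
print(g)  # nodes similar to the seeds rank higher than the distant ones
```

Solving the linear system with `np.linalg.solve` avoids forming the explicit inverse of (D − αW).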
In an optional embodiment, performing the background-prior saliency computation on the decomposed image in step S402 to obtain the background saliency image includes:
Step S702: computing the Euclidean distance between each feature vector of the initial image and the mean feature vector, where the initial image is represented by an image matrix made up of image elements, the feature vectors are those of the image elements lying on the boundary, and the mean feature vector is the feature vector of the average of all boundary image elements;
Step S704: defining a background seed set from the Euclidean distances, where the background seed set includes a strong background seed set and a weak background seed set;
Step S706: ranking every image element of the initial image by its relevance to the background seed set to obtain a second ranking result;
Step S708: obtaining the background saliency image from the second ranking result.
Specifically, the background-prior saliency computation on the decomposed image in step S402 may proceed from background seeds, extracting the background prior from the boundary region. Concretely, the Euclidean distance between each feature vector of the initial image and the mean feature vector is computed, where the initial image is represented by an image matrix made up of image elements, the feature vectors are those of the image elements lying on the boundary, and the mean feature vector is the feature vector of the average of all boundary image elements. Denoting the i-th feature vector by c_i and the mean feature vector by c̄, the Euclidean distance between them is ‖c_i − c̄‖.
When defining the background seed set from the Euclidean distances in step S704, two kinds of seed elements are defined: strong background seeds, which form the strong background seed set, and weak background seeds, which form the weak background seed set. A strong background seed has a high probability of belonging to the background, while a weak background seed's probability is comparatively low. Background seeds are selected by Equations 5 and 6, whose left-hand sides denote the strong background seed set and the weak background seed set, respectively. Combining with Equation 3 above, the indicator vector for the background seeds takes the value y_i = 1 if i belongs to the strong background seed set, y_i = 0.5 if i belongs to the weak background seed set, and 0 otherwise. Equation 3 then computes each image element's relevance to the background seeds: the entries of the result vector g* express each node's relevance to the background queries, and the complement of that relevance is the saliency measure. Using the background-seed saliency value of Equation 7, the background-prior saliency map, i.e., the background saliency image, is obtained:

S(i) = 1 − g*(i),  i = 1, 2, …, N.    (7)
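A minimal sketch of Equation 7 follows, assuming the ranking scores are first normalized to [0, 1] so that the complement stays in a valid saliency range (the normalization step is an assumption made here, not stated in the text):

```python
import numpy as np

def background_prior_saliency(g):
    """Equation 7: saliency is the complement of relevance to the
    background queries, S(i) = 1 - g*(i).

    g is min-max normalized to [0, 1] first (assumed preprocessing).
    """
    g = np.asarray(g, dtype=float)
    g = (g - g.min()) / (g.max() - g.min() + 1e-12)
    return 1.0 - g

g_star = np.array([0.9, 0.7, 0.1, 0.0])  # relevance to background seeds
S = background_prior_saliency(g_star)
print(S)  # elements least relevant to the background come out most salient
```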
In an optional embodiment, step S104 fuses the foreground saliency image and the background saliency image to obtain the initial saliency image.
Specifically, once the foreground saliency image and the background saliency image have been obtained, the two saliency images can be fused into one. One fusion approach is: in the foreground saliency image and the background saliency image respectively, select as salient elements the image elements whose value exceeds that map's mean, combine them into a single set, and use the image elements of this set as new ranking seeds to obtain the initial saliency image, which may be denoted S_com.
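The fusion step can be sketched as below: the above-mean elements of both maps are pooled into one combined seed set, following the description above, while the final re-ranking (Equation 3) on those seeds that yields S_com is left out:

```python
import numpy as np

def fuse_saliency_seeds(S_fg, S_bg):
    """Fusion sketch: in each of the two saliency maps, pick the elements
    whose value exceeds that map's own mean, and pool them into one
    combined seed set.

    Returns the indices to be used as new ranking seeds; running the
    ranking of Equation 3 on these seeds would produce S_com.
    """
    fg_seeds = set(np.where(S_fg > S_fg.mean())[0])
    bg_seeds = set(np.where(S_bg > S_bg.mean())[0])
    return sorted(fg_seeds | bg_seeds)

S_fg = np.array([0.9, 0.2, 0.1, 0.8])   # foreground-prior saliency
S_bg = np.array([0.7, 0.6, 0.1, 0.2])   # background-prior saliency
seeds = fuse_saliency_seeds(S_fg, S_bg)
print(seeds)  # union of above-mean elements from both maps
```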
In an optional embodiment, after the initial saliency image is obtained in step S104, the method further includes: step S106, adjusting the weight between every two superpixels of the initial saliency image according to the geodesic distance between them, to obtain the final saliency map.
Specifically, the weights of the superpixels in an image are sensitive to geodesic distance, so geodesic distance can be used to refine the initial saliency image. Concretely, for the j-th superpixel the posterior probability may be written S_com(j), and the saliency value of the q-th superpixel expressed with geodesic distances is given by Equation 8:

S(q) = Σ_{j=1}^{N} δ_qj · S_com(j)    (8)

where N is the total number of superpixels and δ_qj is a weight based on the geodesic distance between the q-th and j-th superpixels. On the graph G constructed in the foreground-prior part, the geodesic distance d_g(q, j) between the q-th and j-th superpixels can be defined as the accumulated edge weight along the shortest path between them on the graph, as in Equation 9:

d_g(q, j) = min over paths a_1 = q, ..., a_K = j of Σ_k d_c(a_k, a_{k+1})    (9)

where a_k and a_{k+1} denote the positions of successive points along the path on the image and d_c(a_k, a_{k+1}) is the Euclidean distance between two such points. Equation 9 yields the geodesic distance between any two superpixels. The weight δ_qj is defined as in Equation 10:

δ_qj = exp(−d_g²(q, j) / (2σ_c²))    (10)

where σ_c is the deviation of all the Euclidean distances d_c.
By refining the fused image with this geodesic-distance operation, the present embodiment highlights the salient object more uniformly, making the result more accurate and more robust.
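The refinement of Equations 8–10 can be sketched in NumPy as follows; the per-row normalization of the weights is an assumption made here, and the small Floyd–Warshall routine stands in for any shortest-path algorithm on the superpixel graph:

```python
import numpy as np

def geodesic_distances(graph):
    """All-pairs accumulated edge weights along shortest paths
    (Equation 9) via Floyd-Warshall; zero entries mean 'no edge'."""
    d = np.where(graph > 0, graph, np.inf)
    np.fill_diagonal(d, 0.0)
    for k in range(len(d)):
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    return d

def geodesic_refine(S_com, color_dist, adjacency):
    """Refinement sketch for Equations 8-10.

    color_dist : (N, N) Euclidean color distances d_c between superpixels.
    adjacency  : (N, N) bool, True where two superpixels are neighbors.
    """
    graph = np.where(adjacency, color_dist, 0.0)
    d_g = geodesic_distances(graph)
    sigma_c = color_dist[adjacency].std() + 1e-12   # deviation of all d_c
    delta = np.exp(-d_g**2 / (2 * sigma_c**2))      # Equation 10
    delta /= delta.sum(axis=1, keepdims=True)       # assumed normalization
    return delta @ S_com                            # Equation 8

# Toy chain 0-1-2: superpixels 0 and 1 are similar in color, 2 is far away.
cd = np.array([[0.0, 0.1, 1.0], [0.1, 0.0, 0.9], [1.0, 0.9, 0.0]])
adj = np.array([[False, True, False],
                [True, False, True],
                [False, True, False]])
S_ref = geodesic_refine(np.array([1.0, 0.8, 0.1]), cd, adj)
print(S_ref)  # saliency is smoothed within geodesically close superpixels
```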
Embodiment 2
According to an embodiment of the present invention, a product embodiment of an image saliency detection apparatus is provided. Fig. 2 shows an image saliency detection apparatus according to an embodiment of the present invention. As shown in Fig. 2, the apparatus includes a first computation module and a fusion module, where the first computation module is configured to perform a foreground-prior saliency computation on an initial image to obtain a foreground saliency image and a background-prior saliency computation on the initial image to obtain a background saliency image, and the fusion module is configured to fuse the foreground saliency image and the background saliency image to obtain an initial saliency image.
In this embodiment of the invention, the first computation module performs a foreground-prior saliency computation on the initial image to obtain a foreground saliency image and a background-prior saliency computation on the initial image to obtain a background saliency image, and the fusion module fuses the foreground saliency image and the background saliency image to obtain an initial saliency image. Since the invention exploits foreground and background priors simultaneously for salient object detection, it improves detection accuracy and robustness, makes the salient regions of an image stand out more precisely, supplies accurate and useful information to downstream applications such as object recognition and classification, and applies to more complex scenes and a wider range of uses, thereby solving the technical problem in the prior art that image saliency detection results are insufficiently accurate.
It should be noted here that the first computation module and the fusion module above correspond to steps S102 through S104 of Embodiment 1; the examples and application scenarios realized by these modules and their corresponding steps are the same, but are not limited to the disclosure of Embodiment 1. It should be noted that these modules, as parts of the apparatus, may run in a computer system such as a set of computer-executable instructions.
In an optional embodiment, the apparatus further includes a first decomposition module, configured to decompose the initial image into superpixels to obtain a decomposed image before the first computation module performs the foreground-prior saliency computation on the initial image. The first computation module further includes a second computation module and a third computation module, where the second computation module is configured to perform the foreground-prior saliency computation on the decomposed image to obtain the foreground saliency image, and the third computation module is configured to perform the background-prior saliency computation on the decomposed image to obtain the background saliency image.
It should be noted here that the first decomposition module, the second computation module, and the third computation module above correspond to steps S202, S302, and S402 of Embodiment 1; the examples and application scenarios realized by these modules and their corresponding steps are the same, but are not limited to the disclosure of Embodiment 1. These modules, as parts of the apparatus, may run in a computer system such as a set of computer-executable instructions.
In an optional embodiment, the first decomposition module includes a second decomposition module configured to decompose the initial image into superpixels using simple linear iterative clustering.
It should be noted here that the second decomposition module above corresponds to step S502 of Embodiment 1; the example and application scenario realized by this module and its corresponding step are the same, but are not limited to the disclosure of Embodiment 1. This module, as a part of the apparatus, may run in a computer system such as a set of computer-executable instructions.
In an optional embodiment, the second computation module includes a fourth computation module, a first definition module, a first ranking module, and a first generation module, where the fourth computation module is configured to compute the enclosure value of each superpixel in the decomposed image; the first definition module is configured to define a foreground seed set from the enclosure value of each superpixel, the foreground seed set including a strong foreground seed set and a weak foreground seed set; the first ranking module is configured to rank every image element of the initial image by its relevance to the foreground seed set to obtain a first ranking result, the initial image being represented by an image matrix made up of image elements; and the first generation module is configured to obtain the foreground saliency image from the first ranking result.
It should be noted here that the fourth computation module, the first definition module, the first ranking module, and the first generation module above correspond to steps S602 through S608 of Embodiment 1; the examples and application scenarios realized by these modules and their corresponding steps are the same, but are not limited to the disclosure of Embodiment 1. These modules, as parts of the apparatus, may run in a computer system such as a set of computer-executable instructions.
In an optional embodiment, the third computation module includes a fifth computation module, a second definition module, a second ranking module, and a second generation module, where the fifth computation module is configured to compute the Euclidean distance between each feature vector of the initial image and the mean feature vector, the initial image being represented by an image matrix made up of image elements, the feature vectors being those of the image elements lying on the boundary, and the mean feature vector being the feature vector of the average of all boundary image elements; the second definition module is configured to define a background seed set from the Euclidean distances, the background seed set including a strong background seed set and a weak background seed set; the second ranking module is configured to rank every image element of the initial image by its relevance to the background seed set to obtain a second ranking result; and the second generation module is configured to obtain the background saliency image from the second ranking result.
It should be noted here that the fifth computation module, the second definition module, the second ranking module, and the second generation module above correspond to steps S702 through S708 of Embodiment 1; the examples and application scenarios realized by these modules and their corresponding steps are the same, but are not limited to the disclosure of Embodiment 1. These modules, as parts of the apparatus, may run in a computer system such as a set of computer-executable instructions.
In an optional embodiment, the apparatus further includes an adjustment module configured, after the fusion module obtains the initial saliency image, to adjust the weight between every two superpixels of the initial saliency image according to the geodesic distance between them, obtaining the final saliency map.
It should be noted here that the adjustment module above corresponds to step S106 of Embodiment 1; the example and application scenario realized by this module and its corresponding step are the same, but are not limited to the disclosure of Embodiment 1. This module, as a part of the apparatus, may run in a computer system such as a set of computer-executable instructions.
Embodiment 3
According to an embodiment of the present invention, a product embodiment of a storage medium is provided. The storage medium includes a stored program, where, when the program runs, the device on which the storage medium resides is controlled to execute the image saliency detection method described above.
Embodiment 4
According to an embodiment of the present invention, a product embodiment of a processor is provided. The processor is configured to run a program, where the program, when running, executes the image saliency detection method described above.
Embodiment 5
According to an embodiment of the present invention, a product embodiment of a computer device is provided. The computer device includes a memory, a processor, and a computer program stored in the memory and runnable on the processor, the processor executing the image saliency detection method described above.
Embodiment 6
According to an embodiment of the present invention, a product embodiment of a terminal is provided. The terminal includes a first computation module, a fusion module, and a processor, where the first computation module is configured to perform a foreground-prior saliency computation on an initial image to obtain a foreground saliency image and a background-prior saliency computation on the initial image to obtain a background saliency image; the fusion module is configured to fuse the foreground saliency image and the background saliency image to obtain an initial saliency image; and the processor runs a program that, when running, executes the image saliency detection method described above on the data output by the first computation module and the fusion module.
Embodiment 7
According to an embodiment of the present invention, a product embodiment of a terminal is provided. The terminal includes a first computation module, a fusion module, and a storage medium, where the first computation module is configured to perform a foreground-prior saliency computation on an initial image to obtain a foreground saliency image and a background-prior saliency computation on the initial image to obtain a background saliency image; the fusion module is configured to fuse the foreground saliency image and the background saliency image to obtain an initial saliency image; and the storage medium is configured to store a program that, when running, executes the image saliency detection method described above on the data output by the first computation module and the fusion module.
The serial numbers of the above embodiments of the present invention are for description only and do not imply that any embodiment is superior or inferior to another.
In the above embodiments of the present invention, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, reference may be made to the relevant descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units may be a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, units, or modules, and may be electrical or take other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If implemented as a software functional unit and sold or used as an independent product, the integrated unit may be stored in a computer-readable storage medium. On this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied as a software product stored in a storage medium and including a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The above are only preferred implementations of the present invention. It should be pointed out that a person of ordinary skill in the art may make several improvements and refinements without departing from the principles of the invention, and these improvements and refinements shall also be regarded as falling within the scope of protection of the present invention.
Claims (10)
- An image saliency detection method, comprising: performing a foreground-prior saliency computation on an initial image to obtain a foreground saliency image; performing a background-prior saliency computation on the initial image to obtain a background saliency image; and fusing the foreground saliency image and the background saliency image to obtain an initial saliency image.
- The method according to claim 1, wherein before the foreground-prior saliency computation on the initial image, the method further comprises: decomposing the initial image into superpixels to obtain a decomposed image; performing the foreground-prior saliency computation on the initial image to obtain the foreground saliency image comprises: performing the foreground-prior saliency computation on the decomposed image to obtain the foreground saliency image; and performing the background-prior saliency computation on the initial image to obtain the background saliency image comprises: performing the background-prior saliency computation on the decomposed image to obtain the background saliency image.
- The method according to claim 2, wherein decomposing the initial image into superpixels comprises: decomposing the initial image into superpixels using simple linear iterative clustering.
- The method according to claim 2, wherein performing the foreground-prior saliency computation on the decomposed image to obtain the foreground saliency image comprises: computing the enclosure value of each superpixel in the decomposed image; defining a foreground seed set from the enclosure value of each superpixel, the foreground seed set comprising a strong foreground seed set and a weak foreground seed set; ranking every image element of the initial image by its relevance to the foreground seed set to obtain a first ranking result, the initial image being represented by an image matrix made up of the image elements; and obtaining the foreground saliency image from the first ranking result.
- The method according to claim 2, wherein performing the background-prior saliency computation on the decomposed image to obtain the background saliency image comprises: computing the Euclidean distance between each feature vector of the initial image and a mean feature vector, the initial image being represented by an image matrix made up of image elements, the feature vectors being those of the image elements lying on the boundary, and the mean feature vector being the feature vector of the average of all boundary image elements; defining a background seed set from the Euclidean distances, the background seed set comprising a strong background seed set and a weak background seed set; ranking every image element of the initial image by its relevance to the background seed set to obtain a second ranking result; and obtaining the background saliency image from the second ranking result.
- The method according to any one of claims 1-5, wherein after the initial saliency image is obtained, the method further comprises: adjusting the weight between every two superpixels of the initial saliency image according to the geodesic distance between the two superpixels, to obtain a final saliency map.
- An image saliency detection apparatus, comprising: a first computation module configured to perform a foreground-prior saliency computation on an initial image to obtain a foreground saliency image, and to perform a background-prior saliency computation on the initial image to obtain a background saliency image; and a fusion module configured to fuse the foreground saliency image and the background saliency image to obtain an initial saliency image.
- The apparatus according to claim 7, wherein the apparatus further comprises: an adjustment module configured, after the fusion module obtains the initial saliency image, to adjust the weight between every two superpixels of the initial saliency image according to the geodesic distance between the two superpixels, to obtain a final saliency map.
- A storage medium, comprising a stored program, wherein, when the program runs, a device on which the storage medium resides is controlled to execute the image saliency detection method according to any one of claims 1 to 6.
- A computer device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the program, implements the image saliency detection method according to any one of claims 1 to 6.
Applications Claiming Priority (2)
- CN201711454483.2, priority date 2017-12-28
- CN201711454483.2A (granted as CN108198172B), filed 2017-12-28: Image saliency detection method and apparatus

Publications (1)
- WO2019128460A1, published 2019-07-04

Family ID: 62585223

Family Applications (1)
- PCT/CN2018/113429, filed 2018-11-01: WO2019128460A1, Image saliency detection method and apparatus
Citations (4)
- US20110229025A1 (Qi Zhao), priority 2010-02-10, published 2011-09-22: Methods and systems for generating saliency models through linear and/or nonlinear integration
- CN103914834A (Shanghai Jiao Tong University), priority 2014-03-17, published 2014-07-09: Salient object detection method based on foreground prior and background prior
- CN106056590A (Chongqing University), priority 2016-05-26, published 2016-10-26: Saliency detection method based on Manifold Ranking combining foreground and background features
- CN108198172A (Peking University Shenzhen Graduate School), priority 2017-12-28, published 2018-06-22: Image saliency detection method and apparatus
Also Published As
- CN108198172B, published 2022-01-28
- CN108198172A, published 2018-06-22
Legal Events
- 121: EPO informed by WIPO that EP was designated in this application (Ref document: 18897778; Country: EP; Kind code: A1)
- NENP: Non-entry into the national phase (Country: DE)
- 122: PCT application non-entry into European phase (Ref document: 18897778; Country: EP; Kind code: A1)