CN108596893B - An image processing method and system - Google Patents
- Publication number
- CN108596893B (application CN201810372478.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- processing
- detection
- bmbd
- fmbd
- Prior art date
- 2018-04-24
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T—Image data processing or generation, in general (G: Physics; G06: Computing, calculating or counting)
- G06T 7/0002—Image analysis; inspection of images, e.g. flaw detection
- G06T 7/11—Segmentation; edge detection; region-based segmentation
- G06T 7/136—Segmentation; edge detection involving thresholding
- G06T 2207/10004—Image acquisition modality; still image; photographic image
- G06T 2207/20104—Special algorithmic details; interactive definition of region of interest [ROI]
- G06T 2207/20221—Special algorithmic details; image fusion; image merging
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an image processing method and system. The method is applicable to an image processing device that includes an image acquisition module, an image preprocessing module, an image processing module, and an image segmentation module. The method includes the following steps: S1, image acquisition, in which images are captured by the image acquisition module; S2, image preprocessing, in which the captured images are denoised, contrast-enhanced, and processed with image morphological operations, completing the early-stage processing of the image data and providing high-quality image data for the subsequent main processing; S3, image processing, including salient-object detection as well as smoothing and morphological operations; S4, image segmentation, in which the final image from S3 is segmented by adaptive thresholding to obtain the region of interest. The method and system of the invention can display detection and segmentation results simultaneously, provide the user with real-time detection and segmentation feedback, improve the user experience, and improve operating efficiency.
Description
Technical Field
The present invention relates to an image processing method and system, and in particular to a method and system for fast detection of salient objects in still images and video sequences.
Background Art
Image processing devices and methods are widely used in production, management, military, medical, and many other fields. Image processing technology, and in particular its detection and segmentation techniques, makes it easier to obtain useful information in target detection and segmentation applications and thus to judge a situation more accurately. The wide application of image processing technology makes life more convenient and saves cost and resources.
The target detection and segmentation algorithm proposed by the present invention can quickly detect and segment salient objects in a captured input image and quickly obtain the target regions that users are interested in. In many scenarios, for example intelligent video surveillance, an observer needs to extract the region of a target person or a key landmark quickly, and this algorithm can extract the relevant regions rapidly. Likewise, in extracting useful information from big data, how to obtain the most relevant information quickly is a question many data practitioners keep asking; a model built on the visual attention mechanism, such as the one described here, can retrieve salient information from a database according to the principle of visual saliency and thus extract useful information quickly. Beyond these two areas, such a general-purpose salient object detection algorithm is also of considerable value in fields such as pathology detection, data compression, and automatic focusing in photography.
Existing similar technologies have the following shortcomings:
First, when the image texture is complex and the contrast is low, existing detection algorithms based on image contrast cannot provide good region information for the target, so accurate target information cannot be obtained.
Second, the model currently recognized as having the best detection performance, the minimum barrier distance (MBD) model, has difficulty producing a good detection result when the target to be detected touches the image boundary, owing to the design of the algorithm itself.
Third, the currently popular deep learning models rarely transfer across scenes: when the scene changes, a new model has to be trained on a different data set because the targets differ, and deep-learning-based models require large amounts of data and time.
These shortcomings are points that existing saliency detection models find hard to address simultaneously, and they are problems the community is actively working on. If they cannot be solved to the greatest possible extent, it will be difficult to provide researchers or consumers with accurate and useful target information.
Among published papers and deployed applications, some approaches directly use image color contrast, defining the saliency (the distinguishability to human vision) of a pixel or region from the color differences between pixels or between regions; representative examples are Mingming Cheng's HC and RC algorithms. However, the low-contrast problem raised above remains hard to solve, and the detection and segmentation results are not convincing. Many researchers instead consider the global topological information of the image and use the correlations between pixels and regions to define saliency, which improves the results considerably; an example is Jianming Zhang's detection algorithm based on scan traversal of the minimum barrier distance, which achieves good results on the major data sets. Its remaining problems are still low contrast and targets touching the image boundary; in particular, when the detected target touches the boundary, the performance of the algorithm drops sharply. Other researchers working on interactive segmentation have proposed adding manual annotations (foreground and background information) to images in the form of trimaps and strokes and then performing pixel-based target detection and segmentation according to the regions of interest supplied by the user; the results are acceptable, but manual labeling is required. As for the recently popular deep learning solutions to this problem, they are based on training convolutional neural networks; classic examples are Huchuan Lu's "Saliency Detection with Recurrent Fully Convolutional Networks" (European Conference on Computer Vision, Springer, Cham, 2016: 825-841), which reworks the fully connected layers at the back end of a classic CNN model, and Mingming Cheng's "Deeply Supervised Salient Object Detection with Short Connections" (IEEE TPAMI, 2018). However, detection built on deep networks suffers from poor scene transfer, strong specialization to particular targets, and large demands on time and data, so it is difficult to cover different application scenarios.
Summary of the Invention
In view of the above defects and deficiencies of the prior art, the technical problem to be solved by the present invention is to provide a general-purpose target detection algorithm that can detect salient targets in image data collected in various scenes and, combined with image processing methods, perform processing and segmentation operations to finally obtain an accurate target region.
The technical solution of the present invention is realized as follows:
An image processing method, applicable to an image processing device, the device including an image acquisition module, an image preprocessing module, an image processing module, and an image segmentation module; the method includes the following steps:
S1, image acquisition: images are captured by the image acquisition module;
S2, image preprocessing: the captured images are denoised, contrast-enhanced, and processed with image morphological operations, completing the early-stage processing of the image data and providing high-quality image data for the subsequent main processing;
S3, image processing, including salient-object detection as well as smoothing and morphological operations;
S4, image segmentation: the final image from S3 is segmented by adaptive thresholding to obtain the region of interest.
Preferably, the salient-object detection in step S3 mainly includes the selection of seed points, the determination of the diffusion order, and the fusion of the images of different channels.
Further, the steps of the salient-object detection are:
A1, perform BMBD processing;
A2, based on the result of A1, perform FMBD processing;
A3, fuse the two saliency detection maps obtained in A1 and A2 to obtain the BF_MBDS saliency map.
Preferably, the steps of the BMBD processing are:
B1, preprocess the input image, including denoising and color space conversion;
B2, split the processed image into separate channels;
B3, initial seed point setting: the pixels along the image border are set as seed points, and diffusion is carried out from these seed points;
B4, diffuse from the border toward the center over the four-neighborhood, updating ring by ring;
B5, pixels updated in the previous round of diffusion are updated again in the next round, until all pixels have been updated and the traversal is complete;
B6, obtain the diffusion images BMBD_L, BMBD_a, and BMBD_b of the three channels;
B7, fuse the images BMBD_L, BMBD_a, and BMBD_b obtained in B6 by taking the maximum value, yielding the BMBD_Lab map;
B8, further process the BMBD_Lab map obtained in B7, mainly by smoothing and contrast enhancement, to obtain the final BMBD_map detection image.
Preferably, the steps of the FMBD processing are:
C1, preprocess the input image, including denoising and color space conversion;
C2, split the processed image into separate channels;
C3, initial seed point setting: based on the BMBD detection map described above, select the pixels with the highest confidence, i.e. the highest saliency values, use these pixels as the initial seed points, and carry out diffusion;
C4, diffuse outward from the seed points over the four-neighborhood, updating ring by ring;
C5, pixels updated in the previous round of diffusion are updated again in the next round, until all pixels have been updated and the traversal is complete;
C6, obtain the diffusion images FMBD_L, FMBD_a, and FMBD_b of the three channels;
C7, fuse the images FMBD_L, FMBD_a, and FMBD_b obtained in C6 by taking the maximum value, yielding the FMBD_Lab map;
C8, further process the FMBD_Lab map obtained in C7, mainly by smoothing and contrast enhancement, to obtain the final FMBD_map detection image.
An image processing system, comprising an image acquisition device, an image transmission module, an image processing device, an image display module, and an image storage module:
The image acquisition device collects data in the scene in which the user needs detection and segmentation; according to the user's requirements, it can be configured for single-frame image capture or for video capture of different video sequences.
The image transmission module is used to transmit the collected data.
The image processing device processes the images sent by the image transmission module using the image processing method of claim 1.
The image storage module is used to store the image data sent by the image transmission module as well as the intermediate data produced while the image processing device is running.
The image display module displays the processing results in real time for the user to observe.
Preferably, the processing performed by the image processing device mainly includes the following steps:
D1, preprocess the transmitted data to provide high-quality data for the subsequent detection algorithm;
D2, process the images sent by the transmission module with the detection algorithm of claim 1 to obtain a detection image;
D3, segment the detection image obtained in D2 to obtain the final segmentation map.
Preferably, the image transmission module supports both wired and wireless data transmission, meeting the needs of different users under different application conditions.
The beneficial effects of the present invention are:
1. The image processing method and system of the present invention can display detection and segmentation results simultaneously, provide users with real-time detection and segmentation feedback, improve the user experience, and improve operating efficiency.
2. The image processing method and system of the present invention are applicable to different application scenarios and can perform target detection and segmentation in multiple scenes.
3. The system adopts remote wireless transmission, which greatly increases spatial freedom.
Brief Description of the Drawings
Figure 1 is a flow chart of the BMBD algorithm.
Figure 2 is a flow chart of the FMBD algorithm.
Figure 3 is an example of the BMBD diffusion scheme.
Figure 4 is an example of the FMBD diffusion scheme.
Figure 5 is a block diagram of the overall BF_MBD flow.
Figure 6 is a block diagram of the system of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
As shown in Figures 1 and 2, an image processing method is applicable to an image processing device that includes an image acquisition module, an image preprocessing module, an image processing module, and an image segmentation module. The method includes the following steps:
S1, image acquisition: images are captured by the image acquisition module;
S2, image preprocessing: the captured images are denoised, contrast-enhanced, and processed with image morphological operations, completing the early-stage processing of the image data and providing high-quality image data for the subsequent main processing;
S3, image processing, including salient-object detection as well as smoothing and morphological operations;
S4, image segmentation: the final image from S3 is segmented by adaptive thresholding to obtain the region of interest.
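As an illustration only (the patent does not prescribe a particular implementation), the following Python/OpenCV sketch chains steps S2 to S4 for one captured frame. The denoising, contrast-enhancement, morphology, and thresholding choices, as well as the placeholder saliency_detect function that stands for the BMBD/FMBD chain developed further below, are assumptions of this sketch.

```python
import cv2
import numpy as np

def preprocess(bgr):
    """S2: denoise, enhance contrast, and apply a morphological opening (illustrative choices)."""
    den = cv2.fastNlMeansDenoisingColored(bgr, None, 10, 10, 7, 21)
    lab = cv2.cvtColor(den, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)  # contrast enhancement on L
    enhanced = cv2.cvtColor(cv2.merge([l, a, b]), cv2.COLOR_LAB2BGR)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(enhanced, cv2.MORPH_OPEN, kernel)

def segment(saliency_map):
    """S4: adaptive (Otsu) thresholding of the saliency map to extract the region of interest."""
    sal8 = cv2.normalize(saliency_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, roi_mask = cv2.threshold(sal8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return roi_mask

def process_frame(bgr, saliency_detect):
    """S1 delivers the frame 'bgr'; S2 to S4 are applied below."""
    clean = preprocess(bgr)                  # S2: preprocessing
    sal = saliency_detect(clean)             # S3: salient-object detection (BMBD + FMBD fusion)
    sal = cv2.GaussianBlur(sal, (5, 5), 0)   # S3: smoothing of the saliency map
    return segment(sal)                      # S4: adaptive threshold segmentation
```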
As shown in Figure 5, the salient-object detection in S3 mainly includes the selection of seed points, the determination of the diffusion order, and the fusion of the images of different channels. The steps of the salient-object detection are:
A1, perform BMBD processing;
A2, based on the result of A1, perform FMBD processing;
A3, fuse the two saliency detection maps obtained in A1 and A2 to obtain the BF_MBDS saliency map.
As shown in Figure 1, the steps of the BMBD processing are:
B1, preprocess the input image, including denoising and color space conversion;
B2, split the processed image into separate channels;
B3, initial seed point setting: the pixels along the image border are set as seed points, and diffusion is carried out from these seed points;
B4, diffuse from the border toward the center over the four-neighborhood, updating ring by ring;
B5, pixels updated in the previous round of diffusion are updated again in the next round, until all pixels have been updated and the traversal is complete;
B6, obtain the diffusion images BMBD_L, BMBD_a, and BMBD_b of the three channels;
B7, fuse the images BMBD_L, BMBD_a, and BMBD_b obtained in B6 by taking the maximum value, yielding the BMBD_Lab map;
B8, further process the BMBD_Lab map obtained in B7, mainly by smoothing and contrast enhancement, to obtain the final BMBD_map detection image.
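A hedged sketch of B1 to B8 follows. The ring-by-ring diffusion itself is passed in as mbd_ring_diffuse (a version is sketched after the Figure 3 discussion below), and the Gaussian denoising, smoothing, and normalization are illustrative choices rather than the patent's prescribed operations.

```python
import cv2
import numpy as np

def bmbd_map(bgr, mbd_ring_diffuse):
    # B1: denoise and convert to the Lab color space
    img = cv2.GaussianBlur(bgr, (3, 3), 0)
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    # B2: channel separation
    channels = cv2.split(lab)
    # B3: the pixels along the image border are the seed points
    h, w = lab.shape[:2]
    border = np.zeros((h, w), dtype=bool)
    border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
    # B4-B6: ring-by-ring four-neighborhood diffusion of each channel from the border seeds
    diffused = [mbd_ring_diffuse(c.astype(np.float32), border) for c in channels]
    # B7: pixel-wise maximum over the three channel maps gives BMBD_Lab
    bmbd_lab = np.maximum.reduce(diffused)
    # B8: smoothing and contrast stretching give the final BMBD_map
    bmbd_lab = cv2.GaussianBlur(bmbd_lab, (5, 5), 0)
    return cv2.normalize(bmbd_lab, None, 0.0, 1.0, cv2.NORM_MINMAX)
```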
As shown in Figure 2, the steps of the FMBD processing are:
C1, preprocess the input image, including denoising and color space conversion;
C2, split the processed image into separate channels;
C3, initial seed point setting: based on the BMBD detection map described above, select the pixels with the highest confidence, i.e. the highest saliency values, use these pixels as the initial seed points, and carry out diffusion;
C4, diffuse outward from the seed points over the four-neighborhood, updating ring by ring;
C5, pixels updated in the previous round of diffusion are updated again in the next round, until all pixels have been updated and the traversal is complete;
C6, obtain the diffusion images FMBD_L, FMBD_a, and FMBD_b of the three channels;
C7, fuse the images FMBD_L, FMBD_a, and FMBD_b obtained in C6 by taking the maximum value, yielding the FMBD_Lab map;
C8, further process the FMBD_Lab map obtained in C7, mainly by smoothing and contrast enhancement, to obtain the final FMBD_map detection image.
As shown in Figure 6, an image processing system includes an image acquisition device 601, an image transmission module 602, an image processing module 603, an image storage module 604, and an image display module 605, wherein:
The image acquisition device 601 collects data in the scene in which the user needs detection and segmentation; according to the user's requirements, it can be configured for single-frame image capture or for video capture of different video sequences.
The image transmission module 602 is used to transmit the collected data.
The image processing module 603 processes the images sent by the image transmission module using the image processing method of claim 1.
The image storage module 604 is used to store the image data sent by the image transmission module as well as the intermediate data produced while the image processing device is running.
The image display module 605 displays the processing results in real time for the user to observe.
Further, the processing performed by the image processing module mainly includes the following steps:
D1, preprocess the transmitted data to provide high-quality data for the subsequent detection algorithm;
D2, process the images sent by the transmission module with the detection algorithm of claim 1 to obtain a detection image;
D3, segment the detection image obtained in D2 to obtain the final segmentation map.
Further, the image transmission module supports both wired and wireless data transmission, meeting the needs of different users under different application conditions.
As shown in Figure 3, the design idea of the BMBD algorithm in this embodiment is as follows:
BMBD stands for Background Minimum Barrier Distance, a detection method based on the minimum barrier distance computed from background information. Because the method accounts for the influence of the background information on detection, the pixels along the image border are chosen as the seed points; the update then proceeds inward from the border seeds ring by ring, using the four-neighborhood update scheme.
The points marked 0 in the figure are the background border pixels selected as seed points. Following the idea of visual diffusion in visual attention, a four-neighborhood diffusion step then produces the points marked 1: these points have been diffused once, and their initial BMBD values receive their first update. The next update proceeds in the same way, except that when the second diffusion step updates the region marked 2, the region marked 1 is updated again at the same time. This matches the picture of repeated flooding in nature: as water from the surrounding mountains converges toward the central basin, every path from the mountains to the basin is soaked again and again during the convergence, just as values diffuse repeatedly in our algorithm. This diffusion scheme is proposed by us and is an important innovation of our visual saliency detection algorithm.
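To make the ring scheme concrete, here is a plain NumPy sketch (not the patent's reference implementation): a breadth-first pass assigns every pixel a ring index measured from the seed set, and round k then relaxes every pixel whose ring index is at most k from its four neighbors, so pixels touched in earlier rounds are indeed updated again. The barrier cost uses the running path maximum U and minimum L defined at the end of this description; the nested loops are written for clarity, not speed.

```python
import numpy as np

def mbd_ring_diffuse(I, seed_mask):
    """Ring-by-ring four-neighborhood minimum barrier diffusion (illustrative sketch).

    I         : single-channel float image.
    seed_mask : boolean array, True at the seed pixels (the image border for BMBD).
    Returns the distance map D; small values mean 'reachable from the seeds at low barrier cost'.
    """
    h, w = I.shape
    D = np.where(seed_mask, 0.0, np.inf)   # minimum barrier distance found so far
    U = I.copy()                           # running maximum along the best path found
    L = I.copy()                           # running minimum along the best path found

    # assign each pixel the index of the ring (BFS layer) it belongs to, counted from the seeds
    ring = np.full((h, w), -1, dtype=np.int32)
    ring[seed_mask] = 0
    frontier = list(zip(*np.nonzero(seed_mask)))
    n_rings = 0
    while frontier:
        nxt = []
        for y, x in frontier:
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                qy, qx = y + dy, x + dx
                if 0 <= qy < h and 0 <= qx < w and ring[qy, qx] < 0:
                    ring[qy, qx] = n_rings + 1
                    nxt.append((qy, qx))
        frontier = nxt
        n_rings += 1

    def relax(y, x, sy, sx):
        """Try to improve pixel (y, x) through its already-reached neighbor (sy, sx)."""
        if not np.isfinite(D[sy, sx]):
            return
        beta = max(U[sy, sx], I[y, x]) - min(L[sy, sx], I[y, x])
        if beta < D[y, x]:
            D[y, x] = beta
            U[y, x] = max(U[sy, sx], I[y, x])
            L[y, x] = min(L[sy, sx], I[y, x])

    # round k updates ring k and, as described above, re-updates all previously visited rings
    for k in range(1, n_rings + 1):
        for y in range(h):
            for x in range(w):
                if 0 < ring[y, x] <= k:
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        sy, sx = y + dy, x + dx
                        if 0 <= sy < h and 0 <= sx < w:
                            relax(y, x, sy, sx)
    return D
```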
As shown in Figure 4, the design idea of the FMBD algorithm in this embodiment is as follows:
FMBD stands for Foreground Minimum Barrier Distance, a detection method based on the minimum barrier distance computed from foreground (target) information. Because the method accounts for the influence of the foreground information on detection, the pixels of part of the foreground target in the image are chosen as the seed points; the update then proceeds outward from the selected seeds ring by ring, using the four-neighborhood update scheme.
The points marked 0 in the figure are the initial seed points. The idea of diffusing outward from the center borrows from the image of a spring overflowing, and it handles well the case in which the detected target touches the image boundary. Selecting the central seed points is the critical issue: the seeds are taken from part of the salient region provided by the background-diffusion BMBD algorithm. This partial region is obtained by applying morphological operations and other image processing to the BMBD map, leaving a small set of pixels that form the core region of high saliency values (which indirectly indicates that this region is, with the greatest probability, foreground). This region serves as the foreground information from which the seed points are set, and diffusion toward the surroundings then proceeds until it is complete and sufficient, yielding the final saliency diffusion detection map.
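One way to realize this seed selection (step C3) is sketched below; the percentile and kernel size are illustrative assumptions, not values taken from the patent. The brightest pixels of the BMBD map are kept, cleaned with an opening, and eroded so that only the confident core of the salient region remains as the FMBD seed mask.

```python
import cv2
import numpy as np

def fmbd_seeds(bmbd_map, top_percent=5.0):
    """Pick the core high-saliency region of the BMBD map as the FMBD seed points."""
    sal8 = cv2.normalize(bmbd_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    thresh = np.percentile(sal8, 100.0 - top_percent)       # keep roughly the brightest 5 % of pixels
    core = (sal8 >= thresh).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    core = cv2.morphologyEx(core, cv2.MORPH_OPEN, kernel)   # remove isolated speckles
    core = cv2.erode(core, kernel, iterations=1)            # shrink to the confident center
    return core > 0                                         # boolean seed mask for the diffusion
```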
As shown in Figure 5, the design idea and concrete implementation of fusing the BMBD and FMBD algorithms to generate the final BF_MBDS image are as follows:
The fusion used in this part builds on the extreme-value idea: within a single algorithm, the saliency values of the multiple channels are first combined into the saliency value of each point, giving BMBD_value and FMBD_value respectively, and the saliency maps of the two algorithms are then fused with the simple rule BF_MBDS_value = BMBD_value + FMBD_value. At this point a visual saliency detection map of the image has been obtained: a grayscale detection image in which the target region has high saliency values and is clearly distinguishable. An appropriate segmentation algorithm can then be applied to obtain a standard binary image.
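A minimal sketch of the fusion and the final binarization, assuming both detection maps are first normalized to a common range and that Otsu thresholding stands in for the "appropriate segmentation algorithm" mentioned above:

```python
import cv2
import numpy as np

def bf_mbds_fuse(bmbd_map, fmbd_map):
    """Pointwise fusion BF_MBDS_value = BMBD_value + FMBD_value, followed by binarization."""
    b = cv2.normalize(bmbd_map.astype(np.float32), None, 0.0, 1.0, cv2.NORM_MINMAX)
    f = cv2.normalize(fmbd_map.astype(np.float32), None, 0.0, 1.0, cv2.NORM_MINMAX)
    fused = b + f                                                     # grayscale saliency map
    fused8 = cv2.normalize(fused, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(fused8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return fused8, binary                                             # detection map and segmented mask
```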
The concrete implementation (pseudocode) of the algorithm is:
For each pixel x visited in each ring of the update, its value is defined in terms of the barrier cost β_I(P_y(x)) of the path reaching x through a neighbor y, where β_I(P_y(x)) is defined as:
β_I(P_y(x)) = max{U(y), I(x)} - min{L(y), I(x)}
Here I(x) denotes the image value at pixel x, and U(y) and L(y) are the highest and lowest values encountered along the path to y.
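For reference, and assuming the standard minimum barrier distance relaxation used in the Zhang et al. paper listed among the non-patent citations below (the patent's own update formula is given only in a figure), the update applied when a pixel x is relaxed through a neighbor y can be written as:

```latex
\begin{aligned}
&\textbf{if } \beta_I\!\left(P_y(x)\right) < D(x) \textbf{ then}\\
&\qquad D(x) \leftarrow \beta_I\!\left(P_y(x)\right),\\
&\qquad U(x) \leftarrow \max\{U(y),\, I(x)\},\\
&\qquad L(x) \leftarrow \min\{L(y),\, I(x)\},
\end{aligned}
```

where D(x) is the current minimum barrier distance of x, initialized to 0 at the seed points and to infinity elsewhere.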
It should be noted that the above embodiments and the concrete implementation (pseudocode) of the algorithm do not limit the present invention, and the present invention is not restricted to the above examples. Changes, modifications, additions, or substitutions made by those of ordinary skill in the art within the essential scope of the present invention all fall within the protection scope of the present invention.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810372478.5A CN108596893B (en) | 2018-04-24 | 2018-04-24 | An image processing method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810372478.5A CN108596893B (en) | 2018-04-24 | 2018-04-24 | An image processing method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108596893A (en) | 2018-09-28 |
CN108596893B (en) | 2022-04-08 |
Family
ID=63614931
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810372478.5A Active CN108596893B (en) | 2018-04-24 | 2018-04-24 | An image processing method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108596893B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114359288B (en) * | 2022-03-22 | 2022-06-07 | 珠海市人民医院 | Medical image cerebral aneurysm detection and positioning method based on artificial intelligence |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9355635B2 (en) * | 2010-11-15 | 2016-05-31 | Futurewei Technologies, Inc. | Method and system for video summarization |
2018
- 2018-04-24 CN CN201810372478.5A patent/CN108596893B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102945378A (en) * | 2012-10-23 | 2013-02-27 | 西北工业大学 | Method for detecting potential target regions of remote sensing image on basis of monitoring method |
CN105761266A (en) * | 2016-02-26 | 2016-07-13 | 民政部国家减灾中心 | Method of extracting rectangular building from remote sensing image |
CN106778903A (en) * | 2017-01-09 | 2017-05-31 | 深圳市美好幸福生活安全系统有限公司 | Conspicuousness detection method based on Sugeno fuzzy integrals |
CN107123150A (en) * | 2017-03-25 | 2017-09-01 | 复旦大学 | The method of global color Contrast Detection and segmentation notable figure |
CN107357834A (en) * | 2017-06-22 | 2017-11-17 | 浙江工业大学 | Image retrieval method based on visual saliency fusion |
CN107330861A (en) * | 2017-07-03 | 2017-11-07 | 清华大学 | Image significance object detection method based on diffusion length high confidence level information |
Non-Patent Citations (5)
Title |
---|
Minimum Barrier Salient Object Detection at 80 FPS; Jianming Zhang et al.; 《2015 IEEE International Conference on Computer Vision》; 20160218; 1404-1412 *
Study of visual saliency detection via nonlocal anisotropic diffusion equation; Xiujun Zhang et al.; 《Pattern Recognition》; 20141022; 1315-1327 *
Visual saliency detection: From space to frequency; Dongyue Chen et al.; 《Signal Processing: Image Communication》; 20160312; 57-68 *
Research on key technologies of scene recognition based on biological visual mechanisms; 陈硕; 《中国优秀博硕士学位论文全文数据库(博士) 信息科技辑》; 20150715 (No. 07); I138-131 *
Saliency detection via hierarchical graph fusion; 王慧玲 et al.; 《计算机科学与探索》; 20160908; 1752-1762 *
Also Published As
Publication number | Publication date |
---|---|
CN108596893A (en) | 2018-09-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |