CN108596893B - Image processing method and system - Google Patents

Image processing method and system

Info

Publication number
CN108596893B
CN108596893B
Authority
CN
China
Prior art keywords
image
processing
detection
bmbd
fmbd
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810372478.5A
Other languages
Chinese (zh)
Other versions
CN108596893A (en)
Inventor
付爱国
陈东岳
贾同
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China
Priority to CN201810372478.5A
Publication of CN108596893A
Application granted
Publication of CN108596893B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20092 Interactive image processing based on input by user
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image processing method and system. The method is suitable for an image processing device comprising an image acquisition module, an image preprocessing module, an image processing module and an image segmentation module, and comprises the following steps: S1, image acquisition, in which images are acquired through the image acquisition module; S2, image preprocessing, in which the acquired image is denoised, contrast-enhanced and subjected to image morphology operations, completing the preprocessing of the image data and providing high-quality image data for the subsequent main processing; S3, image processing, including image saliency target detection as well as smoothing and morphological operations; and S4, image segmentation, in which the final image from S3 is segmented by adaptive threshold segmentation to obtain the region of interest. The method and system can display detection and segmentation results visually at the same time, provide a real-time detection and segmentation effect for users, improve the user experience and increase operating efficiency.

Description

Image processing method and system
Technical Field
The present invention relates to image processing methods and systems, and in particular to a method and system for rapidly detecting salient objects in still images and video sequences.
Background
Image processing devices and methods are widely applied in many fields such as production, management, military and medical treatment. The application of image processing technology, especially its detection and segmentation techniques, to target detection and segmentation allows people to acquire useful information more conveniently in many respects and thus to make more accurate judgments about a situation. Its wide application makes people's lives easier and saves cost and resources.
The target detection and segmentation algorithm provided by the invention can rapidly detect and segment the salient target in an acquired input image and quickly obtain the target region of interest. In many scenes, such as intelligent surveillance video, an observer needs to quickly extract the region of a target person or of a key landmark, and the algorithm can extract the relevant region rapidly. Similarly, when extracting effective information from big data, how to obtain the most useful information quickly is a question data experts constantly consider; a model based on the visual attention mechanism can retrieve salient information from a database according to the principle of visual saliency and thus extract effective information quickly. The method therefore has great application value in both respects, and a general salient target detection algorithm is also of great significance for applications such as pathological detection, data compression and automatic focusing during photographing.
The prior art of this kind has the following defects:
first, when the image texture is complex and the contrast is low, existing detection algorithms that define saliency through image contrast cannot give good regional information about the target, so accurate target information cannot be obtained.
Second, the minimum barrier distance (MBD) model, currently acknowledged to give the best detection results, has difficulty producing good detections when the object to be detected touches the image boundary, owing to the design of the algorithm itself.
Third, currently popular deep learning models rarely transfer across scenes: when the scene changes, a new model has to be trained on a different data set because the targets differ, and deep-learning-based models also require large amounts of data and time.
The above drawbacks are points that prior-art saliency detection models find difficult to address simultaneously and problems that are being actively explored and solved; if they cannot be resolved to the greatest possible extent, it is difficult to provide accurate and effective target information to researchers or consumers.
Among published papers and deployed applications, some directly use image color contrast and define the saliency (human visual distinguishability) of pixels or regions from inter-pixel or inter-region color differences, as in the HC and RC algorithms represented by Mingming Cheng; however, the low-contrast problem is still not solved well, and the effect on detection and segmentation is limited. Many researchers consider the overall topological information of the image and define saliency using the correlation between pixels and regions, which greatly improves the results; for example, the detection algorithm of Jianming Zhang based on scanning traversal of the minimum barrier distance achieves good results on the major data sets, but it still suffers from low contrast and from detection targets touching the boundary, and its performance drops sharply in particular when the detection target is in contact with the boundary. Other researchers studying interactive segmentation algorithms propose adding manual labels (foreground and background information) to images in the form of Trimaps and Strokes and then performing pixel-based target detection and segmentation according to the regions of interest provided by the user, but manual labeling is required. Recently popular deep learning addresses this problem with training based on convolutional neural networks, for example the classic "Saliency Detection with Recurrent Fully Convolutional Networks" from Huchuan Lu's group (European Conference on Computer Vision, Springer, Cham, 2016: 825-) and "Deeply Supervised Salient Object Detection with Short Connections" by Mingming Cheng (IEEE TPAMI, 2018). However, detection based on deep networks can suffer from insufficient scene transfer, strong specificity to particular targets and large demands on time and data resources, and it is difficult to satisfy different application scenarios.
Disclosure of Invention
In view of the above-mentioned drawbacks and deficiencies of the prior art, the present invention provides a general target detection algorithm that can detect salient targets in image data acquired in various scenes and, combined with image processing methods, performs processing and segmentation operations to obtain an accurate target region.
The technical scheme of the invention is realized as follows:
an image processing method is suitable for an image processing device, and the device comprises an image acquisition module, an image preprocessing module, an image processing module and an image segmentation module; the method comprises the following steps:
s1, acquiring images, and acquiring images through an image acquisition module;
s2, image preprocessing, namely, carrying out denoising, contrast enhancement and image morphology operation on the acquired image, completing the preprocessing of the image data and providing high-quality image data for the subsequent main processing process;
s3, image processing, including image saliency target detection processing, and smoothing and morphological operation processing;
and S4, image segmentation, namely performing target segmentation on the final image from S3 through adaptive threshold segmentation to obtain a region of interest.
Preferably, the detecting of the image saliency target in the step S3 mainly includes selecting seed points, determining diffusion order, and fusing images of different channels;
further, the step of detecting the image salient object comprises,
a1, performing BMBD algorithm processing;
a2, based on the processing result of A1, performing FMBD algorithm processing;
and A3, fusing the two saliency detection maps obtained in A1 and A2 to obtain a BF_MBDS map for saliency detection.
Preferably, the BMBD algorithm processing includes the steps of,
b1, preprocessing the input image, including denoising and color space conversion;
b2, performing channel separation on the processed image;
b3, setting initial seed points: the pixels around the periphery of the image are set as seed points, and diffusion is performed from these seed points;
b4, diffusing from the periphery toward the center in a four-neighborhood manner, performing one ring of diffusion updates at a time;
b5, in the next round of diffusion, pixels updated in the previous round are updated again in the same way, until all pixels have been traversed and updated;
b6, respectively obtaining the diffusion images BMBD_L, BMBD_a and BMBD_b of the three channels;
b7, fusing the images BMBD_L, BMBD_a and BMBD_b obtained in B6 by taking the maximum value to obtain a BMBD_Lab image;
and B8, further processing the BMBD_Lab image obtained in B7, mainly by smoothing and contrast enhancement, to obtain a final BMBD_map detection image.
Preferably, the FMBD algorithm processing comprises the steps of,
c1, preprocessing the input image, including denoising and color space conversion;
c2, channel separation is carried out on the processed image;
c3, setting initial seed points: based on the BMBD algorithm detection map, the pixels with the highest confidence, i.e. the highest saliency values, are selected, and diffusion is performed with these pixels as the initialized seed points;
c4, diffusing outward from the seed region toward the periphery in a four-neighborhood manner, performing one ring of diffusion updates at a time;
c5, in the next round of diffusion, pixels updated in the previous round are updated again in the same way, until all pixels have been traversed and updated;
c6, obtaining the diffusion images FMBD_L, FMBD_a and FMBD_b of the three channels respectively;
c7, fusing the images FMBD_L, FMBD_a and FMBD_b obtained in C6 by taking the maximum value to obtain an FMBD_Lab image;
and C8, further processing the FMBD_Lab image obtained in C7, mainly by smoothing and contrast enhancement, to obtain a final FMBD_map detection image.
An image processing system comprises an image acquisition device, an image transmission module, an image processing device, an image display module and an image storage module:
the image acquisition device acquires data in the scene that the user needs to detect and segment, and can be configured, according to user requirements, for single-frame image acquisition or for video acquisition of different video sequences;
the image transmission module is used for transmitting the acquired data;
the image processing device processes the image transmitted by the image transmission module by adopting the image processing method of claim 1;
the image storage module is used for storing the image data transmitted by the image transmission module and the intermediate data in the processing process of the image processing device;
and the image display module displays the processed result in real time for a user to observe.
Preferably, the processing procedure of the image processing device mainly comprises the following steps,
d1, preprocessing the transmission data to provide high-quality data for a later detection algorithm;
d2, processing the image transmitted by the transmission module according to the detection algorithm of claim 1 to obtain a detection image;
d3, performing segmentation processing on the detection image obtained in D2 to obtain a final segmentation map.
Preferably, the image transmission module adopts two data transmission modes, namely wired and wireless, so that the requirements of different users under different application conditions are met.
The invention has the beneficial effects that:
1. the image processing method and the image processing system can simultaneously carry out visual display of detection and segmentation, can provide a real-time detection segmentation effect for a user, improve user experience and improve operation efficiency.
2. The image processing method and the image processing system can be suitable for different application scenes and can be used for carrying out target detection and segmentation processing under multiple scenes.
3. The system adopts a remote wireless transmission mode, so that the freedom of space is greatly increased.
Drawings
FIG. 1 is a flow chart of the BMBD algorithm.
FIG. 2 is a flow chart of the FMBD algorithm.
FIG. 3 is an example of a BMBD algorithm diffusion map.
FIG. 4 is an example of an FMBD algorithm diffusion map.
FIG. 5 is an overall flow diagram of BF_MBD.
FIG. 6 is a block diagram of the system of the present invention.
Detailed Description
The invention will be further described in detail with reference to the following figures and specific embodiments:
as shown in fig. 1 and 2, an image processing method is applied to an image processing apparatus, the apparatus includes an image acquisition module, an image preprocessing module, an image processing module, and an image segmentation module, and the method includes the following steps:
s1, acquiring images, and acquiring images through an image acquisition module;
s2, image preprocessing, namely, carrying out denoising, contrast enhancement and image morphology operation on the acquired image, completing the preprocessing of the image data and providing high-quality image data for the subsequent main processing process;
s3, image processing, including image saliency target detection processing, and smoothing and morphological operation processing;
and S4, image segmentation, namely performing target segmentation on the final image from S3 through adaptive threshold segmentation to obtain a region of interest; an illustrative end-to-end sketch of steps S1-S4 is given after this list.
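By way of illustration only, the following Python/OpenCV sketch shows one possible realization of steps S1-S4. The function names, filter choices and parameters (non-local-means denoising, CLAHE contrast enhancement, the 3x3 and 5x5 morphological kernels, Otsu thresholding) are assumptions of this sketch rather than values prescribed by the patent, and the saliency map consumed in S3/S4 is assumed to come from the BF_MBD detection described below.

# Illustrative sketch only (assumed parameters); requires OpenCV (cv2) and NumPy.
import cv2
import numpy as np

def preprocess(bgr):
    """S2: denoise, enhance contrast, light morphological clean-up."""
    denoised = cv2.fastNlMeansDenoisingColored(bgr, None, 10, 10, 7, 21)
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    enhanced = cv2.cvtColor(cv2.merge([l, a, b]), cv2.COLOR_LAB2BGR)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(enhanced, cv2.MORPH_OPEN, kernel)

def segment(saliency_map):
    """S3 post-processing and S4: smoothing, morphology, adaptive (Otsu) threshold."""
    sal = cv2.normalize(saliency_map.astype(np.float32), None, 0, 255, cv2.NORM_MINMAX)
    sal = cv2.GaussianBlur(sal.astype(np.uint8), (5, 5), 0)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    sal = cv2.morphologyEx(sal, cv2.MORPH_CLOSE, kernel)
    _, roi_mask = cv2.threshold(sal, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return roi_mask  # binary region-of-interest mask

# Usage (detect_saliency is a hypothetical name for the BF_MBD detection sketched below):
#   frame = cv2.imread("input.png")   # S1: acquired image
#   clean = preprocess(frame)         # S2
#   sal = detect_saliency(clean)      # S3: saliency target detection
#   mask = segment(sal)               # S3 post-processing and S4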
As shown in fig. 5, further, the image saliency target detection in S3 mainly includes selection of seed points, determination of diffusion order, and image fusion of different channels. Wherein the step of detecting the image salient object comprises the following steps,
a1, performing BMBD algorithm processing;
a2, based on the processing result of A1, performing FMBD algorithm processing again;
and A3, fusing the two saliency detection maps obtained in A1 and A2 to obtain a BF_MBDS map for saliency detection.
As shown in fig. 1, the BMBD algorithm processing comprises the following steps; an illustrative code sketch of this procedure is given after the list.
b1, preprocessing the input image, including denoising and color space conversion;
b2, performing channel separation on the processed image;
b3, setting initial seed points: the pixels around the periphery of the image are set as seed points, and diffusion is performed from these seed points;
b4, diffusing from the periphery toward the center in a four-neighborhood manner, performing one ring of diffusion updates at a time;
b5, in the next round of diffusion, pixels updated in the previous round are updated again in the same way, until all pixels have been traversed and updated;
b6, respectively obtaining the diffusion images BMBD_L, BMBD_a and BMBD_b of the three channels;
b7, fusing the images BMBD_L, BMBD_a and BMBD_b obtained in B6 by taking the maximum value to obtain a BMBD_Lab image;
and B8, further processing the BMBD_Lab image obtained in B7, mainly by smoothing and contrast enhancement, to obtain a final BMBD_map detection image.
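For readers who want to follow steps B1-B8 concretely, here is a minimal Python/NumPy sketch of the BMBD stage. It is illustrative only and is not the patented implementation: the names (mbd_diffuse, bmbd, max_rounds) are assumptions of this sketch, and the ring-by-ring update of B4-B5 is approximated by repeated vectorized four-neighborhood relaxation sweeps that stop once the distance map no longer changes.

# Illustrative BMBD sketch (assumed names and parameters); deliberately unoptimized.
import cv2
import numpy as np

def mbd_diffuse(channel, seed_mask, max_rounds=None):
    """Barrier-distance diffusion of one channel from the given seed pixels.

    D(x) is the smallest (max-along-path minus min-along-path) over paths from
    the seeds to x, approximated by repeated four-neighborhood relaxation."""
    I = channel.astype(np.float32)
    D = np.full(I.shape, np.inf, dtype=np.float32)   # barrier distance
    U = I.copy()                                     # running max along the best path
    L = I.copy()                                     # running min along the best path
    D[seed_mask] = 0.0
    if max_rounds is None:
        max_rounds = I.shape[0] + I.shape[1]         # enough rounds to reach every pixel

    neighbors = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # four-neighborhood offsets
    for _ in range(max_rounds):
        changed = False
        for dy, dx in neighbors:
            Dy = np.roll(D, (dy, dx), axis=(0, 1))   # neighbor values aligned with x
            Uy = np.roll(U, (dy, dx), axis=(0, 1))
            Ly = np.roll(L, (dy, dx), axis=(0, 1))
            cand_U = np.maximum(Uy, I)
            cand_L = np.minimum(Ly, I)
            cand = cand_U - cand_L                   # beta_I(P_y(x))
            better = np.isfinite(Dy) & (cand < D)    # relax only from reached neighbors
            if better.any():
                D[better] = cand[better]
                U[better] = cand_U[better]
                L[better] = cand_L[better]
                changed = True
        if not changed:                              # B5: stop once nothing updates
            break
    D[~np.isfinite(D)] = 0.0
    return D

def bmbd(bgr):
    """B1-B8: denoise + Lab conversion, border seeds, per-channel diffusion, max fusion."""
    lab = cv2.cvtColor(cv2.GaussianBlur(bgr, (3, 3), 0), cv2.COLOR_BGR2LAB)   # B1
    h, w = lab.shape[:2]
    seed = np.zeros((h, w), dtype=bool)                                       # B3: image border
    seed[0, :] = seed[-1, :] = seed[:, 0] = seed[:, -1] = True
    maps = [mbd_diffuse(lab[:, :, c], seed) for c in range(3)]                # B2, B4-B6
    fused = np.maximum.reduce(maps)                                           # B7: max fusion
    fused = cv2.GaussianBlur(fused, (5, 5), 0)                                # B8: smoothing
    return cv2.normalize(fused, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)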
As shown in fig. 2, the FMBD algorithm process includes the steps of,
c1, preprocessing the input image, including denoising and color space conversion;
c2, channel separation is carried out on the processed image;
c3, setting initial seed points: based on the BMBD algorithm detection map, the pixels with the highest confidence, i.e. the highest saliency values, are selected, and diffusion is performed with these pixels as the initialized seed points (an illustrative sketch of this seed selection is given after this list);
c4, diffusing outward from the seed region toward the periphery in a four-neighborhood manner, performing one ring of diffusion updates at a time;
c5, in the next round of diffusion, pixels updated in the previous round are updated again in the same way, until all pixels have been traversed and updated;
c6, obtaining the diffusion images FMBD_L, FMBD_a and FMBD_b of the three channels respectively;
c7, fusing the images FMBD_L, FMBD_a and FMBD_b obtained in C6 by taking the maximum value to obtain an FMBD_Lab image;
and C8, further processing the FMBD_Lab image obtained in C7, mainly by smoothing and contrast enhancement, to obtain a final FMBD_map detection image.
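Purely as an illustration of the seed selection in step C3, the sketch below thresholds the BMBD detection map at a high quantile and then erodes the result so that only a confident core of the salient region remains; the 0.95 quantile and the 5x5 kernel are arbitrary choices of this sketch, not values given in the patent.

# Illustrative C3 seed selection (assumed parameters); requires OpenCV and NumPy.
import cv2
import numpy as np

def select_fmbd_seeds(bmbd_map, quantile=0.95):
    """Keep the highest-saliency pixels of the BMBD map and shrink them
    morphologically so that only a confident central core is used as seeds."""
    thresh = np.quantile(bmbd_map, quantile)
    core = (bmbd_map >= thresh).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    core = cv2.erode(core, kernel, iterations=1)
    return core > 0    # boolean seed mask for the foreground diffusion

The FMBD diffusion of steps C4-C8 can then reuse the same barrier-distance update as in the BMBD sketch above, simply replacing the border seed mask with this foreground seed mask so that the diffusion spreads outward from the detected core, and fusing the three Lab channels by taking the maximum as in C7.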
As shown in fig. 6, an image processing system includes an image capturing apparatus 601, an image transmission module 602, an image processing module 603, an image storage module 604, and an image display module 605 (a schematic sketch of the data flow between these modules follows the list), wherein,
the image acquisition device 601 acquires data in the scene that the user needs to detect and segment, and can be configured, according to user requirements, for single-frame image acquisition or for video acquisition of different video sequences;
the image transmission module 602 is configured to transmit the acquired data
The image processing module 603 processes the image transmitted from the image transmission module by using the image processing method of claim 1;
the image storage module 604 is used for storing the image data transmitted by the image transmission module and the intermediate data in the processing process of the image processing device;
the image display module 605 displays the processed result in real time for the user to observe.
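To make the data flow of fig. 6 explicit, the following schematic sketch strings the five modules together; the class and method names are assumptions of this sketch (the acquisition, transmission and processing bodies are placeholders), not part of the patent.

# Schematic, illustrative data flow for modules 601-605 (assumed names, placeholder bodies).
from dataclasses import dataclass, field

import numpy as np

@dataclass
class ImagePipeline:
    storage: list = field(default_factory=list)            # 604: image storage module

    def acquire(self) -> np.ndarray:                        # 601: single frame or video frame
        return np.zeros((480, 640, 3), dtype=np.uint8)

    def transmit(self, frame: np.ndarray) -> np.ndarray:    # 602: wired or wireless link
        return frame

    def process(self, frame: np.ndarray) -> np.ndarray:     # 603: detection + segmentation
        return frame                                        # placeholder for the BF_MBD pipeline

    def run_once(self) -> np.ndarray:
        frame = self.transmit(self.acquire())
        self.storage.append(frame)                          # 604: keep raw / intermediate data
        result = self.process(frame)
        self.storage.append(result)
        return result                                       # 605: hand the result to the display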
Furthermore, the processing procedure of the image processing module mainly comprises the following steps,
d1, preprocessing the transmission data to provide high-quality data for a later detection algorithm;
d2, processing the image transmitted by the transmission module according to the detection algorithm of claim 1 to obtain a detection image;
and D3, performing segmentation processing on the detection image obtained in the step D2 to obtain a final segmentation map.
Furthermore, the image transmission module adopts two data transmission modes, wired and wireless, to meet the requirements of different users under different application conditions.
As shown in fig. 3, the design concept of the BMBD algorithm in the embodiment is as follows:
the BMBD is called Background Minimum Barrier Distance, namely a detection method based on the Minimum Barrier Distance of Background information. In the method, the influence of background information on detection is considered, so that pixel points around the image are set as seed points in the selection of the seed points; then, based on the updating of the seed points around to one circle inside, the specific updating mode adopts a four-adjacent domain updating mode.
The point marked as 0 in the figure is a pixel point which is selected by people and is used as a seed point in the background, then the diffusion operation of four neighborhoods is carried out by adopting the idea of visual diffusion in visual attention, the point marked as 1 which is diffused for the first time is obtained, the point is diffused for the first time, and the initial BMBD value of the point is updated for the first time. The next update is then performed in the same manner, but unlike the first time, when the region labeled 2 is updated by diffusion the second time, the region labeled 1 is also updated again at the same time. The idea of diffusion is in accordance with the idea of real water diffusion repetition, like that water on mountains around the mountains is converged to a central basin in practice, and in the process of continuous convergence, each path from the mountains to the basin is subjected to multiple soakings, like the idea of diffusion in an algorithm. The idea of diffusion is proposed and is an important innovation point in the visual saliency detection algorithm.
As shown in fig. 4, the design concept of the FMBD algorithm in the embodiment is as follows:
FMBD stands for Foreground Minimum Barrier Distance, that is, a detection method based on the minimum barrier distance computed from foreground (target) information. Since the method considers the influence of foreground information on detection, pixels belonging to part of the foreground target in the image are set as the seed points; the update then proceeds from the selected seeds outward, one ring at a time, using a four-neighborhood update scheme.
The points labeled 0 in the figure are the initialized seed points. The center-to-periphery diffusion borrows the idea of spring water overflowing, which handles well the case in which the detection target touches the image boundary. Selecting the central seed points is a fairly critical issue: a partial region of the salient target provided by the background-diffusion algorithm BMBD is taken and processed with morphological and other image processing operations, and the small set of pixels obtained in this way is the central region with high saliency values in the detection result (indirectly indicating that this region is foreground with the highest probability). This region is used as foreground information to set the seed points, diffusion toward the periphery is carried out and stopped once it is complete and sufficient, and the final saliency diffusion detection image is obtained.
As shown in fig. 5, the design concept and specific implementation of fusing the BMBD algorithm and the FMBD algorithm to generate the final BF_MBDS image are as follows:
the fusion adopted here is mainly based on the maximum-value idea: within a single algorithm, the multi-channel saliency values are combined into one saliency value per pixel, which is defined as the saliency value of that point, namely BMBD_value and FMBD_value respectively, and the saliency detection maps of the different algorithms are then fused. This yields the visual saliency detection map of the image, a gray-scale detection image in which the target region has high saliency values and is clearly distinguishable; a suitable segmentation algorithm can then be applied to segment the image and obtain a standard binary image.
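As a concrete but assumed reading of this fusion step, the snippet below adds the per-pixel BMBD_value and FMBD_value and rescales the result to a gray-scale BF_MBDS map; the patent text only fixes the per-channel maximum fusion within each algorithm, so the cross-algorithm combination rule used here is an assumption of this sketch.

# Illustrative fusion of the two detection maps (assumed combination rule).
import cv2
import numpy as np

def fuse_bf_mbds(bmbd_map, fmbd_map):
    """Combine the background-seeded and foreground-seeded saliency maps."""
    fused = bmbd_map.astype(np.float32) + fmbd_map.astype(np.float32)
    fused = cv2.normalize(fused, None, 0, 255, cv2.NORM_MINMAX)
    return fused.astype(np.uint8)   # gray-scale BF_MBDS detection map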
The specific implementation (pseudo code) of the algorithm is as follows:
(The pseudo code of the algorithm is given in Figures GDA0003488233530000081 and GDA0003488233530000091 of the original publication and is not reproduced here.)
For each pixel x in each update round, the update from an already-reached four-neighbor y is defined as follows (pseudo code in Figure GDA0003488233530000092 of the original publication):
D(x) ← min{D(x), βI(Py(x))}
and βI(Py(x)) is defined as follows:
βI(Py(x)) = max{U(y), I(x)} - min{L(y), I(x)}
where I(x) is the pixel value at x, Py(x) is the path obtained by appending x to the current path of y, and U(y) and L(y) are respectively the largest and smallest pixel values along the current path of y.
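To make the formula above concrete, here is a scalar version of the relaxation step for a single pixel x considered from one neighbor y; the variable names (d, u, l, i) mirror D, U, L and I and are chosen for this sketch only.

# Scalar form of the per-pixel update (illustrative only).
def relax(d_x, u_x, l_x, d_y, u_y, l_y, i_x):
    """Try to improve pixel x's barrier distance via neighbor y."""
    if d_y == float("inf"):              # neighbor y has not been reached from the seeds
        return d_x, u_x, l_x
    u_new = max(u_y, i_x)                # max{U(y), I(x)}
    l_new = min(l_y, i_x)                # min{L(y), I(x)}
    beta = u_new - l_new                 # betaI(Py(x))
    if beta < d_x:                       # a lower-barrier path to x has been found
        return beta, u_new, l_new
    return d_x, u_x, l_x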
it should be noted that the specific implementation (pseudo code) of the above embodiments and algorithms is not intended to limit the present invention, and the present invention is not limited to the above examples, and those skilled in the art should make changes, modifications, additions or substitutions within the spirit and scope of the present invention.

Claims (5)

1. An image processing method is applicable to an image processing device, the device comprises an image acquisition module, an image preprocessing module, an image processing module and an image segmentation module, and the method is characterized by comprising the following steps:
s1, acquiring images, and acquiring images through an image acquisition module;
s2, image preprocessing, namely, carrying out denoising, contrast enhancement and image morphology operation on the acquired image, completing the preprocessing of the image data and providing high-quality image data for the subsequent main processing process;
s3, image processing, including image saliency target detection processing, and smoothing and morphological operation processing;
s4, image segmentation, namely performing target segmentation on the final image from S3 through adaptive threshold segmentation to obtain a region of interest;
wherein the step of detecting the image salient object comprises the following steps,
a1, performing BMBD algorithm processing;
a2, based on the processing result of A1, performing FMBD algorithm processing again;
a3, fusing the two saliency detection maps obtained in A1 and A2 to obtain a BF_MBDS map for saliency detection;
the steps of the BMBD algorithm processing are,
b1, preprocessing the input image, including denoising and color space conversion;
b2, performing channel separation on the processed image;
b3, setting initial seed points: the pixels around the periphery of the image are set as seed points, and diffusion is performed from these seed points;
b4, diffusing from the periphery toward the center in a four-neighborhood manner, performing one ring of diffusion updates at a time;
b5, in the next round of diffusion, pixels updated in the previous round are updated again in the same way, until all pixels have been traversed and updated;
b6, respectively obtaining the diffusion images BMBD_L, BMBD_a and BMBD_b of the three channels;
b7, fusing the images BMBD_L, BMBD_a and BMBD_b obtained in B6 by taking the maximum value to obtain a BMBD_Lab image;
b8, further processing the BMBD_Lab image obtained in B7, mainly by smoothing and contrast enhancement, to obtain a final BMBD_map detection image;
the FMBD algorithm process has the steps of,
c1, preprocessing the input image, including denoising and color space conversion;
c2, channel separation is carried out on the processed image;
c3, setting initial seed points: based on the BMBD algorithm detection map, the pixels with the highest confidence, i.e. the highest saliency values, are selected, and diffusion is performed with these pixels as the initialized seed points;
c4, diffusing outward from the seed region toward the periphery in a four-neighborhood manner, performing one ring of diffusion updates at a time;
c5, in the next round of diffusion, pixels updated in the previous round are updated again in the same way, until all pixels have been traversed and updated;
c6, obtaining the diffusion images FMBD_L, FMBD_a and FMBD_b of the three channels respectively;
c7, fusing the images FMBD_L, FMBD_a and FMBD_b obtained in C6 by taking the maximum value to obtain an FMBD_Lab image;
and C8, further processing the FMBD_Lab image obtained in C7, mainly by smoothing and contrast enhancement, to obtain a final FMBD_map detection image.
2. The image processing method according to claim 1, characterized in that: the detection of the image salient object in the step S3 mainly includes selection of seed points, determination of diffusion order, and image fusion of different channels.
3. An image processing system comprises an image acquisition device, an image transmission module, an image processing device, an image display module and an image storage module, and is characterized in that:
the image acquisition device acquires data in the scene that the user needs to detect and segment, and can be configured, according to user requirements, for single-frame image acquisition or for video acquisition of different video sequences;
the image transmission module is used for transmitting the acquired data;
the image processing device processes the image transmitted by the image transmission module by adopting the image processing method of claim 1;
the image storage module is used for storing the image data transmitted by the image transmission module and the intermediate data in the processing process of the image processing device;
and the image display module displays the processed result in real time for a user to observe.
4. The image processing system according to claim 3, characterized in that: the processing procedure of the image processing apparatus mainly includes the following steps,
d1, preprocessing the transmission data to provide high-quality data for a later detection algorithm;
d2, processing the image transmitted by the transmission module according to the detection algorithm of claim 1 to obtain a detection image;
and D3, performing segmentation processing on the detection image obtained in the step D2 to obtain a final segmentation map.
5. The image processing system according to claim 3, characterized in that: the image transmission module adopts two data transmission modes, wired and wireless, to meet the requirements of different users under different application conditions.
CN201810372478.5A 2018-04-24 2018-04-24 Image processing method and system Active CN108596893B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810372478.5A CN108596893B (en) 2018-04-24 2018-04-24 Image processing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810372478.5A CN108596893B (en) 2018-04-24 2018-04-24 Image processing method and system

Publications (2)

Publication Number Publication Date
CN108596893A CN108596893A (en) 2018-09-28
CN108596893B true CN108596893B (en) 2022-04-08

Family

ID=63614931

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810372478.5A Active CN108596893B (en) 2018-04-24 2018-04-24 Image processing method and system

Country Status (1)

Country Link
CN (1) CN108596893B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114359288B (en) * 2022-03-22 2022-06-07 珠海市人民医院 Medical image cerebral aneurysm detection and positioning method based on artificial intelligence

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2641401B1 (en) * 2010-11-15 2017-04-05 Huawei Technologies Co., Ltd. Method and system for video summarization

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102945378A (en) * 2012-10-23 2013-02-27 西北工业大学 Method for detecting potential target regions of remote sensing image on basis of monitoring method
CN105761266A (en) * 2016-02-26 2016-07-13 民政部国家减灾中心 Method of extracting rectangular building from remote sensing image
CN106778903A (en) * 2017-01-09 2017-05-31 深圳市美好幸福生活安全系统有限公司 Conspicuousness detection method based on Sugeno fuzzy integrals
CN107123150A (en) * 2017-03-25 2017-09-01 复旦大学 The method of global color Contrast Detection and segmentation notable figure
CN107357834A (en) * 2017-06-22 2017-11-17 浙江工业大学 Image retrieval method based on visual saliency fusion
CN107330861A (en) * 2017-07-03 2017-11-07 清华大学 Image significance object detection method based on diffusion length high confidence level information

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Minimum Barrier Salient Object Detection at 80 FPS; Jianming Zhang et al.; 2015 IEEE International Conference on Computer Vision; 2016-02-18; 1404-1412 *
Study of visual saliency detection via nonlocal anisotropic diffusion equation; Xiujun Zhang et al.; Pattern Recognition; 2014-10-22; 1315-1327 *
Visual saliency detection: From space to frequency; Dongyue Chen et al.; Signal Processing: Image Communication; 2016-03-12; 57-68 *
Research on key technologies of scene recognition based on biological visual mechanisms; Chen Shuo; China Master's and Doctoral Dissertations Full-text Database (Doctoral), Information Science and Technology series; 2015-07-15 (No. 07); I138-131 *
Saliency detection by hierarchical graph fusion; Wang Huiling et al.; Journal of Frontiers of Computer Science and Technology; 2016-09-08; 1752-1762 *

Also Published As

Publication number Publication date
CN108596893A (en) 2018-09-28

Similar Documents

Publication Publication Date Title
CN109829443B (en) Video behavior identification method based on image enhancement and 3D convolution neural network
Pang et al. Visual haze removal by a unified generative adversarial network
CN106296725B (en) Moving target real-time detection and tracking method and target detection device
CN108062525B (en) Deep learning hand detection method based on hand region prediction
CN108154086B (en) Image extraction method and device and electronic equipment
CN111639564B (en) Video pedestrian re-identification method based on multi-attention heterogeneous network
CN111724400B (en) Automatic video matting method and system
JP2005513656A (en) Method for identifying moving objects in a video using volume growth and change detection masks
CN109063667B (en) Scene-based video identification mode optimization and pushing method
Wang et al. Background extraction based on joint gaussian conditional random fields
Palou et al. Occlusion-based depth ordering on monocular images with binary partition tree
CN114022823A (en) Shielding-driven pedestrian re-identification method and system and storable medium
CN111460964A (en) Moving target detection method under low-illumination condition of radio and television transmission machine room
CN113065568A (en) Target detection, attribute identification and tracking method and system
CN112926388A (en) Campus violent behavior video detection method based on action recognition
CN114627269A (en) Virtual reality security protection monitoring platform based on degree of depth learning target detection
CN117409476A (en) Gait recognition method based on event camera
CN111597978A (en) Method for automatically generating pedestrian re-identification picture based on StarGAN network model
CN111583357A (en) Object motion image capturing and synthesizing method based on MATLAB system
CN117710868B (en) Optimized extraction system and method for real-time video target
CN108596893B (en) Image processing method and system
CN102510437B (en) Method for detecting background of video image based on distribution of red, green and blue (RGB) components
CN113538304A (en) Training method and device of image enhancement model, and image enhancement method and device
CN110852172B (en) Method for expanding crowd counting data set based on Cycle Gan picture collage and enhancement
Teixeira et al. Object segmentation using background modelling and cascaded change detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant