CN111369557B - Image processing method, device, computing equipment and storage medium
- Publication number: CN111369557B
- Application number: CN202010241994.1A
- Authority: CN (China)
- Legal status: Active
Classifications
- G06T7/0002 — Image analysis; inspection of images, e.g. flaw detection
- G06T5/40 — Image enhancement or restoration using histogram techniques
- G06T7/136 — Segmentation; edge detection involving thresholding
- G06T2207/10016 — Image acquisition modality: video; image sequence
- G06T2207/10048 — Image acquisition modality: infrared image
- G06T2207/20172 — Special algorithmic details: image enhancement details
Abstract
The application discloses an image processing method, an image processing apparatus, a computing device and a storage medium. The image processing method comprises the following steps: performing saliency analysis on a gray-scale image of a target image to be processed of a fire video to obtain a saliency gray-scale image; performing threshold segmentation processing on the saliency gray-scale image based on the gray values of the pixels in the saliency gray-scale image to obtain a saliency binary image; and obtaining a target binary image corresponding to the target image based on the saliency binary image. By effectively combining multiple image characteristics, the scheme improves video image processing efficiency and thereby provides support for more efficient fire monitoring and/or fire analysis.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing device, a computing device, and a storage medium.
Background
Among the various kinds of disasters, fire is one of the main threats to public safety and social development. With the advance of technology, fire monitoring technology has developed rapidly; accurate early warning and analysis can guide firefighters to extinguish a fire quickly, so that many losses can be averted.
At present, infrared thermal imaging technology is applied to fire monitoring to a certain extent: by continuously monitoring and analyzing fire characteristics, the fire origin, the fire spreading trend and the like can be analyzed, so as to objectively and accurately evaluate fire losses, organize disaster relief, and so on. However, owing to its imaging and detection principles, an infrared thermal image suffers from low contrast, a low signal-to-noise ratio and other defects, and needs subsequent image processing before it can conveniently support fire monitoring, early warning and analysis.
Therefore, how to improve image processing so as to support rapid and accurate fire early warning and fire analysis is a technical problem to be solved.
Disclosure of Invention
The application aims to provide an image processing method, an image processing device, a computing device and a storage medium, so as to provide support for realizing rapid and accurate fire early warning and fire analysis.
In a first aspect, an embodiment of the present application provides an image processing method, including:
performing saliency analysis on a gray-scale image of a target image to be processed of a fire video to obtain a saliency gray-scale image;
performing threshold segmentation processing on the saliency gray-scale image based on the gray values of the pixels in the saliency gray-scale image to obtain a saliency binary image;
and obtaining a target binary image corresponding to the target image based on the saliency binary image.
In one embodiment, performing saliency analysis on the gray-scale image of the target image to obtain the saliency gray-scale image comprises:
analyzing a gray histogram of the gray-scale image of the target image, and determining the gray level corresponding to each pixel in the gray-scale image and the gray-level frequency corresponding to each gray level;
and processing the gray value of each pixel in the gray-scale image of the target image based on the gray level of each pixel and the gray-level frequency corresponding to each gray level, to obtain the saliency gray-scale image.
In one embodiment, processing the gray value of each pixel in the gray-scale image of the target image based on the gray level of each pixel and the gray-level frequency corresponding to each gray level, to obtain the saliency gray-scale image, comprises:
for each pixel, determining the distance between the gray level of the pixel and each of the remaining gray levels;
and determining, as the gray value of the pixel, the sum of the products of the gray-level frequency of each gray level and the distance between that gray level and the gray level of the pixel, so as to obtain the saliency gray-scale image.
In one embodiment, obtaining the target binary image corresponding to the target image based on the saliency binary image comprises:
performing an AND operation on the saliency binary image and a G-component binary image corresponding to the target image to obtain the target binary image, wherein the G-component binary image is obtained by extracting the G-component channel from the target image.
In one embodiment, the method further comprises:
and carrying out enhancement processing on the target image based on an adaptive gray histogram equalization algorithm.
In one embodiment, after obtaining the target binary image and/or obtaining the enhanced target image, the method further comprises:
synthesizing the multiple frames of target binary images obtained after processing into a corresponding target binary video; and/or
synthesizing the multiple frames of target images obtained after enhancement processing into a corresponding enhanced target video.
In one embodiment, synthesizing the images into corresponding videos includes:
if the frame rate of the fire video is greater than or equal to a preset video output frame rate, synthesizing the target binary images or the enhanced target images into a corresponding video based on the frame rate of the fire video;
if the frame rate of the fire video is lower than the preset video output frame rate, synthesizing the target binary images or the enhanced target images into a corresponding video based on the video output frame rate.
In one embodiment, the fire video is an infrared thermography video and the target image is an infrared thermography image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the saliency analysis unit is used for carrying out saliency analysis on the gray level image of the target image to be processed of the fire video to obtain a saliency gray level image;
the threshold segmentation unit is used for carrying out threshold segmentation processing on the saliency gray image based on the gray values of pixels in the saliency gray image to obtain a saliency binary image;
and the image processing unit is used for obtaining a target binary image corresponding to the target image based on the saliency binary image.
In one embodiment, the saliency analysis unit is configured to:
analyze a gray histogram of the gray-scale image of the target image, and determine the gray level corresponding to each pixel in the gray-scale image and the gray-level frequency corresponding to each gray level;
and process the gray value of each pixel in the gray-scale image of the target image based on the gray level of each pixel and the gray-level frequency corresponding to each gray level, to obtain the saliency gray-scale image.
In one embodiment, the gray value processing unit is configured to:
for each pixel, determine the distance between the gray level of the pixel and each of the remaining gray levels;
and determine, as the gray value of the pixel, the sum of the products of the gray-level frequency of each gray level and the distance between that gray level and the gray level of the pixel, so as to obtain the saliency gray-scale image.
In one embodiment, the image processing unit is configured to:
perform an AND operation on the saliency binary image and a G-component binary image corresponding to the target image to obtain the target binary image, wherein the G-component binary image is obtained by extracting the G-component channel from the target image.
In one embodiment, the apparatus further comprises:
and the enhancement processing unit is used for carrying out enhancement processing on the target image based on an adaptive gray histogram equalization algorithm.
In an embodiment, the apparatus further comprises a video synthesis unit for:
synthesizing the multiple frames of target binary images obtained after processing into a corresponding target binary video; and/or
synthesizing the multiple frames of target images obtained after enhancement processing into a corresponding enhanced target video.
In one embodiment, the video synthesis unit is configured to:
if the frame rate of the fire video is greater than or equal to a preset video output frame rate, synthesize the target binary images or the enhanced target images into a corresponding video based on the frame rate of the fire video;
if the frame rate of the fire video is lower than the preset video output frame rate, synthesize the target binary images or the enhanced target images into a corresponding video based on the video output frame rate.
In one embodiment, the fire video is an infrared thermography video and the target image is an infrared thermography image.
In a third aspect, another embodiment of the application also provides a computing device comprising at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute any image processing method provided by the embodiment of the application.
In a fourth aspect, another embodiment of the present application further provides a computer storage medium, where the computer storage medium stores computer-executable instructions for causing a computer to perform any one of the image processing methods in the embodiments of the present application.
According to the image processing scheme provided by the embodiments of the application, a simple, low-complexity algorithm structure effectively integrates multiple kinds of image characteristic information at a small computational cost, improving the accuracy of image segmentation and thereby providing support for more efficient fire monitoring and/or fire analysis. In addition, the scheme processes the fire video automatically, without manual auxiliary operation, which can greatly reduce labor cost.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic illustration of an application environment according to one embodiment of the application;
FIG. 2 is a flow chart of an image processing method according to an embodiment of the application;
FIGS. 3A, 3B, and 3C are examples of processed images according to embodiments of the present application;
FIG. 4 is a schematic diagram of an image processing flow according to one embodiment of the application;
FIGS. 5A and 5B are examples of processed images according to embodiments of the present application;
FIG. 6 is a schematic diagram of an image enhancement process flow according to one embodiment of the present application;
FIG. 7 is an example of a processed image according to an embodiment of the application;
FIG. 8 is a schematic diagram of an implementation principle of an image processing system according to an embodiment of the present application;
fig. 9 is a schematic diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a computing device according to one embodiment of the application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application.
FIG. 1 is a schematic diagram of an application environment according to one embodiment of the application.
As shown in fig. 1, the application environment may include at least one server 20 and a plurality of terminal devices 10. The terminal device 10 can transmit and receive information to and from the server 20 via the network 40. The server 20 may obtain the content required by the terminal device 10 by accessing the database 30. The terminal devices (e.g., between 10_1 and 10_2 or 10_n) may also communicate with each other via the network 40. Network 40 may be a broad network for information transfer and may include one or more communication networks such as a wireless communication network, the internet, a private network, a local area network, a metropolitan area network, a wide area network, or a cellular data network. In one embodiment, network 40 may also comprise a satellite network, whereby GPS signals of terminal device 10 are transmitted to server 20. It should be noted that the underlying concepts of the exemplary embodiments of this invention are not altered if additional modules are added to or individual modules are removed from the illustrated environment. In addition, although a bi-directional arrow from the database 30 to the server 20 is shown for ease of illustration, it will be understood by those skilled in the art that the above-described data transmission and reception may also be implemented through the network 40.
Terminal device 10 is any suitable electronic device that may be used for network access, including but not limited to a computer, a notebook, a smart phone, a tablet, or another type of device. Server 20 is any server capable of providing, through network access, the information required by an interactive service. In the following description, one terminal device or a portion of them will be selected for description (e.g., terminal device 10_1); those skilled in the art will understand that the single terminal device described is intended to represent the large number of terminals present in a real network, and that the single server 20 and database 30 illustrated are intended to represent that the technical solution of the present invention may involve server and database operations. Describing specifically numbered terminal devices and individual servers and databases is merely for convenience of illustration and does not imply limitations on the types or locations of terminal devices, servers, and the like.
In one embodiment, the terminal device is capable of acquiring a target image, and also capable of outputting the target image or a corresponding video. The system for performing image processing may be configured on the terminal device side as shown in fig. 1, may be configured on the server side, or may be configured with a part of functions on the terminal device side and a part of functions on the server side, which is not limited in the present application.
Fig. 2 is a flow chart of an image processing method according to an embodiment of the application.
As shown in fig. 2, in step S210, saliency analysis is performed on a gray-scale image of a target image to be processed of a fire video, to obtain a saliency gray-scale image.
Here, the fire video may be video collected in any fire monitoring application scene, such as a forest fire monitoring scene, a gas station daily monitoring scene, a straw burning scene, and so on. The fire video may be captured in real time by a camera device, or may be historical video data obtained from a related storage medium; the application is not limited in this respect.
In one embodiment, the fire video may be an infrared thermal imaging video acquired based on infrared thermal imaging technology, and the target image to be processed may be an infrared thermal image. Taking a forest fire monitoring scene as an example, the obtained target image may be an infrared thermal image as shown in fig. 3A. Infrared thermal imaging converts the radiation energy of a detected object, through system processing, into a thermal image that gives the temperature distribution of the object, from which the state of the object can be judged. Based on infrared thermal imaging, if a fire occurs in a monitored place (such as a forest, a gas station, a residential building and the like), fire characteristics can be analyzed from the collected infrared thermal imaging video or images, so that the fire origin, the fire spreading trend and the like can be determined, and fire losses can be evaluated and disaster relief organized objectively and accurately. From the acquired fire video, the target image to be processed can be obtained by extracting video frames. In practice, for example, video frames may be acquired from the fire video frame by frame as target images to be processed. It should be understood that this is merely illustrative and not limiting; in other embodiments, target images may be acquired in a frame-skipping manner (e.g., every 24 frames) as desired, and the application is not limited in this respect.
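By way of illustration only, this frame-extraction step might be sketched as follows, assuming OpenCV as the video-reading library (the patent names no library); the function name extract_frames and the frame_step parameter are hypothetical:

```python
import cv2

def extract_frames(video_path, frame_step=1):
    """Yield target images from a fire video: frame_step=1 reads frame by
    frame, while a larger step (e.g. 24) realizes the frame-skipping
    acquisition mentioned above."""
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % frame_step == 0:
            yield frame  # a BGR frame, i.e. one target image to be processed
        index += 1
    cap.release()
```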
After that, the extracted target image may be converted into a grayscale image. As an example, the target image may be converted into a corresponding gray-scale image by, for example, the following gray-scale conversion formula (1).
Gray_i = (R_i × 299 + G_i × 587 + B_i × 114 + 500) / 1000 (1)
wherein Gray_i is the gray value of pixel i; R_i, G_i and B_i respectively represent the values of the R, G and B channels of the infrared thermal image at pixel i; M represents the total number of pixels of the target image; i and M are positive integers, and 1 ≤ i ≤ M. It should be understood that the above gray conversion formula is merely illustrative and not limiting; in other embodiments, a conventional gray conversion formula may be adopted, or the parameters involved in the formula may be adjusted according to service requirements, which is not limited by the present application.
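A minimal sketch of formula (1) in code, assuming frames arrive in the B, G, R channel order used by OpenCV; the integer +500 term implements rounding before the division by 1000:

```python
import numpy as np

def to_gray(image_bgr):
    """Gray conversion per formula (1): Gray = (R*299 + G*587 + B*114 + 500)/1000."""
    b = image_bgr[:, :, 0].astype(np.int32)
    g = image_bgr[:, :, 1].astype(np.int32)
    r = image_bgr[:, :, 2].astype(np.int32)
    gray = (r * 299 + g * 587 + b * 114 + 500) // 1000
    return gray.astype(np.uint8)
```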
It should be understood that in the embodiment of the present application, the method of acquiring the target image from the video and converting the target image into the corresponding gray scale image is merely illustrative, and not limiting, and in other embodiments, for example, the gray scale image of the target image may be directly acquired and the acquired gray scale image may be subjected to subsequent image processing, which is not described herein.
To ensure the accuracy of subsequent image processing, after the gray-scale image of the target image is obtained and before saliency analysis is performed on it, morphological filtering may further be performed on the gray-scale image to remove image noise. For example, the image shown in fig. 3B is obtained by morphologically filtering the target image shown in fig. 3A. The morphological filtering may be, for example, a morphological erosion operation, a morphological dilation operation, or the like, performed on the gray-scale image of the target image. In other embodiments, other denoising or enhancement processing, such as median filtering or mean filtering, may be performed on the gray-scale image of the target image, which is not limited by the present application.
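For instance, a morphological opening (an erosion followed by a dilation, i.e. one possible combination of the operations named above) can be sketched as follows; the 3x3 kernel size is an illustrative assumption:

```python
import cv2
import numpy as np

def morphological_filter(gray, kernel_size=3):
    """Morphological opening to suppress small bright noise in the gray image."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    return cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
```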
Then, saliency analysis is performed on the gray-scale image corresponding to the target image to obtain a saliency gray-scale image.
In implementation, the gray level corresponding to each pixel in the gray-scale image and the gray-level frequency corresponding to each gray level can be determined by analyzing the gray histogram of the gray-scale image of the target image, and the gray value of each pixel in the gray-scale image of the target image is processed based on the gray level of each pixel and the gray-level frequency corresponding to each gray level, to obtain the saliency gray-scale image.
Specifically, the gray histogram of the gray-scale image may be analyzed and the image gray-level range counted. For each pixel in the gray-scale image whose gray level is l_j, the distance between l_j and every other gray level l_i may be determined, and the sum over all gray levels l_i of the products of the gray-level frequency f_i and the distance d(l_i, l_j) is determined as the gray value of the pixel, for example as shown in the following formula (2):
S(l_j) = Σ_{i=0}^{255} f_i · d(l_i, l_j) (2)
wherein S(l_j) represents the saliency value corresponding to gray level l_j, which in the present application is taken as the gray value of the corresponding pixels in the saliency gray-scale image; d(l_i, l_j) represents the distance between gray level l_i and gray level l_j; f_i = n_i / M represents the gray-level frequency corresponding to gray level l_i, where n_i represents the number of pixels whose gray level is l_i and M the total number of pixels; and 0 ≤ l_i ≤ 255, 0 ≤ l_j ≤ 255.
In this way, the saliency computation is simplified: only the gray level corresponding to each pixel and the gray-level frequency corresponding to each gray level need to be counted, and the distance between any two gray levels can be computed once and cached, so that repeated calculation is avoided and image processing is accelerated.
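A sketch of formula (2) under two stated assumptions that the text above does not fix: the gray-level distance d is taken as the absolute difference, and the saliency values are normalized to 0-255 so they can serve as gray values of a displayable image; the 256x256 distance table is computed once, as suggested above:

```python
import numpy as np

def saliency_gray(gray):
    """Histogram-contrast saliency per formula (2): S(l_j) = sum_i f_i * d(l_i, l_j)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    freq = hist / gray.size                            # f_i = n_i / M
    levels = np.arange(256)
    dist = np.abs(levels[:, None] - levels[None, :])   # d(l_i, l_j), cached once
    sal = dist @ freq                                  # S(l_j) for every gray level
    sal = np.round(255 * sal / max(sal.max(), 1e-9)).astype(np.uint8)
    return sal[gray]                                   # look up each pixel by its level
```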
In step S220, threshold segmentation processing is performed on the saliency gray-scale image based on the gray values of the pixels in the saliency gray-scale image, to obtain a saliency binary image (as shown in fig. 3C). The saliency binary image comprises two parts: the monitored fire area and the background area.
As an example, the saliency gray-scale image may be subjected to threshold segmentation processing by, for example, the maximum inter-class variance method (Otsu's method).
Taking an infrared thermal image as an example, if a fire occurs, the two parts of the acquired target image, namely the fire area serving as the target and the background area, differ greatly. The premise of the maximum inter-class variance method (Otsu) is that the larger the difference between target and background, the larger the inter-class variance. Using Otsu's method, all gray values in the saliency gray-scale image are traversed, and the gray value that minimizes the intra-class variance and maximizes the inter-class variance is selected as the threshold for image segmentation, dividing the image into target (i.e., the fire area) and background. The method therefore involves little computation, can reduce the probability of erroneous segmentation, improves the accuracy of threshold segmentation, and provides support for improving the accuracy of subsequent fire monitoring and/or fire analysis.
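Expressed with OpenCV (an assumption; the patent names only the method, not a library), Otsu threshold segmentation of the saliency gray-scale image reduces to:

```python
import cv2

def otsu_binarize(gray):
    """Otsu's method selects the threshold that maximizes the inter-class
    variance (equivalently, minimizes the intra-class variance)."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary  # two parts: fire area and background area
```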
In step S230, a target binary image corresponding to the target image is obtained based on the saliency binary image.
Here, the saliency binary image may be used as the target binary image corresponding to the target image. The target binary image comprehensively considers the gray information and the saliency information of the target image, which can improve the accuracy of subsequent image segmentation. In addition, performing morphological filtering on the image before threshold segmentation improves the signal-to-noise ratio of the video image and the robustness to noise, providing support for subsequently improving the accuracy of image segmentation so as to better monitor and/or analyze the fire.
In one embodiment, in the image processing process, more feature information of the target image can be acquired, so as to improve the accuracy of image segmentation. For example, component channel information may be extracted for a target image, and the extracted component channel information may be combined to obtain a target binary image.
Taking an infrared thermal image as an example, and considering the imaging and analysis principles of infrared thermal images, the G component can be extracted from the target image to obtain a G-component image of the target, and morphological filtering and threshold segmentation are performed on the G-component image to obtain a corresponding G-component binary image. An AND operation is then performed on the saliency binary image and the G-component binary image corresponding to the target image, to obtain the target binary image corresponding to the target image.
Fig. 4 is a schematic diagram of an image processing flow according to an embodiment of the present application.
As shown in fig. 4, in step S401, a target image to be processed may be acquired. For example, a fire video is acquired from an image pickup apparatus or a storage medium, and video frames are acquired from the fire video frame by frame as target images to be processed.
As shown on the left side of fig. 4, a saliency binary image is acquired according to the processing flow shown in fig. 2. Specifically, in step S402, the target image is converted into a gray-scale image. In step S403, morphological filtering is performed on the gray-scale image of the target image. In step S404, saliency analysis is performed on the filtered gray-scale image to obtain a saliency gray-scale image. In step S405, a threshold segmentation algorithm is applied to the saliency gray-scale image to obtain a saliency binary image.
As shown on the right side of fig. 4, a G-component binary image corresponding to the target image is acquired. Specifically, in step S406, the G component channel of the target image is extracted to obtain a G component image. In step S407, morphological filtering processing is performed on the acquired G component image, and a processed G component image is obtained. In step S408, a threshold segmentation algorithm is used to perform threshold segmentation processing on the G component image after the filtering processing, so as to obtain a G component binary image, as shown in fig. 5A.
Then, in step S409, an AND operation is performed on the saliency binary image and the G-component binary image corresponding to the target image, to obtain the target binary image. Fig. 5B shows the target binary image obtained by performing an AND operation on the saliency binary image shown in fig. 3C and the G-component binary image shown in fig. 5A.
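Putting the two branches of fig. 4 together, a minimal end-to-end sketch reusing the illustrative helpers above (all names hypothetical) might read:

```python
import cv2

def target_binary(image_bgr):
    """Sketch of steps S402-S409 in fig. 4."""
    # Left branch: gray conversion -> morphological filter -> saliency -> Otsu
    gray = to_gray(image_bgr)
    sal_bin = otsu_binarize(saliency_gray(morphological_filter(gray)))
    # Right branch: G-component channel -> morphological filter -> Otsu
    g_channel = image_bgr[:, :, 1]          # OpenCV stores channels as B, G, R
    g_bin = otsu_binarize(morphological_filter(g_channel))
    # Step S409: per-pixel AND of the two binary images
    return cv2.bitwise_and(sal_bin, g_bin)
```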
The obtained target binary image may comprise a target area (such as a fire area) and a background area. Based on the target binary image, the fire can be monitored, and the ignition point and the degree of fire spreading can be analyzed by analyzing fire characteristics.
The convention used by the AND operation can be preset according to the algorithm; the application does not limit its specific implementation.
For example, the AND operation may be defined such that, if the pixel values of the saliency binary image and the G-component binary image at the same pixel position are both 1, the pixel value of the target binary image at that position is determined to be 1; if the two pixel values at the same position differ, or are both 0, the pixel value of the target binary image at that position is determined to be 0. The resulting target binary image then has a black background and a white fire area, and fire monitoring and/or fire analysis can be performed through feature analysis of the white pixels.
Alternatively, the AND operation may be defined such that, if the pixel values of the saliency binary image and the G-component binary image at the same pixel position are both 0, the pixel value of the target binary image at that position is determined to be 0; if the two pixel values at the same position differ, or are both 1, the pixel value of the target binary image at that position is determined to be 1. The resulting target binary image then has a white background and a black fire area, and fire monitoring and/or fire analysis can be performed through feature analysis of the black pixels.
Therefore, according to the embodiment of the application, the accuracy of image segmentation is improved by designing a simple and low-complexity algorithm structure and effectively integrating various image characteristic information with smaller calculation amount, so that support is provided for improving the efficiency of fire monitoring and/or fire analysis. In addition, the image processing scheme can realize the automation of the processing operation of the fire video without manual auxiliary operation, and can greatly reduce the labor cost.
In addition, in the embodiment of the application, for example, enhancement processing can be performed on the target image so as to improve the local contrast of the image and enhance the image edge information. The enhanced target image can be visually output to assist in fire monitoring and/or fire analysis.
Fig. 6 is a schematic diagram of an image enhancement process according to an embodiment of the present application. The target image may be enhanced based, for example, on an adaptive gray-histogram equalization algorithm. Some of the following processing steps are the same as or similar to those shown in fig. 4; their details can be found in the description above in connection with fig. 4 and are not repeated here.
Referring to fig. 6, in step S601, a target image to be processed may be acquired. In step S602, morphological filtering is performed on the target image. In step S603, the morphologically filtered image is enhanced based on an adaptive histogram equalization algorithm (AHE): the input image is divided evenly into a number of rectangular local regions, and the brightness values of the image are redistributed by calculating histograms over these local regions, so as to change the image contrast, improve the local contrast of the image, enhance image edge information, and so on, yielding the enhanced target image. For example, fig. 7 shows the target image of fig. 3A enhanced based on the AHE algorithm.
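As a sketch of step S603: OpenCV exposes CLAHE, a contrast-limited variant of adaptive histogram equalization, used below as a stand-in for the AHE step; the clip limit and the rectangular tile grid size are illustrative assumptions:

```python
import cv2

def enhance(gray, grid=(8, 8)):
    """Local histogram equalization over rectangular regions of the image."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=grid)
    return clahe.apply(gray)  # enhanced target image
```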
Thus, the enhancement processing improves the contrast of the target image; the enhanced image can serve as an auxiliary output in which fire areas and background areas are easier to distinguish, for better fire monitoring and/or fire analysis.
In addition, in the embodiments of the application, the algorithm structure of the whole image processing system can take the fire video as input, and output the target binary video corresponding to the target binary images and the enhanced target video corresponding to the enhanced target images. This algorithm structure processes the fire video automatically, without manual auxiliary operation. Relevant personnel can analyze the fire origin, the fire spreading trend and the like according to the output target binary video and/or enhanced target video and the fire characteristics determined during image processing, so as to objectively and accurately evaluate fire losses, organize disaster relief, and so on.
Fig. 8 is a schematic diagram of an implementation principle of an image processing system according to an embodiment of the present application.
As shown in fig. 8, the image processing system 800 may include, for example, an image acquisition module 810, a binary image processing module 820, an image enhancement processing module 830, and a video compositing module 840. The connection lines in the figure represent that information interaction exists between the unit modules, and the connection lines can be wired connection, wireless connection or any form of connection capable of carrying out information transmission.
Similar to the foregoing description of the image processing method, the image acquisition module 810 may be used to acquire the target image to be processed: it may acquire a fire video from an image capturing apparatus or a storage medium and extract target images from the fire video. The binary image processing module 820 may be used to obtain the target binary image. The image enhancement processing module 830 may perform enhancement processing on the target image to improve its contrast, signal-to-noise ratio, and so on. The video synthesis module 840 may synthesize the processed multi-frame target binary images into a corresponding target binary video, and may also synthesize the enhanced multi-frame target images into a corresponding enhanced target video. The resulting target binary video and enhanced target video can be stored and visually output for fire monitoring and/or analysis.
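What the video synthesis module does can be sketched with OpenCV's VideoWriter; the codec choice and the function name are illustrative assumptions:

```python
import cv2

def synthesize_video(frames, out_path, fps):
    """Write processed frames to a video file at the chosen frame rate."""
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for f in frames:
        if f.ndim == 2:  # single-channel frames (e.g. target binary images)
            f = cv2.cvtColor(f, cv2.COLOR_GRAY2BGR)
        writer.write(f)
    writer.release()
```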
In one embodiment, the frame rate of the fire video may not be fully adapted to the frame rate corresponding to the visual output video due to differences in device specifications or performance, etc. Therefore, to ensure the smoothness of the playing of the obtained video, the video composition module 840 may determine whether the frame rate of the acquired fire video is less than a preset video output frame rate, such as 25 frames/second or 30 frames/second, when composing the corresponding video.
If the frame rate of the fire video is greater than or equal to the preset video output frame rate, the target binary image or the target image obtained after the enhancement processing can be synthesized into a corresponding video based on the frame rate of the fire video. If the frame rate of the fire video is smaller than the preset video output frame rate, the target binary image or the target image obtained after the enhancement processing can be synthesized into a corresponding video based on the video output frame rate. Therefore, the playing fluency of the synthesized video is guaranteed.
In practice, if the frame rate of the fire video is lower than the preset video output frame rate and the video is synthesized directly at the preset output frame rate, playback fluency improves, but such compressed synthesis shortens the playing time of the resulting video compared with the original fire video; this can mislead information such as the fire-analysis time points and seriously affect the accuracy of fire analysis.
For this reason, when the frame rate of the fire video is lower than the preset video output frame rate, the missing frames in each second of video may be filled in by interpolation when the corresponding video is synthesized. For example, if the frame rate of the fire video is 16 frames/second and the preset video output frame rate is 25 frames/second, 9 frames need to be added to each second of video during synthesis. Since 16/9 ≈ 2, one frame can be duplicated after every other frame; if the end of the second is reached and frames are still missing, the last frame can be duplicated again for filling.
For example, representing each frame of one second of video by a letter: for the original fire video at 16 frames/second, the frames contained in one second are ABCDEFGHIJKLMNOP. If the corresponding video is synthesized at the preset output frame rate of 25 frames/second by compression, the duration of the synthesized video is shortened compared with the original fire video. If instead the corresponding video is synthesized at 25 frames/second with frame filling, the frames contained in one second become AABCCDEEFGGHIIJKKLMMNOOPP. In this way, the original video duration is preserved as far as possible, and misleading information such as fire-analysis time points is avoided. Meanwhile, since adjacent frames differ very little in the originally captured video, completing the video by duplicating frames does not introduce significant error into fire monitoring and/or fire analysis based on those frames.
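The filling rule just described (duplicate every other frame, then top up with copies of the last frame) can be sketched as follows; the function name is hypothetical:

```python
def pad_frames(frames, target_count):
    """Pad one second of frames up to target_count, reproducing the
    ABCDEFGHIJKLMNOP -> AABCCDEEFGGHIIJKKLMMNOOPP scheme above."""
    missing = target_count - len(frames)
    out = []
    for i, f in enumerate(frames):
        out.append(f)
        if missing > 0 and i % 2 == 0:  # duplicate every other frame
            out.append(f)
            missing -= 1
    while missing > 0:                  # still short: repeat the last frame
        out.append(frames[-1])
        missing -= 1
    return out
```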
In this way, on the basis of ensuring playback fluency, the target binary video and the enhanced target video obtained after image processing keep their duration as consistent as possible with that of the original fire video, avoiding misleading information such as fire-analysis time points and thus guaranteeing the accuracy of fire monitoring and/or fire analysis.
Thus, the image processing scheme of the application has been described in detail with reference to fig. 1-8, and by designing a simple and low-complexity algorithm structure, various image characteristic information is effectively integrated with a small calculation amount, so that the accuracy of image segmentation is improved, and support is provided for improving the efficiency of fire monitoring and/or fire analysis. In addition, the image processing scheme can realize the automation of the processing operation of the fire video without manual auxiliary operation, and can greatly reduce the labor cost.
Based on the same conception, the embodiment of the application also provides an image processing device.
Fig. 9 is a schematic diagram of an image processing apparatus according to an embodiment of the present application.
As shown in fig. 9, the image processing apparatus 900 may include:
a saliency analysis unit 910, configured to perform saliency analysis on a gray scale image of a target image to be processed of a fire video, so as to obtain a saliency gray scale image;
a threshold segmentation unit 920, configured to perform threshold segmentation processing on the saliency gray-scale image based on the gray values of the pixels in the saliency gray-scale image, to obtain a saliency binary image;
and an image processing unit 930 configured to obtain a target binary image corresponding to the target image based on the saliency binary image.
In one embodiment, the saliency analysis unit is configured to:
analyze a gray histogram of the gray-scale image of the target image, and determine the gray level corresponding to each pixel in the gray-scale image and the gray-level frequency corresponding to each gray level;
and process the gray value of each pixel in the gray-scale image of the target image based on the gray level of each pixel and the gray-level frequency corresponding to each gray level, to obtain the saliency gray-scale image.
In one embodiment, the gray value processing unit is configured to:
for each pixel, determine the distance between the gray level of the pixel and each of the remaining gray levels;
and determine, as the gray value of the pixel, the sum of the products of the gray-level frequency of each gray level and the distance between that gray level and the gray level of the pixel.
In one embodiment, the image processing unit is configured to:
perform an AND operation on the saliency binary image and a G-component binary image corresponding to the target image to obtain the target binary image, wherein the G-component binary image is obtained by extracting the G-component channel from the target image.
In one embodiment, the apparatus further comprises:
an enhancement processing unit, configured to perform enhancement processing on the target image based on an adaptive gray-histogram equalization algorithm.
In an embodiment, the apparatus further comprises a video synthesis unit, configured to: after the target binary image is obtained and/or the enhanced target image is obtained, synthesize the multiple frames of target binary images obtained after processing into a corresponding target binary video; and/or synthesize the multiple frames of target images obtained after enhancement processing into a corresponding enhanced target video.
In one embodiment, if the frame rate of the fire video is greater than or equal to a preset video output frame rate, the video synthesis unit synthesizes the target binary image or the target image obtained after the enhancement processing into a corresponding video based on the frame rate of the fire video;
if the frame rate of the fire video is smaller than the preset video output frame rate, the video synthesis unit synthesizes the target binary image or the target image obtained after the enhancement processing into a corresponding video based on the video output frame rate.
In one embodiment, the fire video is an infrared thermography video and the target image is an infrared thermography image.
The image processing device and the functional modules thereof can implement the above image processing scheme, and details of the implementation can be found in the above description related to fig. 1-8, which are not repeated here.
Having described an image processing method and apparatus according to an exemplary embodiment of the present application, next, a computing device according to another exemplary embodiment of the present application is described.
Those skilled in the art will appreciate that the various aspects of the application may be implemented as a system, method, or program product. Accordingly, aspects of the application may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.) or an embodiment combining hardware and software aspects may be referred to herein as a "circuit," module "or" system.
In some possible implementations, a computing device according to the application may include at least one processor, and at least one memory. Wherein the memory stores program code which, when executed by the processor, causes the processor to perform the steps in the image processing method according to various exemplary embodiments of the application described above in this specification. For example, the processor may perform the steps shown in fig. 2, 4, 6.
A computing device 130 according to such an embodiment of the application is described below with reference to fig. 10. The computing device 130 shown in fig. 10 is merely an example and should not be taken as limiting the functionality and scope of use of embodiments of the present application.
As shown in fig. 10, the computing device 130 is in the form of a general purpose computing device. Components of computing device 130 may include, but are not limited to: the at least one processor 131, the at least one memory 132, and a bus 133 connecting the various system components, including the memory 132 and the processor 131.
Bus 133 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, and a local bus using any of a variety of bus architectures.
Memory 132 may include readable media in the form of volatile memory such as Random Access Memory (RAM) 1321 and/or cache memory 1322, and may further include Read Only Memory (ROM) 1323.
Memory 132 may also include a program/utility 1325 having a set (at least one) of program modules 1324, such program modules 1324 include, but are not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Computing device 130 may also communicate with one or more external devices 134 (e.g., keyboard, pointing device, etc.), one or more devices that enable a user to interact with computing device 130, and/or any devices (e.g., routers, modems, etc.) that enable computing device 130 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 135. Moreover, computing device 130 may also communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through network adapter 136. As shown, network adapter 136 communicates with other modules for computing device 130 over bus 133. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in connection with computing device 130, including, but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
In some possible embodiments, aspects of an image processing method provided by the present application may also be implemented in the form of a program product, which includes a program code for causing a computer device to perform the steps of the image processing method according to the various exemplary embodiments of the present application described above when the program product is run on the computer device, for example, the computer device may perform the steps as shown in fig. 2, 4, and 6.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for image processing of embodiments of the present application may employ a portable compact disc read only memory (CD-ROM) and include program code and may run on a computing device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, the features and functions of two or more of the elements described above may be embodied in one element in accordance with embodiments of the present application. Conversely, the features and functions of one unit described above may be further divided into a plurality of units to be embodied.
Furthermore, although the operations of the methods of the present application are depicted in the drawings in a particular order, this is not required to either imply that the operations must be performed in that particular order or that all of the illustrated operations be performed to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims (9)
1. An image processing method, the method comprising:
performing saliency analysis on a gray level image of a target image to be processed of a fire video, to obtain a saliency gray level image;
performing threshold segmentation processing on the saliency gray level image based on the gray values of pixels in the saliency gray level image, to obtain a saliency binary image;
obtaining a target binary image corresponding to the target image based on the saliency binary image;
wherein performing the saliency analysis on the gray level image of the target image to obtain the saliency gray level image comprises:
analyzing a gray level histogram of the gray level image of the target image, and determining the gray level corresponding to each pixel in the gray level image and the gray level frequency corresponding to each gray level; and
processing the gray value of each pixel in the gray level image of the target image based on the gray level of each pixel and the gray level frequency corresponding to each gray level, to obtain the saliency gray level image;
wherein processing the gray value of each pixel in the gray level image of the target image based on the gray level of each pixel and the gray level frequency corresponding to each gray level, to obtain the saliency gray level image, comprises:
for each pixel, determining the distance between the gray level of the pixel and each of the remaining gray levels; and
determining, as the gray value of the pixel, the sum of the products of the gray level frequency corresponding to each remaining gray level and the distance between the gray level of the pixel and that remaining gray level, so as to obtain the saliency gray level image.
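For illustration only (not part of the claims): a minimal sketch of the gray-level saliency computation and threshold segmentation recited in claim 1, assuming 8-bit gray images and NumPy/OpenCV as the implementation stack; the function names and the choice of Otsu's method for the segmentation step are assumptions, not requirements of the claims.

```python
import numpy as np
import cv2

def saliency_gray_image(gray: np.ndarray) -> np.ndarray:
    """New gray value for a pixel at gray level i: the sum, over the
    remaining gray levels j, of freq(j) * |i - j|. Including j == i
    contributes zero, so summing over all levels is equivalent."""
    hist = np.bincount(gray.ravel(), minlength=256)
    freq = hist / hist.sum()                             # gray level frequencies
    levels = np.arange(256)
    dist = np.abs(levels[:, None] - levels[None, :])     # |i - j| for every pair
    sal = (dist * freq[None, :]).sum(axis=1)             # saliency per gray level
    sal = (sal - sal.min()) / (np.ptp(sal) + 1e-12) * 255  # stretch to [0, 255]
    return sal[gray].astype(np.uint8)                    # per-pixel lookup

def saliency_binary_image(sal_gray: np.ndarray) -> np.ndarray:
    # Otsu's method, assumed here as one possible threshold segmentation
    _, binary = cv2.threshold(sal_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```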
2. The method according to claim 1, wherein obtaining a target binary image corresponding to the target image based on the saliency binary image comprises:
performing an AND operation on the saliency binary image and a G component binary image corresponding to the target image to obtain the target binary image, wherein the G component binary image is obtained by extracting the G component channel from the target image and performing morphological processing and threshold segmentation processing on the extracted channel.
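As an illustrative sketch only: the AND operation of claim 2 could look as follows, where the specific morphological operation (an opening) and the Otsu segmentation of the G component channel are assumptions, since the claim does not fix either choice.

```python
import numpy as np
import cv2

def target_binary_image(target_bgr: np.ndarray,
                        saliency_binary: np.ndarray) -> np.ndarray:
    g = target_bgr[:, :, 1]              # G component channel (OpenCV stores BGR)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    g = cv2.morphologyEx(g, cv2.MORPH_OPEN, kernel)      # assumed morphological step
    _, g_binary = cv2.threshold(g, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.bitwise_and(saliency_binary, g_binary)    # AND of the two binary images
```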
3. The method according to claim 1, wherein the method further comprises:
and carrying out enhancement processing on the target image based on an adaptive gray histogram equalization algorithm.
4. The method according to claim 1 or 3, wherein, after obtaining the target binary image and/or the enhanced target image, the method further comprises:
synthesizing the processed multi-frame target binary images into a corresponding target binary video; and/or
synthesizing the multi-frame target images obtained after the enhancement processing into a corresponding enhanced target video.
5. The method of claim 4, wherein synthesizing the images into the respective videos comprises:
if the frame rate of the fire video is greater than or equal to a preset video output frame rate, synthesizing the target binary images, or the target images obtained after the enhancement processing, into a corresponding video based on the frame rate of the fire video; and
if the frame rate of the fire video is less than the preset video output frame rate, synthesizing the target binary images, or the target images obtained after the enhancement processing, into a corresponding video based on the video output frame rate.
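For illustration only: the frame-rate rule of claim 5 can be sketched as below; the MJPG codec, the output file name, and the function name are assumptions.

```python
import cv2

def synthesize_video(frames, fire_fps, preset_out_fps, path="output.avi"):
    # Claim 5's rule: keep the fire video's own frame rate when it is at
    # least the preset output rate; otherwise use the preset rate.
    fps = fire_fps if fire_fps >= preset_out_fps else preset_out_fps
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"MJPG"),
                             fps, (w, h), isColor=(frames[0].ndim == 3))
    for frame in frames:
        writer.write(frame)
    writer.release()
```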
6. The method of claim 1, wherein the fire video is an infrared thermographic video and the target image is an infrared thermographic image.
7. An image processing apparatus, comprising:
a saliency analysis unit, configured to perform saliency analysis on a gray level image of a target image to be processed of a fire video to obtain a saliency gray level image;
a threshold segmentation unit, configured to perform threshold segmentation processing on the saliency gray level image based on the gray values of pixels in the saliency gray level image to obtain a saliency binary image; and
an image processing unit, configured to obtain a target binary image corresponding to the target image based on the saliency binary image;
wherein, when performing the saliency analysis on the gray level image of the target image to obtain the saliency gray level image, the saliency analysis unit is specifically configured to:
analyze a gray level histogram of the gray level image of the target image, and determine the gray level corresponding to each pixel in the gray level image and the gray level frequency corresponding to each gray level; and
process the gray value of each pixel in the gray level image of the target image based on the gray level of each pixel and the gray level frequency corresponding to each gray level to obtain the saliency gray level image;
and wherein, when processing the gray value of each pixel in the gray level image of the target image based on the gray level of each pixel and the gray level frequency corresponding to each gray level to obtain the saliency gray level image, the saliency analysis unit is specifically configured to:
for each pixel, determine the distance between the gray level of the pixel and each of the remaining gray levels; and
determine, as the gray value of the pixel, the sum of the products of the gray level frequency corresponding to each remaining gray level and the distance between the gray level of the pixel and that remaining gray level, so as to obtain the saliency gray level image.
8. A computing device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed, enable the at least one processor to perform the image processing method according to any one of claims 1-6.
9. A computer storage medium storing computer executable instructions for causing a computer to perform the image processing method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010241994.1A (granted as CN111369557B) | 2020-03-31 | 2020-03-31 | Image processing method, device, computing equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111369557A CN111369557A (en) | 2020-07-03 |
CN111369557B (en) | 2023-09-15 |
Family
ID=71210810
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010241994.1A (granted as CN111369557B, active) | Image processing method, device, computing equipment and storage medium | 2020-03-31 | 2020-03-31 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111369557B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111798448A (en) * | 2020-07-31 | 2020-10-20 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for processing image |
CN113067960B (en) * | 2021-03-16 | 2022-08-12 | 合肥合芯微电子科技有限公司 | Image interpolation method, device and storage medium |
CN114638845B (en) * | 2022-03-21 | 2024-08-06 | 南京信息工程大学 | Quantum image segmentation method, device and storage medium based on double threshold values |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20030010530A (en) * | 2001-07-26 | 2003-02-05 | Canon Kabushiki Kaisha | Image processing method, apparatus and system |
WO2011127825A1 (en) * | 2010-04-16 | 2011-10-20 | Hangzhou Hikvision Software Co., Ltd. | Method and device for processing image contrast |
EP2747028A1 (en) * | 2012-12-18 | 2014-06-25 | Universitat Pompeu Fabra | Method for recovering a relative depth map from a single image or a sequence of still images |
CN108090885A (en) * | 2017-12-20 | 2018-05-29 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for processing image |
CN108665443A (en) * | 2018-04-11 | 2018-10-16 | China University of Petroleum (Beijing) | Infrared image sensitive region extraction method and device for mechanical equipment faults |
CN109242877A (en) * | 2018-09-21 | 2019-01-18 | Xinjiang University | Image segmentation method and device |
CN110490848A (en) * | 2019-08-02 | 2019-11-22 | 上海海事大学 | Infrared target detection method, apparatus and computer storage medium |
CN110532876A (en) * | 2019-07-26 | 2019-12-03 | Zongmu Technology (Shanghai) Co., Ltd. | Method, system, terminal and storage medium for detecting matter attached to a camera lens in night mode |
EP3598386A1 (en) * | 2018-07-20 | 2020-01-22 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for processing image |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
2023-08-15 | TA01 | Transfer of patent application right | Applicant after: Zhejiang Huagan Technology Co., Ltd., Room 201, Building A, Integrated Circuit Design Industrial Park, No. 858, Jianshe 2nd Road, Economic and Technological Development Zone, Xiaoshan District, Hangzhou City, Zhejiang Province, 311215. Applicant before: ZHEJIANG DAHUA TECHNOLOGY Co., Ltd., No. 1187 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province, 310053. |
| GR01 | Patent grant | |