CN111369557A - Image processing method, image processing device, computing equipment and storage medium - Google Patents

Image processing method, image processing device, computing equipment and storage medium Download PDF

Info

Publication number
CN111369557A
Authority
CN
China
Prior art keywords
image
target
gray level
gray
video
Prior art date
Legal status
Granted
Application number
CN202010241994.1A
Other languages
Chinese (zh)
Other versions
CN111369557B (en)
Inventor
郝浩
Current Assignee
Zhejiang Huagan Technology Co ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202010241994.1A priority Critical patent/CN111369557B/en
Publication of CN111369557A publication Critical patent/CN111369557A/en
Application granted granted Critical
Publication of CN111369557B publication Critical patent/CN111369557B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/40 Image enhancement or restoration by the use of histogram techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method, an image processing device, a computing device and a storage medium. The image processing method comprises the following steps: performing saliency analysis on a grayscale image of a target image to be processed from a fire video to obtain a saliency grayscale image; performing threshold segmentation processing on the saliency grayscale image based on the gray values of the pixels in the saliency grayscale image to obtain a saliency binary image; and obtaining a target binary image corresponding to the target image based on the saliency binary image. By effectively integrating multiple kinds of image features, the method improves the efficiency of video image processing and provides support for improving the efficiency of fire monitoring and/or fire analysis.

Description

Image processing method, image processing device, computing equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, a computing device, and a storage medium.
Background
Among the various kinds of disasters, fire is one of the major disasters threatening public safety and social development. With the development of science and technology, fire monitoring technology has advanced rapidly; accurate early warning and analysis can guide firefighters to extinguish a fire quickly and recover many losses.
At present, infrared thermal imaging technology is applied to fire monitoring to a certain extent: through continuous monitoring and analysis of fire characteristics, the fire point, the fire spreading trend and the like are analyzed, so that fire losses can be objectively and accurately evaluated and disaster relief organized. However, owing to its imaging and detection principles, an infrared thermal imaging image suffers from defects such as low contrast and a low signal-to-noise ratio, and must be post-processed before it can conveniently support fire monitoring, early warning and analysis.
Therefore, how to improve image processing so as to support fast and accurate fire early warning and fire analysis has become a technical problem to be solved urgently.
Disclosure of Invention
The application aims to provide an image processing method, an image processing device, a computing device and a storage medium, so as to provide support for fast and accurate fire early warning and fire analysis.
In a first aspect, an embodiment of the present application provides an image processing method, including:
performing saliency analysis on a grayscale image of a target image to be processed from a fire video to obtain a saliency grayscale image;
performing threshold segmentation processing on the saliency grayscale image based on the gray values of the pixels in the saliency grayscale image to obtain a saliency binary image;
and obtaining a target binary image corresponding to the target image based on the saliency binary image.
In one embodiment, performing saliency analysis on the grayscale image of the target image to obtain the saliency grayscale image comprises:
analyzing a gray histogram of the grayscale image of the target image, and determining the gray level corresponding to each pixel in the grayscale image and the gray level frequency corresponding to each gray level;
and processing the gray value of each pixel in the grayscale image of the target image based on the gray level of each pixel and the gray level frequency corresponding to each gray level, to obtain the saliency grayscale image.
In one embodiment, processing the gray value of each pixel in the grayscale image of the target image based on the gray level of each pixel and the gray level frequency corresponding to each gray level to obtain the saliency grayscale image includes:
determining, for each pixel, the Euclidean distance between the gray level of the pixel and each of the remaining gray levels;
and determining, as the gray value of the pixel, the sum of the products of the gray level frequency corresponding to each gray level and the Euclidean distance between that gray level and the gray level of the pixel, to obtain the saliency grayscale image.
In one embodiment, obtaining a target binary image corresponding to the target image based on the saliency binary image includes:
and performing AND operation on the significance binary image and a G component binary image corresponding to the target image to obtain the target binary image, wherein the G component binary image is obtained by extracting a G component channel from the target image.
In one embodiment, the method further comprises:
and performing enhancement processing on the target image based on a self-adaptive gray histogram equalization algorithm.
In one embodiment, after obtaining the target binary image, and/or obtaining the target image after the enhancement processing, the method further includes:
synthesizing the processed frames of target binary images into a corresponding target binary video; and/or
synthesizing the enhanced frames of target images into a corresponding enhanced target video.
In one embodiment, synthesizing images into respective videos includes:
if the frame rate of the fire video is greater than or equal to a preset video output frame rate, synthesizing the target binary image or the target image obtained after enhancement processing into a corresponding video based on the frame rate of the fire video;
and if the frame rate of the fire video is less than the preset video output frame rate, synthesizing the target binary image or the target image obtained after enhancement processing into a corresponding video based on the video output frame rate.
In one embodiment, the fire video is an infrared thermal imaging video, and the target image is an infrared thermal imaging image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the saliency analysis unit is used for carrying out saliency analysis on the gray level image of the target image to be processed of the fire video to obtain a saliency gray level image;
the threshold segmentation unit is used for carrying out threshold segmentation processing on the saliency gray level image based on the gray level value of the pixel in the saliency gray level image to obtain a saliency binary image;
and the image processing unit is used for obtaining a target binary image corresponding to the target image based on the saliency binary image.
In one embodiment, the saliency analysis unit is used for:
analyzing a gray histogram of the grayscale image of the target image, and determining the gray level corresponding to each pixel in the grayscale image and the gray level frequency corresponding to each gray level;
and processing the gray value of each pixel in the grayscale image of the target image based on the gray level of each pixel and the gray level frequency corresponding to each gray level, to obtain the saliency grayscale image.
In one embodiment, when processing the gray values, the saliency analysis unit is further used for:
determining, for each pixel, the Euclidean distance between the gray level of the pixel and each of the remaining gray levels;
and determining, as the gray value of the pixel, the sum of the products of the gray level frequency corresponding to each gray level and the Euclidean distance between that gray level and the gray level of the pixel, to obtain the saliency grayscale image.
In one embodiment, the image processing unit is configured to:
and performing AND operation on the significance binary image and a G component binary image corresponding to the target image to obtain the target binary image, wherein the G component binary image is obtained by extracting a G component channel from the target image.
In one embodiment, the apparatus further comprises:
and the enhancement processing unit is used for enhancing the target image based on an adaptive gray histogram equalization algorithm.
In one embodiment, the apparatus further comprises a video composition unit to:
synthesizing the processed frames of target binary images into a corresponding target binary video; and/or
synthesizing the enhanced frames of target images into a corresponding enhanced target video.
In one embodiment, a video compositing unit to:
if the frame rate of the fire video is greater than or equal to a preset video output frame rate, the video synthesis unit synthesizes the target binary image or the target image obtained after enhancement processing into a corresponding video based on the frame rate of the fire video;
and if the frame rate of the fire video is less than the preset video output frame rate, the video synthesis unit synthesizes the target binary image or the target image obtained after enhancement processing into a corresponding video based on the video output frame rate.
In one embodiment, the fire video is an infrared thermal imaging video, and the target image is an infrared thermal imaging image.
In a third aspect, another embodiment of the present application also provides a computing device comprising at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute any image processing method provided by the embodiment of the application.
In a fourth aspect, another embodiment of the present application further provides a computer storage medium, where the computer storage medium stores computer-executable instructions for causing a computer to execute any one of the image processing methods in the embodiments of the present application.
The image processing scheme provided by the embodiments of the application adopts a simple, low-complexity algorithm structure and effectively integrates multiple kinds of image feature information at a small computational cost, which improves image segmentation accuracy and thereby provides support for improving the efficiency of fire monitoring and/or fire analysis. Moreover, the image processing scheme processes fire videos automatically, without manual auxiliary operations, which can greatly reduce labor cost.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments of the present invention will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic illustration of an application environment according to one embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of an image processing method according to an embodiment of the present application;
FIGS. 3A, 3B and 3C are examples of processed images according to embodiments of the present application;
FIG. 4 is a schematic diagram of an image processing flow according to one embodiment of the present application;
FIGS. 5A and 5B are examples of processed images according to embodiments of the present application;
FIG. 6 is a schematic diagram of an image enhancement process flow according to an embodiment of the present application;
FIG. 7 is an example of a processed image according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an implementation principle of an image processing system according to one embodiment of the present application;
FIG. 9 is a schematic diagram of an image processing apparatus according to one embodiment of the present application;
FIG. 10 is a schematic diagram of a computing device according to one embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.
FIG. 1 is a schematic diagram of an application environment according to one embodiment of the present application.
As shown in fig. 1, the application environment may include at least one server 20 and a plurality of terminal devices 10. The terminal device 10 can transmit and receive information to and from the server 20 via the network 40. The server 20 can acquire contents required by the terminal device 10 by accessing the database 30. Terminal devices (e.g., 10_1 and 10_2 or 10_ N) may also communicate with each other via network 40. Network 40 may be a network for information transfer in a broad sense and may include one or more communication networks such as a wireless communication network, the internet, a private network, a local area network, a metropolitan area network, a wide area network, or a cellular data network, among others. In one embodiment, the network 40 may also include a satellite network, whereby the GPS signals of the terminal device 10 are transmitted to the server 20. It should be noted that the underlying concepts of the exemplary embodiments of the present invention are not altered if additional modules are added or removed from the illustrated environments. In addition, although a bidirectional arrow from the database 30 to the server 20 is shown in the figure for convenience of explanation, it will be understood by those skilled in the art that the above-described data transmission and reception may be realized through the network 40.
Terminal device 10 is any suitable electronic device that may be used for network access, including but not limited to a computer, a laptop, a smartphone, a tablet or another type of device. The server 20 is any server capable of providing, through the network, the information required for an interactive service. One or some of them (e.g., terminal device 10_1) will be selected for description below; it will be understood by those skilled in the art that the single terminal device is intended to represent the large number of terminals existing in a real network, and the single server 20 and database 30 shown are intended to indicate that the technical solution of the present invention may involve operations of servers and databases. The specific numbering of terminal devices, servers and databases is given merely for convenience of description and does not imply any limitation on the type or location of the terminal devices and servers.
In one embodiment, the terminal device can acquire the target image and can also output the target image or the corresponding video. The system for performing image processing may be configured on the terminal device side shown in fig. 1, or on the server side, or with some functions configured on the terminal device side and the rest on the server side, which is not limited in the present application.
Fig. 2 is a flowchart illustrating an image processing method according to an embodiment of the present application.
As shown in fig. 2, in step S210, a saliency analysis is performed on a grayscale image of a target image to be processed of a fire video, so as to obtain a saliency grayscale image.
Here, the fire video may be a video collected in any fire monitoring application scenario, such as a forest fire monitoring scenario, a gas station daily monitoring scenario, a straw burning scenario, and the like. Moreover, the fire video may be acquired in real time based on the camera device, or may be historical video data acquired from a related storage medium, which is not limited in the present application.
In one embodiment, the fire video may be an infrared thermal imaging video acquired based on infrared thermal imaging technology, and the target image to be processed may be an infrared thermal imaging image. Taking a forest fire monitoring scene as an example, the obtained target image may be an infrared thermal imaging image as shown in fig. 3A. Infrared thermal imaging converts the radiation energy detected from an object into a thermal image of the target object, i.e., the temperature distribution of the detected target, from which the state of the object can be judged. Based on infrared thermal imaging technology, if a fire breaks out in a monitored place (such as a forest, a gas station, a residential building and the like), fire characteristics can be analyzed through the collected infrared thermal imaging videos or images to determine the fire point, the fire spreading trend and the like, so that fire losses can be objectively and accurately evaluated and disaster relief organized. For the collected fire video, the target image to be processed may be obtained by extracting video frames from it, for example frame by frame. It should be understood that this way of obtaining the target image is illustrative rather than limiting; in other embodiments, target images may also be obtained in a frame-skipping manner (for example, every 24 frames) as required, which is not limited in this application.
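As an illustrative sketch only (the function and parameter names below are assumptions, not part of the patent), frame extraction from a fire video could look as follows in Python with OpenCV:

```python
import cv2

def extract_frames(video_path, step=1):
    """Yield target images from a fire video.

    step=1 reads frame by frame; a larger step (e.g. step=24)
    implements the frame-skipping variant mentioned above.
    """
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()  # frame is a BGR image (OpenCV convention)
        if not ok:
            break
        if index % step == 0:
            yield frame  # one "target image to be processed"
        index += 1
    cap.release()
```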
Thereafter, the extracted target image may be converted into a grayscale image. As an example, the target image may be converted into a corresponding grayscale image by, for example, the following grayscale conversion formula (1).
G_i = (R_i*299 + G_i*587 + B_i*114 + 500) / 1000    (1)
where G_i is the gray value of pixel i; R_i, G_i and B_i respectively denote the R, G and B channel values of pixel i in the infrared thermal imaging image; M denotes the total number of pixels of the target image; i and M are positive integers, and 1 ≤ i ≤ M. It should be understood that the above conversion formula is only an example of a specific grayscale conversion and is not limiting in any way; in other embodiments, a conventional grayscale conversion formula may be adopted, or the detailed parameters involved in the conversion formula may be adjusted according to business needs, which is not limited in this application.
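A minimal sketch of formula (1), assuming an RGB-ordered uint8 array (OpenCV frames are BGR and would first need cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)); the helper name is an assumption:

```python
import numpy as np

def to_gray(image_rgb):
    """Integer grayscale conversion per formula (1):
    G_i = (R_i*299 + G_i*587 + B_i*114 + 500) // 1000; the +500 term
    gives rounded rather than truncated integer division."""
    r = image_rgb[..., 0].astype(np.int32)
    g = image_rgb[..., 1].astype(np.int32)
    b = image_rgb[..., 2].astype(np.int32)
    return ((r * 299 + g * 587 + b * 114 + 500) // 1000).astype(np.uint8)
```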
It should be understood that, in the embodiment of the present application, acquiring a target image from a video and converting the target image into a corresponding grayscale image is merely an illustration and is not limited to any way of acquiring the grayscale image to be processed in the present application, and in other embodiments, for example, the grayscale image of the target image may also be directly acquired, and subsequent image processing is performed on the acquired grayscale image, which is not described herein again.
In order to ensure the accuracy of the subsequent image processing, after the grayscale image of the target image is obtained, before the grayscale image of the target image is subjected to saliency analysis, for example, morphological filtering processing may be performed on the grayscale image of the target image to remove image noise. For example, the image shown in fig. 3B is obtained by performing morphological filtering processing on the target image shown in fig. 3A. The morphological filtering process may be, for example, a morphological erosion operation or a morphological dilation operation performed on a gray-scale image of the target image. In other embodiments, other denoising or enhancement processes may be performed on the grayscale image of the target image, for example, a median filtering process, a mean filtering process, and the like, which is not limited in this application.
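As a hedged example of this filtering step (the patent only requires some morphological erosion and/or dilation; the choice of an opening with a 3×3 kernel is an assumption):

```python
import cv2
import numpy as np

def morph_filter(gray):
    """Morphological opening (erosion followed by dilation) with a
    3x3 structuring element, one possible instance of the
    morphological filtering used to suppress image noise."""
    kernel = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
```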
Then, saliency analysis is performed on the grayscale image corresponding to the target image to obtain a saliency grayscale image.
In implementation, the gray histogram of the grayscale image of the target image may be analyzed to determine the gray level corresponding to each pixel in the grayscale image and the gray level frequency corresponding to each gray level, and the gray value of each pixel may then be processed based on the gray level of each pixel and the gray level frequency corresponding to each gray level to obtain the saliency grayscale image.
Specifically, the gray histogram of the grayscale image may be analyzed and the gray level range of the image counted. For each pixel whose gray level is l_j, the Euclidean distance between l_j and every other gray level l_i may be determined, and the sum of the products of the gray level frequency f_i corresponding to each gray level l_i and the Euclidean distance d(l_i, l_j) may be determined as the gray value of the pixel. For example, as shown in the following equation (2):
S(l_j) = Σ_i f_i · d(l_i, l_j)    (2)
where S(l_j) denotes the saliency value corresponding to gray level l_j, which is recorded in this application as the gray value of the corresponding pixel in the saliency grayscale image; d(l_i, l_j) = |l_i − l_j| denotes the Euclidean distance between gray levels l_i and l_j; f_i = n_i / M denotes the gray level frequency corresponding to gray level l_i, where n_i is the number of pixels whose gray level is l_i and M is the total number of pixels; and 0 ≤ l_i ≤ 255, 0 ≤ l_j ≤ 255.
In this way the saliency computation is kept simple: only the gray level of each pixel and the gray level frequency of each gray level need to be counted, and the Euclidean distance between any two gray levels can be computed once and recorded, so that repeated calculation is avoided and image processing is accelerated.
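A compact sketch of equation (2) over an 8-bit grayscale image (the final normalization to [0, 255] for display is an assumption; the patent does not specify a scaling):

```python
import numpy as np

def saliency_gray(gray):
    """Histogram-based saliency per equation (2):
    S(l_j) = sum_i f_i * |l_i - l_j|, with f_i = n_i / M.

    S is computed once per gray level and then looked up per pixel,
    which avoids the repeated per-pixel calculation noted above.
    """
    hist = np.bincount(gray.ravel(), minlength=256)
    freq = hist / gray.size                           # f_i = n_i / M
    levels = np.arange(256, dtype=np.float64)         # l_0 .. l_255
    dist = np.abs(levels[:, None] - levels[None, :])  # d(l_i, l_j)
    s = dist @ freq                                   # S(l_j) per gray level
    s = (255.0 * s / max(s.max(), 1e-12)).astype(np.uint8)  # assumed scaling
    return s[gray]                                    # map levels back to pixels
```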
In step S220, a threshold segmentation process is performed on the saliency grayscale image based on the grayscale values of the pixels in the saliency grayscale image to obtain a saliency binary image (as shown in fig. 3C). The saliency binary image includes two parts, namely a monitored fire region and a background region.
As an example, the saliency grayscale image may be subjected to threshold segmentation processing by the maximum inter-class variance method (Otsu), for example.
Still taking the infrared thermal imaging image as an example, if a fire is monitored, the difference between the fire region serving as the target and the background region in the acquired target image is large. The maximum inter-class variance method (Otsu) is characterized in that the larger the difference between the target and background parts, the larger the inter-class variance. By adopting the Otsu method, all gray values in the saliency grayscale image are traversed, and the gray value that minimizes the intra-class variance and maximizes the inter-class variance is selected as the threshold for image segmentation, dividing the image into a target part (namely the fire region) and a background part. The calculation amount is therefore small, the probability of erroneous segmentation can be reduced, the accuracy of threshold segmentation is improved, and support is provided for improving the accuracy of subsequent fire monitoring and/or fire analysis.
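Where OpenCV is available, this Otsu segmentation step can be sketched as follows (the 0/255 output convention is an assumption):

```python
import cv2

def otsu_binarize(img):
    """Otsu thresholding: picks the threshold that maximizes the
    inter-class variance between target and background, then
    binarizes the 8-bit image to 0/255."""
    _, binary = cv2.threshold(img, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```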
In step S230, a target binary image corresponding to the target image is obtained based on the saliency binary image.
Here, the saliency binary image itself may be taken as the target binary image corresponding to the target image. The target binary image comprehensively considers the grayscale information and the saliency information of the target image, which can improve the accuracy of subsequent image segmentation. In addition, the morphological filtering performed on the image before threshold segmentation improves the signal-to-noise ratio of the video image and the robustness to noise, providing support for subsequently improving segmentation accuracy so as to better perform fire monitoring and/or fire analysis.
In one embodiment, in the image processing process, more characteristic information of the target image can be acquired to improve the accuracy of image segmentation. For example, component channel information may be extracted for the target image, and the extracted component channel information may be combined to obtain the target binary image.
Taking the infrared thermal imaging image as an example, and considering its imaging and analysis principles, a G component image of the target object can be obtained by extracting the G component from the target image, and a corresponding G component binary image can be obtained by performing morphological filtering processing and threshold segmentation processing on the G component image. An AND operation is then performed on the saliency binary image and the G component binary image corresponding to the target image to obtain the target binary image corresponding to the target image.
FIG. 4 is a schematic diagram of an image processing flow according to one embodiment of the present application.
As shown in fig. 4, in step S401, a target image to be processed may be acquired. For example, a fire video is acquired from an image pickup apparatus or a storage medium, and video frames are acquired from the fire video frame by frame as a target image to be processed.
As shown on the left side of fig. 4, a saliency binary image is acquired according to the processing flow shown in fig. 2. Specifically, in step S402, the target image is converted into a grayscale image. In step S403, morphological filtering processing is performed on the grayscale image of the target image. In step S404, saliency analysis is performed on the filtered grayscale image to obtain a saliency grayscale image. In step S405, the Otsu threshold segmentation algorithm is used to perform threshold segmentation processing on the saliency grayscale image to obtain a saliency binary image.
As shown on the right side of fig. 4, a G component binary image corresponding to the target image is acquired. Specifically, in step S406, the G component channel of the target image is extracted to obtain a G component image. In step S407, morphological filtering processing is performed on the acquired G component image to obtain a processed G component image. In step S408, the Otsu threshold segmentation algorithm is used to perform threshold segmentation processing on the filtered G component image to obtain a G component binary image, as shown in fig. 5A.
Then, in step S409, an AND operation is performed on the saliency binary image and the G component binary image corresponding to the target image to obtain the target binary image. As shown in fig. 5B, the target binary image is obtained by performing an AND operation on the saliency binary image shown in fig. 3C and the G component binary image shown in fig. 5A.
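Combining the sketches above, the whole flow of fig. 4 could be wired together as follows (this reuses the helper functions from the earlier sketches; all names are assumptions):

```python
import cv2

def target_binary(target_bgr):
    """Sketch of the fig. 4 pipeline: the saliency branch (S402-S405)
    ANDed with the G-component branch (S406-S408)."""
    rgb = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2RGB)
    gray = to_gray(rgb)                                         # S402, formula (1)
    sal_bin = otsu_binarize(saliency_gray(morph_filter(gray)))  # S403-S405
    g = target_bgr[..., 1]                                      # S406: G channel (BGR order)
    g_bin = otsu_binarize(morph_filter(g))                      # S407-S408
    return cv2.bitwise_and(sal_bin, g_bin)                      # S409: AND operation
```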
The obtained target binary image may include a target region (e.g., a fire region) and a background region, and based on the target binary image, the fire may be monitored, the ignition point may be analyzed, the degree of fire spreading may be analyzed, and the like by analyzing the fire characteristics.
The AND operation may be set in advance as required, and its specific implementation is not limited in the present application.
For example, the AND operation may be set so that, if the pixel values of the saliency binary image and the G component binary image at the same pixel position are both 1, the value of the corresponding pixel of the target binary image is determined as 1; and if the two values differ, or are both 0, the value of the corresponding pixel of the target binary image is determined as 0. The obtained target binary image is then an image with a black background and a white fire region, and fire monitoring and/or fire analysis can be performed through feature analysis of the white pixels.
Alternatively, the operation may be set so that, if the pixel values of the saliency binary image and the G component binary image at the same pixel position are both 0, the value of the corresponding pixel of the target binary image is determined as 0; and if the two values differ, or are both 1, the value of the corresponding pixel of the target binary image is determined as 1. The obtained target binary image is then an image with a white background and a black fire region, and fire monitoring and/or fire analysis can be performed through characteristic analysis of the white pixels.
Thus, according to the embodiments of the application, a simple, low-complexity algorithm structure that effectively integrates multiple kinds of image feature information at a small computational cost improves the accuracy of image segmentation, thereby providing support for improving the efficiency of fire monitoring and/or fire analysis. Moreover, the image processing scheme processes fire videos automatically, without manual auxiliary operations, which can greatly reduce labor cost.
In addition, in the embodiment of the application, for example, enhancement processing may be performed on the target image to improve local contrast of the image and enhance image edge information. The enhanced target image can be visually output so as to assist in fire monitoring and/or fire analysis.
FIG. 6 is a schematic diagram of an image enhancement processing flow according to an embodiment of the present application. For example, the target image may be enhanced based on an adaptive gray histogram equalization algorithm. Some of the processing steps described below are the same as or similar to the steps shown in fig. 4; for details, refer to the related description above in conjunction with fig. 4, which will not be repeated below.
Referring to fig. 6, in step S601, a target image to be processed may be acquired. In step S602, morphological filtering processing is performed on the target image. In step S603, the morphologically filtered image is enhanced based on an adaptive histogram equalization algorithm (AHE): the input image is divided into a number of equal rectangular local regions, histograms of these local regions are computed, and the luminance values of the image are redistributed accordingly, thereby changing the contrast of the image, improving its local contrast, enhancing its edge information and the like, so that the enhanced target image is obtained. For example, the image shown in fig. 7 is obtained by enhancing the target image shown in fig. 3A based on the AHE algorithm.
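A brief sketch of the enhancement step. OpenCV ships the contrast-limited variant of adaptive histogram equalization (CLAHE), used here as a close stand-in for the AHE step described above; the tile grid size and clip limit are assumptions:

```python
import cv2

def enhance(gray):
    """Local histogram equalization over an 8x8 grid of rectangular
    tiles, redistributing luminance to raise local contrast and
    sharpen edge information."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)
```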
Thus, the enhancement processing improves the contrast of the target image, which helps to better distinguish between the fire region and the background region in the image for better fire monitoring and/or fire analysis.
In addition, in the embodiment of the present application, the designed algorithm structure of the whole image processing system may use the fire video as input, and use the target binary video corresponding to the target binary image and the enhanced target video corresponding to the enhanced target image as output. The algorithm structure can automatically realize the processing operation of the fire video without manual auxiliary operation. Related personnel can analyze the fire point, the fire spreading trend and the like according to the output target binary video and/or the enhanced target video and the fire characteristics determined in the image processing process so as to objectively and accurately evaluate the fire loss, organize the relief and the like.
Fig. 8 is a schematic diagram of an implementation principle of an image processing system according to an embodiment of the present application.
As shown in fig. 8, the image processing system 800 may include, for example, an image acquisition module 810, a binary image processing module 820, an image enhancement processing module 830, and a video composition module 840. The connection lines in the drawing indicate that there is information interaction between the unit modules, and the connection lines may be wired connection, wireless connection, or any connection form capable of performing information transmission.
Similar to the description of the image processing method, the image acquisition module 810 can be used to obtain a target image to be processed; it may obtain a fire video from a camera or a storage medium and extract the target image from the fire video. The binary image processing module 820 may be used to obtain the target binary image. The image enhancement processing module 830 can perform enhancement processing on the target image to improve its contrast and signal-to-noise ratio. The video synthesis module 840 may synthesize the processed frames of target binary images into a corresponding target binary video, and may also synthesize the enhanced frames of target images into a corresponding enhanced target video. The obtained target binary video and enhanced target video can be stored and visually output for fire monitoring and/or fire analysis.
In one embodiment, the frame rate of the fire video and the frame rate of the visually output video may not match exactly, owing to differences in device specifications or performance. Therefore, to guarantee the fluency of the resulting video, when synthesizing the corresponding video the video synthesis module 840 may determine whether the frame rate of the acquired fire video is less than a preset video output frame rate, for example 25 frames/second or 30 frames/second.
If the frame rate of the fire video is greater than or equal to the preset video output frame rate, the target binary image or the target image obtained after enhancement processing can be synthesized into a corresponding video based on the frame rate of the fire video. If the frame rate of the fire video is less than the preset video output frame rate, the target binary image or the target image obtained after enhancement processing can be synthesized into a corresponding video based on the video output frame rate. Therefore, the playing fluency of the synthesized video is guaranteed.
In implementation, if the frame rate of the fire video is less than the preset video output frame rate and the video is nevertheless synthesized directly at the preset output frame rate, the fluency of the video is improved, but this compressive synthesis also shortens the playing time of the resulting video relative to the original fire video, which misleads information such as fire analysis time points and seriously affects the accuracy of fire analysis.
Therefore, when the frame rate of the fire video is lower than the preset video output frame rate, the missing frames in each second of video can be supplemented by frame interpolation when the corresponding video is actually synthesized. For example, if the frame rate of the fire video is 16 frames/second and the preset video output frame rate is 25 frames/second, 9 frames need to be supplemented in each second of video. Since 16/9 rounds to 2, one copied frame can be inserted after every other original frame, and if a frame still remains to be supplemented at the end, the last frame can be copied to fill it.
For example, representing each frame within one second of video by a letter: for the original fire video, the frame rate is 16 frames/second, and each second of video comprises the frames ABCDEFGHIJKLMNOP. If the corresponding video is synthesized simply by packing successive frames at the preset video output frame rate of 25 frames/second, each second of output consumes the frames ABCDEFGHIJKLMNOPQRSTUVWXY (drawing frames from the following second), and the duration of the finally synthesized video is shortened compared with the original fire video. If instead the corresponding video is synthesized at the preset output frame rate of 25 frames/second by supplementing copied frames, each second of video comprises the frames AABCCDEEFGGHIIJKKLMMNOOPP, which ensures as far as possible that the duration of the original video is not shortened and avoids misleading information such as fire analysis time points. Meanwhile, since the difference between two adjacent frames of the originally collected video is extremely small, supplementing the frame count by copying frames in this way does not introduce large errors into fire monitoring and/or fire analysis based on the frames.
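The duplication scheme in this example can be sketched as follows (an illustration of the scheme just described; the function and variable names are assumptions):

```python
def pad_frames(frames, out_fps):
    """Supplement one second's worth of frames up to out_fps by copying
    every other frame, then copying the last frame for any remainder."""
    need = out_fps - len(frames)   # e.g. 25 - 16 = 9 frames to supplement
    if need <= 0:
        return list(frames)
    out, added = [], 0
    for i, frame in enumerate(frames):
        out.append(frame)
        if added < need and i % 2 == 0:  # duplicate every other frame
            out.append(frame)
            added += 1
    while added < need:                  # fill remainder with the last frame
        out.append(frames[-1])
        added += 1
    return out

# Prints AABCCDEEFGGHIIJKKLMMNOOPP, matching the example above.
print("".join(pad_frames(list("ABCDEFGHIJKLMNOP"), 25)))
```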
Therefore, on the basis of guaranteeing playing fluency, the target binary video and the enhanced target video obtained after image processing can be kept as consistent as possible with the duration of the original fire video, misleading of information such as fire analysis time points is avoided, and the accuracy of fire monitoring and/or fire analysis is thus guaranteed.
So far, the image processing scheme of the present application has been described in detail with reference to fig. 1 to 8. By designing a simple, low-complexity algorithm structure and effectively integrating multiple kinds of image feature information at a small computational cost, the scheme improves the accuracy of image segmentation, thereby providing support for improving the efficiency of fire monitoring and/or fire analysis. Moreover, the scheme processes fire videos automatically, without manual auxiliary operations, which can greatly reduce labor cost.
Based on the same conception, the embodiment of the application also provides an image processing device.
Fig. 9 is a schematic diagram of an image processing apparatus according to an embodiment of the present application.
As shown in fig. 9, the image processing apparatus 900 may include:
the saliency analysis unit 910 is configured to perform saliency analysis on a grayscale image of a target image to be processed of a fire video to obtain a saliency grayscale image;
a threshold segmentation unit 920, configured to perform threshold segmentation processing on the saliency grayscale image based on a grayscale value of a pixel in the saliency grayscale image to obtain a saliency binary image;
and the image processing unit 930 is configured to obtain a target binary image corresponding to the target image based on the saliency binary image.
In one embodiment, the saliency analysis unit is used for:
analyzing a gray histogram of the grayscale image of the target image, and determining the gray level corresponding to each pixel in the grayscale image and the gray level frequency corresponding to each gray level;
and processing the gray value of each pixel in the grayscale image of the target image based on the gray level of each pixel and the gray level frequency corresponding to each gray level, to obtain the saliency grayscale image.
In one embodiment, when processing the gray values, the saliency analysis unit is further used for:
determining, for each pixel, the Euclidean distance between the gray level of the pixel and each of the remaining gray levels;
and determining, as the gray value of the pixel, the sum of the products of the gray level frequency corresponding to each gray level and the Euclidean distance between that gray level and the gray level of the pixel.
In one embodiment, the image processing unit is configured to:
performing an AND operation on the saliency binary image and a G component binary image corresponding to the target image to obtain the target binary image, wherein the G component binary image is obtained by extracting the G component channel from the target image.
In one embodiment, the apparatus further comprises:
and the enhancement processing unit is used for enhancing the target image based on an adaptive gray histogram equalization algorithm.
In one embodiment, the apparatus further comprises a video synthesis unit used for: after the target binary image and/or the enhanced target image is obtained, synthesizing the processed frames of target binary images into a corresponding target binary video; and/or synthesizing the enhanced frames of target images into a corresponding enhanced target video.
In one embodiment, if the frame rate of the fire video is greater than or equal to a preset video output frame rate, the video synthesis unit synthesizes the target binary image or the target image obtained after enhancement processing into a corresponding video based on the frame rate of the fire video;
and if the frame rate of the fire video is less than the preset video output frame rate, the video synthesis unit synthesizes the target binary image or the target image obtained after enhancement processing into a corresponding video based on the video output frame rate.
In one embodiment, the fire video is an infrared thermal imaging video, and the target image is an infrared thermal imaging image.
The image processing apparatus and the functional modules thereof may implement the image processing scheme, and details of relevant implementation may be referred to in the above description in conjunction with fig. 1 to 8, which are not described herein again.
Having described an image processing method and apparatus of an exemplary embodiment of the present application, a computing device according to another exemplary embodiment of the present application is next described.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method or program product. Accordingly, various aspects of the present application may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module" or "system."
In some possible implementations, a computing device according to the present application may include at least one processor, and at least one memory. Wherein the memory stores program code which, when executed by the processor, causes the processor to perform the steps in the image processing method according to various exemplary embodiments of the present application described above in the present specification. For example, the processor may perform the steps shown in fig. 2, 4, 6.
The computing device 130 according to this embodiment of the present application is described below with reference to fig. 10. The computing device 130 shown in fig. 10 is only an example and should not bring any limitations to the functionality or scope of use of the embodiments of the present application.
As shown in fig. 10, computing device 130 is embodied in the form of a general purpose computing device. Components of computing device 130 may include, but are not limited to: the at least one processor 131, the at least one memory 132, and a bus 133 that connects the various system components (including the memory 132 and the processor 131).
Bus 133 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The memory 132 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)1321 and/or cache memory 1322, and may further include Read Only Memory (ROM) 1323.
Memory 132 may also include a program/utility 1325 having a set (at least one) of program modules 1324, such program modules 1324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Computing device 130 may also communicate with one or more external devices 134 (e.g., keyboard, pointing device, etc.), with one or more devices that enable a user to interact with computing device 130, and/or with any devices (e.g., router, modem, etc.) that enable computing device 130 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 135. Also, computing device 130 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via network adapter 136. As shown, network adapter 136 communicates with other modules for computing device 130 over bus 133. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with computing device 130, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In some possible embodiments, aspects of an image processing method provided by the present application may also be implemented in the form of a program product including program code for causing a computer device to perform the steps of an image processing method according to various exemplary embodiments of the present application described above in this specification when the program product is run on a computer device, for example, the computer device may perform the steps as shown in fig. 2, fig. 4, and fig. 6.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for image processing of the embodiments of the present application may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a computing device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device over any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., over the internet using an internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functions of two or more units described above may be embodied in one unit, according to embodiments of the application. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications to those embodiments may occur to those skilled in the art once they learn of the basic inventive concept. Therefore, it is intended that the appended claims be interpreted as covering the preferred embodiments and all alterations and modifications that fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (11)

1. An image processing method, characterized in that the method comprises:
performing saliency analysis on a grayscale image of a to-be-processed target image of a fire video to obtain a saliency grayscale image;
performing threshold segmentation on the saliency grayscale image based on the gray values of the pixels in the saliency grayscale image to obtain a saliency binary image;
and obtaining a target binary image corresponding to the target image based on the saliency binary image.
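Read as an algorithm, claim 1 is a three-step pipeline: saliency analysis, threshold segmentation, and derivation of the target binary image. The sketch below is one illustrative realization, not the claimed method itself: it assumes OpenCV/NumPy, uses Otsu's rule for the threshold step (the claim does not fix a particular threshold rule), and calls a `saliency_map` helper whose histogram-contrast form is sketched under claim 3 below.

```python
import cv2
import numpy as np

def claim1_pipeline(frame_bgr: np.ndarray) -> np.ndarray:
    """One frame of a fire video -> saliency binary image (sketch of claim 1)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    sal = saliency_map(gray)  # saliency analysis; see the claim 3 sketch below
    # Threshold segmentation on the saliency grayscale image. The claim only says
    # "based on the gray values of the pixels"; Otsu's method is assumed here.
    _, sal_bin = cv2.threshold(sal, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return sal_bin  # the target binary image is derived from this (see claim 4)
```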
2. The method of claim 1, wherein performing saliency analysis on the grayscale image of the target image to obtain the saliency grayscale image comprises:
analyzing a gray level histogram of the grayscale image of the target image, and determining the gray level corresponding to each pixel in the grayscale image and the gray level frequency corresponding to each gray level;
and processing the gray value of each pixel in the grayscale image of the target image based on the gray level of each pixel and the gray level frequency corresponding to each gray level to obtain the saliency grayscale image.
3. The method of claim 2, wherein processing the gray value of each pixel in the grayscale image of the target image based on the gray level of each pixel and the gray level frequency corresponding to each gray level to obtain the saliency grayscale image comprises:
determining, for each pixel, the Euclidean distance between the gray level of the pixel and each of the remaining gray levels;
and determining, as the new gray value of the pixel, the sum of the products of each such Euclidean distance and the gray level frequency corresponding to the respective remaining gray level, so as to obtain the saliency grayscale image.
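In formula form, claims 2-3 read as a histogram-contrast saliency: a pixel at gray level l receives the value S(l) = Σj f(j)·|l − j| over the remaining gray levels j, where f(j) is the frequency of level j; for scalar gray levels the Euclidean distance reduces to an absolute difference. A minimal NumPy sketch under that reading (the final rescaling to 0-255 is an implementation convenience, not part of the claim):

```python
import numpy as np

def saliency_map(gray: np.ndarray) -> np.ndarray:
    """Histogram-contrast saliency for an 8-bit grayscale image (claims 2-3 sketch)."""
    hist = np.bincount(gray.ravel(), minlength=256)    # gray level frequencies f(j)
    levels = np.arange(256, dtype=np.float64)
    dist = np.abs(levels[:, None] - levels[None, :])   # |l - j| for every level pair
    sal_per_level = dist @ hist                        # S(l) = sum_j f(j) * |l - j|;
    # the pixel's own level contributes zero distance, so including it is harmless
    sal = sal_per_level[gray]                          # look up each pixel by its level
    sal = 255.0 * (sal - sal.min()) / max(sal.max() - sal.min(), 1e-9)
    return sal.astype(np.uint8)
```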
4. The method according to claim 1, wherein obtaining a target binary image corresponding to the target image based on the saliency binary image comprises:
performing an AND operation on the saliency binary image and a G-component binary image corresponding to the target image to obtain the target binary image, wherein the G-component binary image is obtained from the G-component channel extracted from the target image.
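One way to picture claim 4: a pixel survives into the target binary image only if it is both salient and bright in the green channel. A sketch assuming OpenCV; the claim does not specify how the G-component channel is binarized, so Otsu's rule is used here as an example.

```python
import cv2
import numpy as np

def target_binary(sal_bin: np.ndarray, frame_bgr: np.ndarray) -> np.ndarray:
    """AND the saliency binary image with a G-component binary image (claim 4 sketch)."""
    g = frame_bgr[:, :, 1]  # OpenCV orders channels B, G, R; index 1 is the G component
    _, g_bin = cv2.threshold(g, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.bitwise_and(sal_bin, g_bin)  # keep a pixel only where both masks agree
```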
5. The method of claim 1, further comprising:
performing enhancement processing on the target image based on an adaptive grayscale histogram equalization algorithm.
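CLAHE (contrast-limited adaptive histogram equalization) is a common realization of the adaptive grayscale histogram equalization named in claim 5; OpenCV's implementation is used below as one plausible reading, with the clip limit and tile grid chosen arbitrarily.

```python
import cv2
import numpy as np

def enhance(gray: np.ndarray) -> np.ndarray:
    """Adaptive histogram equalization of the target image (claim 5 sketch)."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)  # equalizes per tile while limiting local contrast gain
```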
6. The method according to claim 1 or 5, wherein after the target binary image and/or the enhanced target image is obtained, the method further comprises:
synthesizing the multiple frames of target binary images obtained after processing into a corresponding target binary video; and/or
synthesizing the multiple frames of target images obtained after enhancement processing into a corresponding enhanced target video.
7. The method of claim 6, wherein synthesizing the images into the corresponding videos comprises:
if the frame rate of the fire video is greater than or equal to a preset video output frame rate, synthesizing the target binary images or the enhanced target images into the corresponding video based on the frame rate of the fire video;
and if the frame rate of the fire video is less than the preset video output frame rate, synthesizing the target binary images or the enhanced target images into the corresponding video based on the preset video output frame rate.
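The frame-rate rule of claim 7 amounts to writing the output at max(source frame rate, preset output frame rate). A sketch with OpenCV's VideoWriter; the MJPG codec and output path are placeholders, not dictated by the claim.

```python
import cv2
import numpy as np

def write_video(frames, src_fps: float, preset_fps: float, path: str = "out.avi") -> None:
    """Synthesize processed frames into a video (claims 6-7 sketch)."""
    fps = src_fps if src_fps >= preset_fps else preset_fps  # i.e. max(src_fps, preset_fps)
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"MJPG"),
                             fps, (w, h), frames[0].ndim == 3)
    for frame in frames:
        writer.write(frame)
    writer.release()
```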
8. The method of claim 1, wherein the fire video is an infrared thermography video and the target image is an infrared thermography image.
9. An image processing apparatus, characterized in that the apparatus comprises:
the saliency analysis unit is configured to perform saliency analysis on a grayscale image of a to-be-processed target image of a fire video to obtain a saliency grayscale image;
the threshold segmentation unit is configured to perform threshold segmentation on the saliency grayscale image based on the gray values of the pixels in the saliency grayscale image to obtain a saliency binary image;
and the image processing unit is configured to obtain a target binary image corresponding to the target image based on the saliency binary image.
10. A computing device comprising at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image processing method according to any one of claims 1-8.
11. A computer storage medium storing computer-executable instructions for causing a computer to perform the image processing method according to any one of claims 1 to 8.
CN202010241994.1A 2020-03-31 2020-03-31 Image processing method, device, computing equipment and storage medium Active CN111369557B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010241994.1A CN111369557B (en) 2020-03-31 2020-03-31 Image processing method, device, computing equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111369557A true CN111369557A (en) 2020-07-03
CN111369557B CN111369557B (en) 2023-09-15

Family

ID=71210810

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010241994.1A Active CN111369557B (en) 2020-03-31 2020-03-31 Image processing method, device, computing equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111369557B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20030010530A (en) * 2001-07-26 2003-02-05 캐논 가부시끼가이샤 Image processing method, apparatus and system
WO2011127825A1 (en) * 2010-04-16 2011-10-20 杭州海康威视软件有限公司 Processing method and device of image contrast
EP2747028A1 (en) * 2012-12-18 2014-06-25 Universitat Pompeu Fabra Method for recovering a relative depth map from a single image or a sequence of still images
CN108090885A (en) * 2017-12-20 2018-05-29 百度在线网络技术(北京)有限公司 For handling the method and apparatus of image
CN108665443A (en) * 2018-04-11 2018-10-16 中国石油大学(北京) A kind of the infrared image sensitizing range extracting method and device of mechanical equipment fault
EP3598386A1 (en) * 2018-07-20 2020-01-22 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for processing image
CN109242877A (en) * 2018-09-21 2019-01-18 新疆大学 Image partition method and device
CN110532876A (en) * 2019-07-26 2019-12-03 纵目科技(上海)股份有限公司 Night mode camera lens pays detection method, system, terminal and the storage medium of object
CN110490848A (en) * 2019-08-02 2019-11-22 上海海事大学 Infrared target detection method, apparatus and computer storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111798448A (en) * 2020-07-31 2020-10-20 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for processing image
CN113067960A (en) * 2021-03-16 2021-07-02 合肥合芯微电子科技有限公司 Image interpolation method, device and storage medium
CN113067960B (en) * 2021-03-16 2022-08-12 合肥合芯微电子科技有限公司 Image interpolation method, device and storage medium

Also Published As

Publication number Publication date
CN111369557B (en) 2023-09-15

Similar Documents

Publication Publication Date Title
US10984556B2 (en) Method and apparatus for calibrating relative parameters of collector, device and storage medium
CN111654700B (en) Privacy mask processing method and device, electronic equipment and monitoring system
US20120057745A9 (en) Detection of objects using range information
CN110853033A (en) Video detection method and device based on inter-frame similarity
CN110941978B (en) Face clustering method and device for unidentified personnel and storage medium
US20210337073A1 (en) Print quality assessments via patch classification
CN111767828A (en) Certificate image copying and identifying method and device, electronic equipment and storage medium
CN111369557B (en) Image processing method, device, computing equipment and storage medium
CN112686165A (en) Method and device for identifying target object in video, electronic equipment and storage medium
CN114898416A (en) Face recognition method and device, electronic equipment and readable storage medium
CN113158773B (en) Training method and training device for living body detection model
CN114663871A (en) Image recognition method, training method, device, system and storage medium
CN112966687B (en) Image segmentation model training method and device and communication equipment
CN111327946A (en) Video quality evaluation and feature dictionary training method, device and medium
CN115861891B (en) Video target detection method, device, equipment and medium
CN111062922A (en) Method and system for judging copied image and electronic equipment
CN112052863B (en) Image detection method and device, computer storage medium and electronic equipment
CN112991419B (en) Parallax data generation method, parallax data generation device, computer equipment and storage medium
US10255674B2 (en) Surface reflectance reduction in images using non-specular portion replacement
CN113313642A (en) Image denoising method and device, storage medium and electronic equipment
CN108447107B (en) Method and apparatus for generating video
CN113252678A (en) Appearance quality inspection method and equipment for mobile terminal
CN111899239A (en) Image processing method and device
CN113117341B (en) Picture processing method and device, computer readable storage medium and electronic equipment
CN111598053B (en) Image data processing method and device, medium and system thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230815

Address after: Room 201, Building A, Integrated Circuit Design Industrial Park, No. 858, Jianshe 2nd Road, Economic and Technological Development Zone, Xiaoshan District, Hangzhou City, Zhejiang Province, 311215

Applicant after: Zhejiang Huagan Technology Co.,Ltd.

Address before: No. 1187 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province, 310053

Applicant before: ZHEJIANG DAHUA TECHNOLOGY Co.,Ltd.

GR01 Patent grant