CN117994165A - Intelligent campus management method and system based on big data - Google Patents
Intelligent campus management method and system based on big data
- Publication number
- CN117994165A CN117994165A CN202410390162.4A CN202410390162A CN117994165A CN 117994165 A CN117994165 A CN 117994165A CN 202410390162 A CN202410390162 A CN 202410390162A CN 117994165 A CN117994165 A CN 117994165A
- Authority
- CN
- China
- Prior art keywords
- pixel point
- haze
- image
- initial window
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000007726 management method Methods 0.000 title claims abstract description 35
- 230000000694 effects Effects 0.000 claims abstract description 33
- 238000000034 method Methods 0.000 claims description 26
- 230000006399 behavior Effects 0.000 claims description 14
- 238000004590 computer program Methods 0.000 claims description 6
- 230000006870 function Effects 0.000 claims description 6
- 238000010606 normalization Methods 0.000 claims description 6
- 238000005457 optimization Methods 0.000 claims description 5
- 238000004458 analytical method Methods 0.000 abstract description 10
- 238000012545 processing Methods 0.000 abstract description 4
- 230000008859 change Effects 0.000 description 3
- 230000008569 process Effects 0.000 description 3
- 230000002708 enhancing effect Effects 0.000 description 2
- 238000013507 mapping Methods 0.000 description 2
- 238000007781 pre-processing Methods 0.000 description 2
- 238000012935 Averaging Methods 0.000 description 1
- 238000011166 aliquoting Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000009792 diffusion process Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 238000009472 formulation Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 238000003672 processing method Methods 0.000 description 1
- 230000000750 progressive effect Effects 0.000 description 1
- 230000011273 social behavior Effects 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Image Analysis (AREA)
Abstract
The invention relates to the technical field of image defogging, and in particular to a smart campus management method and system based on big data. The likelihood that each suspected haze area pixel point is a non-haze interference pixel point is obtained according to its position feature, channel value and the gray difference of local pixel points in a preset direction; an initial window is constructed and slid across the gray image, and a contrast characteristic value in the initial window corresponding to each pixel point is obtained according to the gray distribution features in that window; the final degree of haze interference in the initial window corresponding to each pixel point is then obtained; the initial window is adjusted to obtain an optimized window; each frame image is defogged to obtain defogged video frame images; and campus student behavior activities are managed. By accurately estimating the haze concentration, the invention obtains a suitable window, improves the defogging effect, and improves the accuracy of personnel behavior analysis.
Description
Technical Field
The invention relates to the technical field of image defogging, in particular to a smart campus management method and system based on big data.
Background
In a campus management system, video image data such as student activity behaviors is collected and the collected video content is analyzed by a computer system, so that the activity patterns and social behaviors of students on campus can be understood, compliance with campus regulations can be monitored, and teaching and campus management can be improved. However, when video of student activity behaviors is captured, haze in the weather may blur the captured content, making it difficult for the computer analysis system to analyze the video, and the analysis results may be biased. Therefore, video images affected by haze need to be enhanced.
In the prior art, when an image is defogged with the dark channel prior defogging algorithm, a suitable window is generally selected to calculate statistical information of the area surrounding each pixel point in the image, and the choice of window size is related to the haze concentration in the image; however, the traditional algorithm estimates the haze concentration only from the dark channel image and cannot obtain a suitable window, so the defogging effect on the video frame images is poor, which is not conducive to accurate analysis and management of student activity behaviors by the campus management system.
Disclosure of Invention
In order to solve the technical problem that an insufficiently accurate haze concentration estimate prevents a suitable window from being obtained and thus leads to a poor defogging effect on video frame images, the invention aims to provide a smart campus management method and system based on big data, and the adopted technical scheme is as follows:
the invention provides a smart campus management method based on big data, which comprises the following steps:
Acquiring a dark channel image and a gray level image of each frame of image in a foggy campus student activity video;
Obtaining suspected haze area pixel points according to the channel value characteristics of each pixel point in the dark channel image; obtaining the possibility that each suspected haze area pixel point is a non-haze interference pixel point according to the position characteristics, the channel value and the local pixel point gray level difference in the preset direction of each suspected haze area pixel point; constructing an initial window on the gray image, traversing and sliding, and obtaining a contrast characteristic value in the initial window corresponding to each pixel point according to gray distribution characteristics in the initial window corresponding to each pixel point;
Obtaining the final degree of haze interference in the initial window corresponding to each pixel point according to the contrast characteristic value in the initial window corresponding to each pixel point and the possibility that each pixel point is a non-haze interference pixel point; adjusting the initial window according to the final degree of the haze interference in the initial window corresponding to each pixel point to obtain an optimized window;
Carrying out defogging treatment on each frame of image according to the optimized window to obtain defogging video frame images;
And managing the campus student behavior activities according to the defogging video frame images.
Further, the method for acquiring the suspected haze area pixel points comprises the following steps:
acquiring the channel median of the channel value range of the dark channel image; and taking each pixel point whose channel value is greater than the channel median as a suspected haze area pixel point.
Further, the method for obtaining the gray level difference of the local pixel point comprises the following steps:
Obtaining image blocks which equally divide the dark channel image;
Acquiring the highest channel value of the pixel point in each other image block in the preset direction of the image block where the pixel point of each suspected haze area is located, and taking the highest channel value as a first channel value;
Calculating the difference between the channel value of each suspected haze area pixel point and each first channel value to be used as a first difference value; and solving an average value of all the first difference values to obtain the gray level difference of the local pixel points in the preset direction of the pixel points of each suspected haze area.
Further, the method for acquiring the possibility that each suspected haze region pixel point is a non-haze interference pixel point comprises the following steps:
Calculating the vertical distance between each suspected haze area pixel point and the lower boundary of the dark channel image to obtain a position characteristic;
obtaining the possibility that each suspected haze area pixel point is a non-haze interference pixel point according to the position characteristics, the channel value and the local pixel point gray level difference of each suspected haze area pixel point;
the position feature and the likelihood are in a negative correlation; and the channel value, the local pixel gray level difference and the possibility are in positive correlation.
Further, the method for obtaining the contrast characteristic value comprises the following steps:
counting gray level histograms of pixel points in an initial window corresponding to each pixel point on a gray level image to obtain the number of the pixel points corresponding to each gray level;
obtaining a contrast characteristic value according to an obtaining formula of the contrast characteristic value, wherein the obtaining formula of the contrast characteristic value is as follows:
$$D_i = N_i\cdot\left(g_i^{\max}-g_i^{\min}\right)\cdot\frac{1}{Z_i-1}\sum_{k=1}^{Z_i-1}\left|n_{i,k}-n_{i,k+1}\right|$$
wherein $D_i$ represents the contrast characteristic value in the initial window corresponding to the $i$-th pixel point; $N_i$ represents the maximum number of pixel points corresponding to a single gray level in that window; $n_{i,k}$ and $n_{i,k+1}$ represent the numbers of pixel points corresponding to the $k$-th and $(k+1)$-th gray levels in that window; $g_i^{\max}$ and $g_i^{\min}$ represent the maximum and minimum gray values in that window; and $Z_i$ represents the number of gray levels in that window.
Further, the method for obtaining the final degree of the haze interference comprises the following steps:
Obtaining the final degree of the haze interference according to an obtaining formula of the final degree of the haze interference, wherein the obtaining formula of the final degree of the haze interference is as follows:
$$Q_i = \mathrm{Norm}\left(\frac{e^{-D_i}}{\sum_{j=1}^{n}P_{i,j} + \varepsilon}\right)$$
wherein $Q_i$ represents the final degree of haze interference in the initial window corresponding to the $i$-th pixel point; $D_i$ represents the contrast characteristic value in that window; $P_{i,j}$ represents the likelihood that the $j$-th pixel point in that window is a non-haze interference pixel point; $n$ represents the number of pixel points in the initial window; $e$ represents the natural constant; $\mathrm{Norm}$ represents a linear normalization function; and $\varepsilon$ is an adjusting parameter.
Further, the method for acquiring the optimization window comprises the following steps:
Acquiring the initial size of the initial window; and calculating the product of the final degree of haze interference and the initial size, adding the product to the initial size, and rounding down to obtain the optimized size, thereby obtaining the optimized window.
Further, the initial window is a square window with equal length and width.
Further, the preset direction is a horizontal direction.
The invention also provides a smart campus management system based on big data, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes the steps of any one of the smart campus management methods based on big data when executing the computer program.
The invention has the following beneficial effects:
The method considers that the scattering effect of fog makes dark pixels brighter, so suspected haze area pixel points are obtained from the channel value characteristics of each pixel point in the dark channel image, giving a preliminary analysis of the pixel points affected by haze. The likelihood that each suspected haze area pixel point is a non-haze interference pixel point is obtained according to its position feature, channel value and the local pixel gray difference in the preset direction, further screening out pixel points whose brightness is not caused by haze and improving the defogging accuracy. Considering that haze analysis based only on the dark channel image is not accurate enough, an initial window is constructed on the gray image and slid across it, and the contrast characteristic value in the initial window corresponding to each pixel point is obtained according to the gray distribution features in that window, reflecting whether the pixel point is disturbed by haze. The final degree of haze interference in the initial window corresponding to each pixel point is then obtained, estimating the haze level comprehensively and accurately. The initial window is adjusted according to this final degree to obtain an optimized window, so the characteristic information of each pixel point can be extracted better and the accuracy and precision of the haze analysis are improved. Each frame image is defogged to obtain clearer, fog-free video frame images with more complete content details, and campus student behavior activities are managed accordingly. By accurately estimating the haze concentration, the invention obtains a suitable initial window, improves the defogging effect, and improves the accuracy of personnel behavior analysis.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a smart campus management method based on big data according to an embodiment of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following detailed description is given below of a smart campus management method and system based on big data according to the invention, and the specific implementation, structure, characteristics and effects thereof are as follows. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The invention provides a smart campus management method and a smart campus management system based on big data.
Referring to fig. 1, a flowchart of a smart campus management method based on big data according to an embodiment of the present invention is shown, and the specific method includes:
Step S1: and acquiring a dark channel image and a gray level image of each frame of image in the foggy campus student activity video.
In the embodiment of the invention, in order to monitor the student activity behaviors in the hazy weather in real time, cameras are arranged in a campus environment to collect videos of the student activity behaviors, and the videos are split into a plurality of frames of images to be processed.
It should be noted that, in one embodiment of the present invention, the video is captured at 30 frames per second with a duration of 1 minute; in other embodiments of the present invention, the frame rate and duration of video capture may be set according to the specific situation, which is not limited or described in detail here. In the embodiments of the present invention, each frame image is processed in the same way, so only one frame image is used as an example below.
In one embodiment of the invention, in order to facilitate the subsequent image processing, a preprocessing operation is performed on each captured frame image to obtain its dark channel image and gray image and to enhance image quality, and the processed images are then analyzed. It should be noted that image preprocessing is a technical means well known to those skilled in the art and can be set according to the specific implementation scenario. In one embodiment of the present invention, a graying algorithm is used to obtain the gray image of each frame image, which highlights the outline and details of the image, makes the image clearer, and makes operations such as feature extraction and image recognition easier to perform; a dark channel prior algorithm is used to obtain the dark channel image of each frame image, that is, for each pixel point of the original image the lowest value among its color channels is selected as the value of the pixel point at the corresponding position, forming a gray-scale image, which reveals hidden information in the image and improves its clarity and readability. The specific graying algorithm and dark channel prior algorithm are well known to those skilled in the art and are not described in detail here.
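As an illustration of this preprocessing, the sketch below computes the gray image and the dark channel image of one frame with OpenCV; the function name and the 15-pixel patch are illustrative, and applying a local minimum filter (erosion) after the per-channel minimum follows the standard dark channel prior construction rather than an explicit statement of the embodiment.

```python
import cv2
import numpy as np

def gray_and_dark_channel(frame_bgr: np.ndarray, patch: int = 15):
    """Return the gray image and the dark channel image of one video frame."""
    # Gray image via the standard luminance conversion.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Per-pixel minimum over the B, G and R channels.
    min_channel = frame_bgr.min(axis=2).astype(np.uint8)
    # A local minimum filter (erosion) over a patch yields the dark channel.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(min_channel, kernel)
    return gray, dark
```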
Step S2: obtaining suspected haze area pixel points according to the channel value characteristics of each pixel point in the dark channel image; obtaining the possibility that each suspected haze area pixel point is a non-haze interference pixel point according to the position characteristics, the channel value and the local pixel point gray level difference in the preset direction of each suspected haze area pixel point; and constructing an initial window on the gray image, traversing and sliding, and obtaining a contrast characteristic value in the initial window corresponding to each pixel point according to the gray distribution characteristic in the initial window corresponding to each pixel point.
In the dark channel image, the brighter a pixel point is, the more likely it belongs to an area with a higher haze concentration; the suspected haze area pixel points are obtained according to the channel value characteristics of each pixel point in the dark channel image.
Preferably, in one embodiment of the present invention, the method for acquiring the suspected haze area pixel points includes:
Acquiring the channel median of the channel value range of the dark channel image; and taking each pixel point whose channel value is greater than the channel median as a suspected haze area pixel point.
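A minimal sketch of this selection, assuming that the "channel median of the channel value range" is the midpoint between the smallest and largest channel values actually present in the dark channel image (the function name is illustrative):

```python
import numpy as np

def suspected_haze_mask(dark: np.ndarray) -> np.ndarray:
    """True where a pixel's channel value exceeds the midpoint of the value range."""
    channel_median = (float(dark.min()) + float(dark.max())) / 2.0
    return dark > channel_median
```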
In a campus environment, reflective pixels on lights, sports equipment, buildings and other objects also appear as brighter pixels with higher channel values in the dark channel image. To avoid this interference, consider that haze generally produces a blurred appearance in the image: objects at a smaller distance are less affected by haze and their image content appears clearer, while positions at a greater distance in the image are more strongly affected. In addition, the distribution and concentration of haze are influenced by geographic position, haze has certain diffusion characteristics in certain directions, similar regional ranges may exist, and the smaller the variation of the channel value, the more likely the region is affected by haze. The likelihood that each suspected haze area pixel point is a non-haze interference pixel point is therefore obtained according to its position feature, channel value feature and the local pixel gray difference in the preset direction.
Preferably, in one embodiment of the present invention, the method for acquiring the gray level difference of the local pixel includes:
In order to ensure the accuracy and effect of analysis, image blocks which equally divide the dark channel image are acquired; acquiring the highest channel value of the pixel point in each other image block in the preset direction of the image block where the pixel point of each suspected haze area is located, and taking the highest channel value as a first channel value; calculating the difference between the channel value of each suspected haze area pixel point and each first channel value to be used as a first difference value; and (3) averaging all the first difference values to obtain the gray level difference of the local pixel points in the preset direction of the pixel points of each suspected haze area. In one embodiment of the present invention, the formula of the local pixel gray level difference is expressed as:
$$C_i = \frac{1}{M}\sum_{j=1}^{M}\left|I_i - G_j\right|$$

Wherein $C_i$ represents the local pixel gray difference of the $i$-th suspected haze area pixel point; $I_i$ represents the channel value of the $i$-th suspected haze area pixel point; $G_j$ represents the highest channel value among the pixel points of the $j$-th other image block in the preset direction of the image block where the $i$-th suspected haze area pixel point is located; and $M$ represents the number of other image blocks in the preset direction.

In the formula of the local pixel gray difference, the larger the absolute difference between the channel value of the $i$-th suspected haze area pixel point and the highest channel value of each image block in the preset direction, the larger the local pixel gray difference, indicating a greater degree to which the pixel point is not affected by haze and a greater likelihood that it belongs to a non-haze area.
In one embodiment of the present invention, the image is a normal front view, so the effect of haze on the image generally increases with height, while positions at the same horizontal height in space are affected by haze to a similar degree; the preset direction is therefore the horizontal direction.
It should be noted that, in one embodiment of the present invention, the dark channel image is equally divided into image blocks with an empirical value of 10 divisions; in other embodiments of the present invention, the division may be set according to the specific situation, which is not limited or described in detail here.
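The local pixel gray difference described above might be computed as in the following sketch, which assumes a t x t grid of equal blocks and reads "other image blocks in the preset (horizontal) direction" as the remaining blocks in the same block row; `dark` is the dark channel image and `mask` the suspected haze mask from the previous steps:

```python
import numpy as np

def local_gray_difference(dark: np.ndarray, mask: np.ndarray, t: int = 10) -> np.ndarray:
    """C_i: mean absolute difference between each suspected haze pixel and the
    maximum channel value of every other block lying in its horizontal direction."""
    h, w = dark.shape
    ys_edges = np.linspace(0, h, t + 1, dtype=int)
    xs_edges = np.linspace(0, w, t + 1, dtype=int)
    # Maximum channel value of every block in the t x t partition.
    block_max = np.array([[dark[ys_edges[r]:ys_edges[r + 1],
                                xs_edges[c]:xs_edges[c + 1]].max()
                           for c in range(t)] for r in range(t)], dtype=np.float64)
    diff = np.zeros_like(dark, dtype=np.float64)
    for y, x in zip(*np.nonzero(mask)):
        r = min(np.searchsorted(ys_edges, y, side="right") - 1, t - 1)
        c = min(np.searchsorted(xs_edges, x, side="right") - 1, t - 1)
        others = np.delete(block_max[r], c)        # other blocks in the same block row
        diff[y, x] = np.abs(float(dark[y, x]) - others).mean()
    return diff
```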
Preferably, in an embodiment of the present invention, the method for acquiring the possibility that each suspected haze region pixel point is a non-haze interference pixel point includes:
Calculating the vertical distance between each suspected haze area pixel point and the lower boundary of the dark channel image to obtain a position characteristic; obtaining the possibility that each suspected haze area pixel point is a non-haze interference pixel point according to the position characteristics, the channel value and the local pixel point gray level difference of each suspected haze area pixel point; the position features and the possibility are in a negative correlation; the channel value and the gray level difference of the local pixel point are in positive correlation with the possibility. In one embodiment of the invention, the formula for the likelihood is expressed as:
$$P_i = \mathrm{Norm}\left(\frac{e^{I_i}\cdot C_i}{d_i + \varepsilon}\right)$$

Wherein $P_i$ represents the likelihood that the $i$-th suspected haze area pixel point is a non-haze interference pixel point; $d_i$ represents the vertical distance between the $i$-th suspected haze area pixel point and the lower boundary of the image; $C_i$ represents the local pixel gray difference of the $i$-th suspected haze area pixel point; $I_i$ represents its channel value; $e$ represents the natural constant; $\mathrm{Norm}$ represents a linear normalization function; and $\varepsilon$ is an adjusting parameter with an empirical value of 0.1.

In the formula of the likelihood of a non-haze interference pixel point, the exponential function with the natural constant as base performs a positive correlation mapping on the channel value $I_i$: the larger $I_i$, the larger the channel value of the suspected haze area pixel point. When $d_i$ is large, the suspected haze area pixel point is far from the lower boundary of the image and more likely to be affected by haze, so a pixel point with a large channel value is more likely to be a haze interference pixel point and its likelihood of being a non-haze interference pixel point is lower. When $d_i$ is small, the suspected haze area pixel point is close to the lower boundary of the image, is less likely to be affected by haze, and is more likely to be a highlighted non-haze pixel point caused by reflection or the like, so its likelihood of being a non-haze interference pixel point is greater. The larger the local pixel gray difference $C_i$, the less similar the features in the horizontal direction, the less likely the brightness is caused by haze, and the greater the likelihood of a non-haze interference pixel point.
It should be noted that, in other embodiments of the present invention, the positive-negative correlation and normalization method may be constructed by other basic mathematical operations, and specific means are technical means well known to those skilled in the art, and will not be described herein.
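A sketch of the likelihood computation under the reconstruction above; the exact way the three quantities are combined is an assumption chosen only to respect the stated correlations, Norm is taken as min-max normalization, and the channel value is rescaled to [0, 1] before the exponential purely to keep the computation numerically stable:

```python
import numpy as np

def non_haze_likelihood(dark: np.ndarray, mask: np.ndarray,
                        gray_diff: np.ndarray, eps: float = 0.1) -> np.ndarray:
    """P_i = Norm(exp(I_i) * C_i / (d_i + eps)) over the suspected haze pixels."""
    out = np.zeros_like(dark, dtype=np.float64)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return out
    h = dark.shape[0]
    d = (h - 1) - ys                                  # vertical distance to the lower boundary
    i_val = dark[ys, xs].astype(np.float64) / 255.0   # rescaled channel value (implementation choice)
    raw = np.exp(i_val) * gray_diff[ys, xs] / (d + eps)
    lo, hi = raw.min(), raw.max()                     # linear (min-max) normalization
    out[ys, xs] = (raw - lo) / (hi - lo) if hi > lo else 0.0
    return out
```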
Because the degree to which a pixel is affected by haze cannot be accurately estimated from the dark channel image alone, the gray image of the analyzed image needs to be examined further. In order to comprehensively analyze each pixel point in the image, an initial window is constructed on the gray image and slid across it; the gray distribution features describe how the color and brightness of the pixel points change within the initial window, and analyzing the light and dark differences reflects the contrast within the window. The contrast characteristic value in the initial window corresponding to each pixel point is obtained according to the gray distribution features in that window.
Preferably, in one embodiment of the present invention, the method for acquiring the contrast characteristic value includes:
Counting gray level histograms of pixel points in an initial window corresponding to each pixel point on a gray level image to obtain the number of the pixel points corresponding to each gray level; the horizontal axis of the histogram represents the gray level of the pixel points from left to right, from small to large, and the vertical axis of the histogram represents the number of the pixel points corresponding to the corresponding gray level, so that the distribution proportion of the pixel points with different gray levels can be intuitively obtained.
Obtaining a contrast characteristic value according to an obtaining formula of the contrast characteristic value, wherein the obtaining formula of the contrast characteristic value is as follows:
$$D_i = N_i\cdot\left(g_i^{\max}-g_i^{\min}\right)\cdot\frac{1}{Z_i-1}\sum_{k=1}^{Z_i-1}\left|n_{i,k}-n_{i,k+1}\right|$$

Wherein $D_i$ represents the contrast characteristic value in the initial window corresponding to the $i$-th pixel point; $N_i$ represents the maximum number of pixel points corresponding to a single gray level in that window; $n_{i,k}$ and $n_{i,k+1}$ represent the numbers of pixel points corresponding to the $k$-th and $(k+1)$-th gray levels in that window; $g_i^{\max}$ and $g_i^{\min}$ represent the maximum and minimum gray values in that window; and $Z_i$ represents the number of gray levels in that window.

In the formula of the contrast characteristic value, the larger $N_i$, i.e. the more pixel points correspond to the most frequent gray level in the initial window, the higher the peak of the gray histogram of that window, which means the contrast within the window may be higher. $\left|n_{i,k}-n_{i,k+1}\right|$ is the absolute difference between the numbers of pixel points of adjacent gray levels in the window; the larger this value, the more pronounced the contrast, and conversely, the smaller the value, the smoother the variation between adjacent gray levels and the lower the contrast. $g_i^{\max}-g_i^{\min}$ is the gray range of the window; the larger this difference, the larger the gray range and the larger the contrast within the window. The larger the peak count, the differences between adjacent gray levels and the gray range, the larger the contrast characteristic value in the corresponding initial window.
It should be noted that, in one embodiment of the present invention, the initial window is a square window of size $k\times k$ that traverses the gray image by sliding from the upper left corner from left to right, where $k$ takes an empirical value of 7; in other embodiments of the present invention, the size of the initial window may be set according to the specific situation, which is not limited or described in detail here.
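A compact sketch of the contrast characteristic value for a single window, following the reconstruction above; adjacent gray levels are taken as adjacent entries of the sorted levels actually present in the window, which is an assumption:

```python
import numpy as np

def contrast_feature(window: np.ndarray) -> float:
    """D: histogram peak * gray range * mean |count difference| of adjacent gray levels."""
    levels, counts = np.unique(window, return_counts=True)
    if levels.size < 2:
        return 0.0                                   # flat window carries no contrast
    peak = float(counts.max())
    gray_range = float(levels.max()) - float(levels.min())
    adj_diff = float(np.abs(np.diff(counts.astype(np.float64))).mean())
    return peak * gray_range * adj_diff
```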
Step S3: obtaining the final degree of haze interference in the initial window corresponding to each pixel point according to the contrast characteristic value in the initial window corresponding to each pixel point and the possibility that each pixel point is a non-haze interference pixel point; and adjusting the initial window according to the final degree of the haze interference in the initial window corresponding to each pixel point to obtain an optimized window.
In order to comprehensively and accurately estimate the haze degree, comprehensive analysis is carried out by combining a plurality of characteristics of the dark channel image and the gray level image, and the haze influence degree of each region in the dark channel image is further analyzed and corrected. The contrast characteristic value can better describe the detail and structure information of the region where the pixel points are located, reflect the difference or change between the pixel points and is helpful for distinguishing the haze region from the non-haze region; the larger the contrast, the larger the difference between the bright and dark areas in the image, the clearer the details and textures of the image, and the less the possibility that the area is affected by haze; the degree of influence of haze can be known by the possibility of the non-haze interference pixel points, and the degree of influence of haze is smaller as the possibility of the non-haze interference pixel points is higher; and obtaining the final degree of the haze interference in the initial window corresponding to each pixel point according to the contrast characteristic value in the initial window corresponding to each pixel point and the possibility that each pixel point is a non-haze interference pixel point.
Preferably, in one embodiment of the present invention, the method for obtaining the final degree of interference caused by haze includes:
Obtaining the final degree of the haze interference according to an obtaining formula of the final degree of the haze interference, wherein the obtaining formula of the final degree of the haze interference is as follows:
$$Q_i = \mathrm{Norm}\left(\frac{e^{-D_i}}{\sum_{j=1}^{n}P_{i,j} + \varepsilon}\right)$$

Wherein $Q_i$ represents the final degree of haze interference in the initial window corresponding to the $i$-th pixel point; $D_i$ represents the contrast characteristic value in that window; $P_{i,j}$ represents the likelihood that the $j$-th pixel point in that window is a non-haze interference pixel point; $n$ represents the number of pixel points in the initial window, with an empirical value of 49; $e$ represents the natural constant; $\mathrm{Norm}$ represents a linear normalization function; and $\varepsilon$ is an adjusting parameter with an empirical value of 0.1.

In the formula for obtaining the final degree of haze interference, the exponential function with the natural constant as base performs a negative correlation mapping on $D_i$: the larger the contrast characteristic value in the initial window corresponding to the pixel point, the larger the gray differences among its pixel points and the smaller the final degree of haze interference; conversely, the smaller the contrast characteristic value, the more similar the gray distribution and the greater the final degree of haze interference. $\sum_{j=1}^{n}P_{i,j}$ accumulates the likelihood that each pixel point in the initial window is a non-haze interference pixel point; the larger this sum, the more likely the window contains non-haze interference pixel points and the smaller the final degree of haze interference, while $\varepsilon$ prevents the denominator from being zero.
It should be noted that, in other embodiments of the present invention, the positive-negative correlation and normalization method may be constructed by other basic mathematical operations, and specific means are technical means well known to those skilled in the art, and will not be described herein.
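The per-window degree of haze interference could then be evaluated as follows; the combination again follows the reconstruction above (and is therefore an assumption), `contrast_feature` is the sketch given earlier, and in practice the contrast value would typically be normalized before the exponential to avoid underflow:

```python
import numpy as np

def haze_interference_degree(gray: np.ndarray, likelihood: np.ndarray,
                             k: int = 7, eps: float = 0.1) -> np.ndarray:
    """Q_i = Norm(exp(-D_i) / (sum of in-window likelihoods + eps))."""
    h, w = gray.shape
    raw = np.full((h, w), np.nan)
    r = k // 2
    for y in range(r, h - r):                        # windows fully inside the image
        for x in range(r, w - r):
            win_g = gray[y - r:y + r + 1, x - r:x + r + 1]
            win_p = likelihood[y - r:y + r + 1, x - r:x + r + 1]  # zero outside suspected haze pixels
            raw[y, x] = np.exp(-contrast_feature(win_g)) / (win_p.sum() + eps)
    valid = ~np.isnan(raw)
    lo, hi = raw[valid].min(), raw[valid].max()      # linear (min-max) normalization
    q = np.zeros_like(raw)
    if hi > lo:
        q[valid] = (raw[valid] - lo) / (hi - lo)
    return q
```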
By obtaining the final degree of haze interference, the size of the window can be reselected. If the window size is set smaller, local detail information can be captured better, but global information may be ignored, resulting in a poor defogging effect; if the window size is set larger, global information is better taken into account, but more noise and computational complexity may be introduced. Therefore, the initial window is adjusted according to the final degree of haze interference in the initial window corresponding to each pixel point to obtain the optimized window.
Preferably, in one embodiment of the present invention, the method for obtaining the optimization window includes:
Acquire the initial size of the initial window; calculate the product of the final degree of haze interference and the initial size, add the product to the initial size, and round down to obtain the optimized size, thereby obtaining the optimized window. In one embodiment of the invention, the formula for the optimized size is expressed as:
$$S_i = \left\lfloor S + Q_i\cdot S\right\rfloor$$

Wherein $S_i$ represents the optimized size corresponding to the $i$-th pixel point; $Q_i$ represents the final degree of haze interference in the initial window corresponding to the $i$-th pixel point; $S$ represents the initial size of the initial window; and $\lfloor\cdot\rfloor$ represents rounding down.

In the formula of the optimized size, the larger the initial size and the larger the final degree of haze interference, the larger the required optimized size, so a larger optimized window is obtained and more information is taken into account.
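The window adjustment itself is a direct transcription of the formula above (the function name is illustrative):

```python
import numpy as np

def optimized_size(q: np.ndarray, initial_size: int = 7) -> np.ndarray:
    """S_i = floor(initial_size + Q_i * initial_size) for every pixel."""
    return np.floor(initial_size * (1.0 + q)).astype(int)
```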
Step S4: and carrying out defogging treatment on each frame of image according to the optimized window to obtain defogging video frame images.
A suitable optimized window is selected to defog and enhance the image, strengthening the detail information in the image, fully displaying image details that were originally hidden or blurred, and increasing the amount of information in the image. Each frame image is defogged according to the optimized window to obtain the defogged video frame images.
It should be noted that, in one embodiment of the present invention, after the optimized window corresponding to each frame image is obtained, the statistical information of the area surrounding each pixel point in the image can be calculated with it; with the improved window, the subsequent steps of the dark channel prior defogging algorithm are applied to defog and enhance each frame of the video, obtaining the defogged video frame images. The specific defogging algorithm is a technical means well known to those skilled in the art and is not described in detail here.
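For reference, the following is a minimal sketch of the classic dark channel prior defogging recipe with a fixed window size; the patent instead varies the window per pixel using the optimized size, and the atmospheric-light fraction, omega and t0 values are common defaults assumed here rather than values taken from the embodiment:

```python
import cv2
import numpy as np

def dehaze_dark_channel(frame_bgr: np.ndarray, win: int = 7,
                        omega: float = 0.95, t0: float = 0.1) -> np.ndarray:
    """Dark-channel-prior dehazing with a fixed local window of size `win`."""
    img = frame_bgr.astype(np.float64) / 255.0
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (win, win))
    dark = cv2.erode(img.min(axis=2), kernel)
    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels.
    flat = dark.ravel()
    idx = flat.argsort()[-max(1, flat.size // 1000):]
    a = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission estimate and scene radiance recovery.
    t = 1.0 - omega * cv2.erode((img / a).min(axis=2), kernel)
    t = np.clip(t, t0, 1.0)[..., None]
    j = (img - a) / t + a
    return (np.clip(j, 0.0, 1.0) * 255).astype(np.uint8)
```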
Step S5: and managing the campus student behavior activities according to the defogging video frame images.
The defogged video frames have noticeably improved contrast and color vividness; the images are clearer and more natural, the visual effect is enhanced, and the detail integrity of the content in the video images is improved, so student behavior activities can be presented more clearly, helping administrators better monitor campus safety and facilitating intelligent management. Campus student behavior activities are managed according to the defogged video frame images.
In summary, according to the position characteristics, the channel value and the local pixel gray level difference in the preset direction of each suspected haze area pixel point, the possibility that each suspected haze area pixel point is a non-haze interference pixel point is obtained; constructing an initial window to traverse and slide in the gray level image, and obtaining a contrast characteristic value in the initial window corresponding to each pixel point according to gray level distribution characteristics in the initial window corresponding to each pixel point; further obtaining the final degree of the haze interference in the initial window corresponding to each pixel point; adjusting the initial window to obtain an optimized window; defogging processing is carried out on each frame of image, and defogging video frame images are obtained; and managing campus student behavior activities. According to the invention, by accurately estimating the haze concentration, a proper initial window is obtained, the defogging effect is improved, and the accuracy of personnel behavior analysis is improved.
The invention also provides a smart campus management system based on big data, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes any one step of the smart campus management method based on the big data when executing the computer program.
An embodiment of a defogging method for intelligent campus management video comprises the following steps:
In the prior art, when an image is defogged with the dark channel prior defogging algorithm, a suitable window is generally selected to calculate statistical information of the area surrounding each pixel point in the image, and the choice of window size is related to the haze concentration in the image; however, the conventional algorithm estimates the haze concentration only from the dark channel image and cannot obtain a suitable window, which causes the technical problem of a poor defogging effect on the video frame images. In order to solve this technical problem, this embodiment provides a smart campus management video defogging method, including:
Step S1: and acquiring a dark channel image and a gray level image of each frame of image in the foggy campus student activity video.
Step S2: obtaining suspected haze area pixel points according to the channel value characteristics of each pixel point in the dark channel image; obtaining the possibility that each suspected haze area pixel point is a non-haze interference pixel point according to the position characteristics, the channel value and the local pixel point gray level difference in the preset direction of each suspected haze area pixel point; and constructing an initial window on the gray image, traversing and sliding, and obtaining a contrast characteristic value in the initial window corresponding to each pixel point according to the gray distribution characteristic in the initial window corresponding to each pixel point.
Step S3: obtaining the final degree of haze interference in the initial window corresponding to each pixel point according to the contrast characteristic value in the initial window corresponding to each pixel point and the possibility that each pixel point is a non-haze interference pixel point; and adjusting the initial window according to the final degree of the haze interference in the initial window corresponding to each pixel point to obtain an optimized window.
Step S4: and carrying out defogging treatment on each frame of image according to the optimized window to obtain defogging video frame images.
Because the specific implementation process of steps S1-S4 is already described in detail in the foregoing smart campus management method based on big data, no further description is given.
The technical effects of this embodiment are:
According to the method, the possibility that each suspected haze area pixel point is a non-haze interference pixel point is obtained according to the position characteristics, the channel value and the local pixel point gray level difference in the preset direction of each suspected haze area pixel point; constructing an initial window to traverse and slide in the gray level image, and obtaining a contrast characteristic value in the initial window corresponding to each pixel point according to gray level distribution characteristics in the initial window corresponding to each pixel point; further obtaining the final degree of the haze interference in the initial window corresponding to each pixel point; adjusting the initial window to obtain an optimized window; and carrying out defogging treatment on each frame of image to obtain defogging video frame images. According to the invention, the haze concentration is accurately estimated, so that a proper window is obtained, and the defogging effect is improved.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.
Claims (10)
1. An intelligent campus management method based on big data, which is characterized by comprising the following steps:
Acquiring a dark channel image and a gray level image of each frame of image in a foggy campus student activity video;
Obtaining suspected haze area pixel points according to the channel value characteristics of each pixel point in the dark channel image; obtaining the possibility that each suspected haze area pixel point is a non-haze interference pixel point according to the position characteristics, the channel value and the local pixel point gray level difference in the preset direction of each suspected haze area pixel point; constructing an initial window on the gray image, traversing and sliding, and obtaining a contrast characteristic value in the initial window corresponding to each pixel point according to gray distribution characteristics in the initial window corresponding to each pixel point;
Obtaining the final degree of haze interference in the initial window corresponding to each pixel point according to the contrast characteristic value in the initial window corresponding to each pixel point and the possibility that each pixel point is a non-haze interference pixel point; adjusting the initial window according to the final degree of the haze interference in the initial window corresponding to each pixel point to obtain an optimized window;
Carrying out defogging treatment on each frame of image according to the optimized window to obtain defogging video frame images;
And managing the campus student behavior activities according to the defogging video frame images.
2. The smart campus management method based on big data according to claim 1, wherein the method for acquiring the suspected haze area pixel points comprises the following steps:
acquiring the channel median of the channel value range of the dark channel image; and taking each pixel point whose channel value is greater than the channel median as a suspected haze area pixel point.
3. The intelligent campus management method based on big data according to claim 1, wherein the method for acquiring the local pixel gray level difference comprises the following steps:
Obtaining image blocks which equally divide the dark channel image;
Acquiring the highest channel value of the pixel point in each other image block in the preset direction of the image block where the pixel point of each suspected haze area is located, and taking the highest channel value as a first channel value;
Calculating the difference between the channel value of each suspected haze area pixel point and each first channel value to be used as a first difference value; and solving an average value of all the first difference values to obtain the gray level difference of the local pixel points in the preset direction of the pixel points of each suspected haze area.
4. The smart campus management method based on big data according to claim 1, wherein the method for acquiring the possibility that each suspected haze area pixel point is a non-haze interference pixel point comprises the following steps:
Calculating the vertical distance between each suspected haze area pixel point and the lower boundary of the dark channel image to obtain a position characteristic;
obtaining the possibility that each suspected haze area pixel point is a non-haze interference pixel point according to the position characteristics, the channel value and the local pixel point gray level difference of each suspected haze area pixel point;
the position feature and the likelihood are in a negative correlation; and the channel value, the local pixel gray level difference and the possibility are in positive correlation.
5. The smart campus management method based on big data according to claim 1, wherein the method for obtaining the contrast characteristic value comprises:
counting gray level histograms of pixel points in an initial window corresponding to each pixel point on a gray level image to obtain the number of the pixel points corresponding to each gray level;
obtaining a contrast characteristic value according to an obtaining formula of the contrast characteristic value, wherein the obtaining formula of the contrast characteristic value is as follows:
$$D_i = N_i\cdot\left(g_i^{\max}-g_i^{\min}\right)\cdot\frac{1}{Z_i-1}\sum_{k=1}^{Z_i-1}\left|n_{i,k}-n_{i,k+1}\right|$$
wherein $D_i$ represents the contrast characteristic value in the initial window corresponding to the $i$-th pixel point; $N_i$ represents the maximum number of pixel points corresponding to a single gray level in that window; $n_{i,k}$ and $n_{i,k+1}$ represent the numbers of pixel points corresponding to the $k$-th and $(k+1)$-th gray levels in that window; $g_i^{\max}$ and $g_i^{\min}$ represent the maximum and minimum gray values in that window; and $Z_i$ represents the number of gray levels in that window.
6. The intelligent campus management method based on big data according to claim 1, wherein the method for obtaining the final degree of the haze interference comprises the following steps:
Obtaining the final degree of the haze interference according to an obtaining formula of the final degree of the haze interference, wherein the obtaining formula of the final degree of the haze interference is as follows:
$$Q_i = \mathrm{Norm}\left(\frac{e^{-D_i}}{\sum_{j=1}^{n}P_{i,j} + \varepsilon}\right)$$
wherein $Q_i$ represents the final degree of haze interference in the initial window corresponding to the $i$-th pixel point; $D_i$ represents the contrast characteristic value in that window; $P_{i,j}$ represents the likelihood that the $j$-th pixel point in that window is a non-haze interference pixel point; $n$ represents the number of pixel points in the initial window; $e$ represents the natural constant; $\mathrm{Norm}$ represents a linear normalization function; and $\varepsilon$ is an adjusting parameter.
7. The smart campus management method based on big data according to claim 1, wherein the method for obtaining the optimization window comprises:
Acquiring the initial size of the initial window; and calculating the product of the final degree of haze interference and the initial size, adding the product to the initial size, and rounding down to obtain the optimized size, thereby obtaining the optimized window.
8. The intelligent campus management method based on big data according to claim 1, wherein the initial window is a square window with equal length and width.
9. The intelligent campus management method based on big data according to claim 1, wherein the preset direction is a horizontal direction.
10. A big data based smart campus management system comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of a big data based smart campus management method as claimed in any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410390162.4A CN117994165B (en) | 2024-04-02 | 2024-04-02 | Intelligent campus management method and system based on big data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410390162.4A CN117994165B (en) | 2024-04-02 | 2024-04-02 | Intelligent campus management method and system based on big data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117994165A true CN117994165A (en) | 2024-05-07 |
CN117994165B CN117994165B (en) | 2024-06-14 |
Family
ID=90901416
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410390162.4A Active CN117994165B (en) | 2024-04-02 | 2024-04-02 | Intelligent campus management method and system based on big data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117994165B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118628574A (en) * | 2024-08-14 | 2024-09-10 | 广州五所环境仪器有限公司 | Evaporator position correction method and system for environmental test chamber |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102999883A (en) * | 2011-09-08 | 2013-03-27 | 富士通株式会社 | Image haze-removing method and system |
CN103020921A (en) * | 2013-01-10 | 2013-04-03 | 厦门大学 | Single image defogging method based on local statistical information |
CN103198459A (en) * | 2013-04-10 | 2013-07-10 | 成都国腾电子技术股份有限公司 | Haze image rapid haze removal method |
CN104899833A (en) * | 2014-03-07 | 2015-09-09 | 安凯(广州)微电子技术有限公司 | Image defogging method and device |
US9349170B1 (en) * | 2014-09-04 | 2016-05-24 | The United States Of America As Represented By The Secretary Of The Navy | Single image contrast enhancement method using the adaptive wiener filter |
CN105931220A (en) * | 2016-04-13 | 2016-09-07 | 南京邮电大学 | Dark channel experience and minimal image entropy based traffic smog visibility detection method |
CN107103591A (en) * | 2017-03-27 | 2017-08-29 | 湖南大学 | A kind of single image to the fog method based on image haze concentration sealing |
KR20200028598A (en) * | 2018-09-07 | 2020-03-17 | 고려대학교 산학협력단 | Metohd and device of single image dehazing based on dark channel, recording medium for performing the method |
CN111353959A (en) * | 2020-03-02 | 2020-06-30 | 莫登奎 | Efficient method suitable for removing haze of large-scale optical remote sensing image |
CN115496693A (en) * | 2022-11-17 | 2022-12-20 | 南通鼎勇机械有限公司 | Sintering flame image smog removing method based on dark channel algorithm |
CN117094914A (en) * | 2023-10-18 | 2023-11-21 | 广东申创光电科技有限公司 | Smart city road monitoring system based on computer vision |
US20240037901A1 (en) * | 2022-07-29 | 2024-02-01 | Ali Corporation | Image dehazing method and video dehazing method |
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102999883A (en) * | 2011-09-08 | 2013-03-27 | 富士通株式会社 | Image haze-removing method and system |
CN103020921A (en) * | 2013-01-10 | 2013-04-03 | 厦门大学 | Single image defogging method based on local statistical information |
CN103198459A (en) * | 2013-04-10 | 2013-07-10 | 成都国腾电子技术股份有限公司 | Haze image rapid haze removal method |
CN104899833A (en) * | 2014-03-07 | 2015-09-09 | 安凯(广州)微电子技术有限公司 | Image defogging method and device |
US9349170B1 (en) * | 2014-09-04 | 2016-05-24 | The United States Of America As Represented By The Secretary Of The Navy | Single image contrast enhancement method using the adaptive wiener filter |
CN105931220A (en) * | 2016-04-13 | 2016-09-07 | 南京邮电大学 | Dark channel experience and minimal image entropy based traffic smog visibility detection method |
CN107103591A (en) * | 2017-03-27 | 2017-08-29 | 湖南大学 | A kind of single image to the fog method based on image haze concentration sealing |
KR20200028598A (en) * | 2018-09-07 | 2020-03-17 | 고려대학교 산학협력단 | Metohd and device of single image dehazing based on dark channel, recording medium for performing the method |
CN111353959A (en) * | 2020-03-02 | 2020-06-30 | 莫登奎 | Efficient method suitable for removing haze of large-scale optical remote sensing image |
US20240037901A1 (en) * | 2022-07-29 | 2024-02-01 | Ali Corporation | Image dehazing method and video dehazing method |
CN115496693A (en) * | 2022-11-17 | 2022-12-20 | 南通鼎勇机械有限公司 | Sintering flame image smog removing method based on dark channel algorithm |
CN117094914A (en) * | 2023-10-18 | 2023-11-21 | 广东申创光电科技有限公司 | Smart city road monitoring system based on computer vision |
Non-Patent Citations (2)
Title |
---|
Sebastian Salazar-Colores et al.: "Fast single image defogging with robust sky detection", IEEE Access, 31 December 2020 (2020-12-31) *
Yang Aiping et al.: "Single Image Dehazing Using Medium Channel Compensation" (利用中通道补偿的单幅图像去雾), Journal of Northeastern University (Natural Science), 28 February 2021 (2021-02-28) *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118628574A (en) * | 2024-08-14 | 2024-09-10 | 广州五所环境仪器有限公司 | Evaporator position correction method and system for environmental test chamber |
Also Published As
Publication number | Publication date |
---|---|
CN117994165B (en) | 2024-06-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110490914B (en) | Image fusion method based on brightness self-adaption and significance detection | |
CN111539273B (en) | Traffic video background modeling method and system | |
CN117994165B (en) | Intelligent campus management method and system based on big data | |
US10963993B2 (en) | Image noise intensity estimation method, image noise intensity estimation device, and image recognition device | |
EP2981934B1 (en) | Logo presence detector based on blending characteristics | |
CN107784651B (en) | Fuzzy image quality evaluation method based on fuzzy detection weighting | |
Pan et al. | No-reference assessment on haze for remote-sensing images | |
CN106327488B (en) | Self-adaptive foreground detection method and detection device thereof | |
CN117876971B (en) | Building construction safety monitoring and early warning method based on machine vision | |
CN108898132A (en) | A kind of terahertz image dangerous material recognition methods based on Shape context description | |
CN115797607B (en) | Image optimization processing method for enhancing VR real effect | |
CN116385316B (en) | Multi-target image dynamic capturing method and related device | |
CN118037722B (en) | Copper pipe production defect detection method and system | |
CN111340749A (en) | Image quality detection method, device, equipment and storage medium | |
CN115083008A (en) | Moving object detection method, device, equipment and storage medium | |
Chen et al. | Edge preservation ratio for image sharpness assessment | |
CN102509414B (en) | Smog detection method based on computer vision | |
CN116958880A (en) | Video flame foreground segmentation preprocessing method, device, equipment and storage medium | |
Fang et al. | Image quality assessment on image haze removal | |
CN115511814A (en) | Image quality evaluation method based on region-of-interest multi-texture feature fusion | |
CN113744326B (en) | Fire detection method based on seed region growth rule in YCRCB color space | |
Li et al. | Laplace dark channel attenuation-based single image defogging in ocean scenes | |
CN115984973B (en) | Human body abnormal behavior monitoring method for peeping-preventing screen | |
Wang et al. | Adaptive Bright and Dark Channel Combined with Defogging Algorithm Based on Depth of Field | |
CN116703755A (en) | Omission risk monitoring system for medical waste refrigeration house |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |