CN118470071A - Intelligent vision monitoring and tracking method and system for general network - Google Patents
- Publication number
- CN118470071A (application CN202410918075.1A)
- Authority
- CN
- China
- Prior art keywords
- image, gray, detected, pixel, change
- Legal status (an assumption, not a legal conclusion; Google has not performed a legal analysis)
- Granted
Classifications
- Y02P: Climate change mitigation technologies in the production or processing of goods
- Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30: Computing systems specially adapted for manufacturing
Abstract
The invention relates to the technical field of image enhancement, and in particular to an intelligent visual monitoring and tracking method and system for a general network. The method comprises the following steps: acquiring night monitoring grayscale images and determining a quality evaluation parameter for each frame; determining a pixel change degree according to the quality evaluation parameters and the grayscale differences between a contrast image and an image to be detected; determining a change characteristic degree according to the pixel change degrees of the image to be detected and the contrast image, the local grayscale distribution around a target point, and the grayscale distribution at the same position in the analysis images; dividing the image to be detected into target areas according to the change characteristic degrees; determining a movement analysis coefficient for each target area; enhancing the image to be detected according to the movement analysis coefficients of all target areas to obtain an enhanced image; and monitoring and tracking the moving target according to the enhanced images of all frames. The invention effectively improves the recognition clarity of moving objects and enhances the monitoring and tracking effect.
Description
Technical Field
The invention relates to the technical field of image enhancement, and in particular to an intelligent visual monitoring and tracking method and system for a general network.
Background
Intelligent monitoring systems have become important tools in fields such as security, resource management, and production monitoring. By connecting multiple monitoring devices together, a networked monitoring system can be constructed that achieves information sharing and centralized management and enables automatic identification, tracking, and analysis of targets. In night scenes, however, the overall image is dark, so the images acquired by the monitoring devices are blurred and the monitoring and tracking of moving objects is poor.
In the related art, images are enhanced and moving objects are then monitored and tracked by applying an optical flow method to the enhanced images. Because this enhancement operates on the whole image, it tends to ignore the predominance of dark regions in night scenes, producing an over-enhancement phenomenon on the moving object; the moving object therefore still appears blurred, and the monitoring and tracking effect remains poor.
Disclosure of Invention
To solve the technical problem that whole-image enhancement in the related art ignores the predominance of dark regions in night scenes, over-enhances the moving object, leaves its appearance blurred, and degrades monitoring and tracking, the invention provides an intelligent visual monitoring and tracking method and system for a general network, adopting the following technical scheme.
The invention provides an intelligent visual monitoring and tracking method for a general network, the method comprising the following steps:
acquiring night monitoring grayscale images of consecutive frames, and determining a quality evaluation parameter for each frame according to the grayscale distribution of all pixel points in that frame;
taking the former of two consecutive night monitoring grayscale frames as a contrast image and the latter as an image to be detected; determining the pixel change degree of the image to be detected according to the quality evaluation parameters of the contrast image and the image to be detected and the grayscale differences of pixel points at the same image positions;
taking all night monitoring grayscale frames within a preset time-sequence range centered on the moment of the image to be detected as analysis images, and taking any one pixel point in the image to be detected as a target point; determining the change characteristic degree of the target point according to the pixel change degrees of the image to be detected and the contrast image, the grayscale distribution within the target point's preset neighborhood range, and the grayscale distribution of the pixel points at the same image position as the target point in the analysis images;
dividing the image to be detected into at least two target areas according to the numerical distribution of the change characteristic degrees of all its pixel points; determining a movement analysis coefficient for each target area according to the change characteristic degrees of all pixel points within it;
performing image enhancement on the image to be detected according to the movement analysis coefficients of all target areas to obtain an enhanced image; and monitoring and tracking the moving target according to the enhanced images of all frames.
Further, acquiring the quality evaluation parameter of a night monitoring grayscale image comprises:
determining the minimum gray value among the pixel points of the frame;
averaging the differences between the gray value of each pixel point of the frame and the minimum gray value to obtain the quality evaluation parameter of that frame.
Further, acquiring the pixel change degree of the image to be detected comprises:
determining the absolute difference between the quality evaluation parameters of the image to be detected and the contrast image as an evaluation difference index;
performing frame-difference processing on the contrast image and the image to be detected to obtain a difference image, and summing the gray values of all pixel points in the difference image to obtain a gray change index;
determining the pixel change degree from the evaluation difference index and the gray change index, both of which are positively correlated with the pixel change degree.
Further, acquiring the change characteristic degree of the target point comprises:
determining the gray-value difference between the target point and each pixel point within its preset neighborhood range in the image to be detected, and averaging to obtain a neighborhood gray difference index;
taking the pixel points at the same image position as the target point in all analysis images as analysis points, computing the gray-value difference between the target point and each analysis point, and averaging to obtain a time-sequence gray difference index;
determining the absolute difference between the pixel change degrees of the image to be detected and the contrast image as the pixel change difference index of the image to be detected;
determining the change characteristic degree of the target point by combining the neighborhood gray difference index, the time-sequence gray difference index, and the pixel change difference index, where the pixel change difference index is negatively correlated with the change characteristic degree, and the time-sequence gray difference index and the neighborhood gray difference index are positively correlated with it.
Further, dividing the image to be detected into at least two target areas comprises:
clustering all pixel points of the image to be detected by their change characteristic degrees using the DBSCAN density clustering algorithm to obtain clusters, and taking each cluster as a target area.
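As an illustration of the clustering step, the following sketch runs a minimal DBSCAN over per-pixel change characteristic degrees. The function name, the one-dimensional feature space, and the parameter values are illustrative assumptions; the patent specifies only that DBSCAN clusters the change characteristic degrees.

```python
def dbscan_1d(values, eps, min_pts):
    """Minimal DBSCAN over scalar change characteristic degrees.
    Returns one cluster label per value (-1 marks noise); each
    resulting cluster corresponds to one target area."""
    n = len(values)
    labels = [-1] * n
    visited = [False] * n
    neighbors = lambda i: [j for j in range(n) if abs(values[j] - values[i]) <= eps]
    cluster_id = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            continue                      # not a core point; stays noise for now
        labels[i] = cluster_id
        k = 0
        while k < len(seeds):             # expand the cluster from its core points
            j = seeds[k]
            k += 1
            if not visited[j]:
                visited[j] = True
                nb = neighbors(j)
                if len(nb) >= min_pts:
                    seeds.extend(nb)
            if labels[j] == -1:
                labels[j] = cluster_id
        cluster_id += 1
    return labels
```

In practice the clustering would run over every pixel of the image to be detected, and the per-area movement analysis coefficient described below is the mean change characteristic degree of each cluster's members.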
Further, acquiring the movement analysis coefficient comprises:
computing the mean of the change characteristic degrees of all pixel points within the same target area, and taking that mean as the movement analysis coefficient of the corresponding target area.
Further, performing image enhancement on the image to be detected according to the movement analysis coefficients of all target areas to obtain an enhanced image comprises:
screening the target areas according to their movement analysis coefficients to obtain moving areas;
and, using the movement analysis coefficient as the image enhancement weight, enhancing each moving area to obtain an enhanced area, and traversing all moving areas to obtain the enhanced image.
Further, acquiring the moving areas comprises:
taking each target area whose movement analysis coefficient is greater than a preset coefficient threshold as a moving area.
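The screening and weighted enhancement steps might be sketched as follows. The patent specifies only that the movement analysis coefficient serves as the enhancement weight, so the linear gain used here (and the normalization by the maximum coefficient) is an assumed enhancement function, and all names are illustrative.

```python
import numpy as np

def enhance_moving_areas(gray, area_labels, coefficients, threshold):
    """gray: uint8 image to be detected; area_labels: per-pixel target-area id;
    coefficients: {area id: movement analysis coefficient}. Areas whose
    coefficient exceeds the preset threshold are screened as moving areas
    and brightened, with the coefficient acting as the enhancement weight."""
    out = gray.astype(np.float64)
    c_max = max(coefficients.values()) or 1.0    # guard against division by zero
    for area_id, coeff in coefficients.items():
        if coeff > threshold:                    # screening step: moving area
            mask = area_labels == area_id
            out[mask] *= 1.0 + coeff / c_max     # coefficient as enhancement weight
    return np.clip(out, 0, 255).astype(np.uint8)
```

Areas at or below the threshold (background) are left untouched, which is what lets the moving object stand out without over-enhancing the whole frame.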
Further, monitoring and tracking the moving target according to the enhanced images of all frames comprises:
performing optical flow analysis on the enhanced images of all frames based on the optical flow method, and determining the action track of the moving target from the analysis result.
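In practice the optical flow step would use an established implementation (for example OpenCV's calcOpticalFlowFarneback). As a dependency-free stand-in, the sketch below estimates a single integer displacement between two enhanced frames by brute-force matching; it is a simplification of optical flow, not the patent's method, and all names are illustrative.

```python
import numpy as np

def estimate_shift(prev, cur, max_shift=3):
    """Brute-force the integer displacement (dy, dx) that minimizes the
    mean absolute difference between cur and prev over their overlap,
    i.e. cur[y, x] ~= prev[y - dy, x - dx]. Chaining such estimates over
    consecutive enhanced frames yields a coarse action track."""
    p = prev.astype(np.int64)
    c = cur.astype(np.int64)
    h, w = p.shape
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cy = slice(max(dy, 0), h + min(dy, 0))     # rows of cur in the overlap
            cx = slice(max(dx, 0), w + min(dx, 0))
            py = slice(max(-dy, 0), h + min(-dy, 0))   # matching rows of prev
            px = slice(max(-dx, 0), w + min(-dx, 0))
            err = np.abs(c[cy, cx] - p[py, px]).mean()
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best
```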
In a second aspect, the invention also provides an intelligent visual monitoring and tracking system for a general network, the system comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above intelligent visual monitoring and tracking method when executing the computer program.
The invention has the following beneficial effects:
The invention acquires night monitoring grayscale images of consecutive frames and determines a quality evaluation parameter for each frame; the quality evaluation parameter accurately characterizes the frame's grayscale distribution, enabling reliable distribution analysis. A contrast image and an image to be detected are determined, and the pixel change degree of the image to be detected is computed from their quality evaluation parameters and the grayscale differences of pixel points at the same image positions; the pixel change degree characterizes the instantaneous pixel change of the image to be detected. Analysis images and a target point are then determined, and the change characteristic degree of the target point is computed. Because the change characteristic degree combines the changes across the analysis images with the pixel change degree of the image to be detected, the image can be analyzed in both the temporal and spatial dimensions, and temporal change characteristics are assigned to individual pixel points. The numerical distribution of the change characteristic degrees of all pixel points then yields the target areas, and a movement analysis coefficient is determined for each. By attaching the temporal movement behavior to the corresponding pixel points, areas with the same movement characteristics can be accurately identified, enabling more reliable movement analysis. Finally, the image to be detected is enhanced according to the movement analysis coefficients of all target areas to obtain an enhanced image, and the moving target is monitored and tracked across the enhanced images of all frames.
Because a single target area represents similar movement characteristics, it is likely to correspond to one complete moving individual. Adaptive image enhancement of the target areas therefore enhances the moving object effectively, eliminating the influence of overall darkness at night while clearly distinguishing the moving object from the background. Over-enhancement is avoided, the texture of the moving object in the enhanced image is clearer, recognition of the moving object is improved, and the monitoring and tracking effect is enhanced.
Drawings
To illustrate the embodiments of the invention and the technical solutions of the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings described below show only some embodiments of the invention; a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of an intelligent visual monitoring and tracking method for a general network according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a night scene according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method for obtaining a pixel variation degree according to an embodiment of the present invention;
FIG. 4 is a flowchart of a method for obtaining a degree of variation characteristics of a target point according to an embodiment of the present invention;
Fig. 5 is a block diagram of an intelligent visual monitoring and tracking system for a general network according to an embodiment of the present invention.
Detailed Description
To further explain the technical means and effects adopted by the invention, the following describes in detail, with reference to the accompanying drawings and preferred embodiments, the specific implementations, structures, features, and effects of the intelligent visual monitoring and tracking method and system for a general network. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes the specific scheme of the intelligent visual monitoring and tracking method for a general network provided by the invention with reference to the accompanying drawings.
Referring to Fig. 1, a flowchart of an intelligent visual monitoring and tracking method for a general network according to an embodiment of the present invention is shown; the method comprises:
S101: acquiring night monitoring grayscale images of consecutive frames, and determining a quality evaluation parameter for each frame according to the grayscale distribution of all pixel points in that frame.
In the embodiment of the invention, monitoring devices may be deployed for areas such as roads, factories, and parks. Because a monitoring device is usually static, a single device cannot meet the real-time monitoring requirements of large-scale, complex scenes. A general network monitoring system can therefore be constructed in which multiple monitoring devices are connected together, achieving information sharing and centralized management and enabling automatic identification, tracking, and analysis of moving objects.
The moving object in the embodiment of the present invention may be, for example, a moving animal, an automobile, or a moving mechanical part; this is not limited.
It should be noted that the specific implementation scenario of the embodiment is a night scene, analyzed in conjunction with Fig. 2, which is a schematic diagram of a night scene provided by one embodiment of the invention. In a night scene the light is dim, and when the difference between the moving object and the background is small, the monitoring picture cannot be identified accurately and the moving object may be lost during tracking; the night monitoring video images therefore need to be enhanced. Image enhancement in the related art operates on the whole image, which tends to ignore the predominance of dark regions and thus over-enhances the moving object.
The night monitoring grayscale images of consecutive frames are generated from the image information captured by the monitoring devices: original images can be acquired at 30 frames per second and then preprocessed to obtain the night monitoring grayscale images.
The image preprocessing may specifically include image denoising, image graying, and semantic segmentation, where the denoising may be mean filtering and the graying may be mean graying. In other embodiments of the invention the preprocessing techniques may be adjusted as needed to suit the actual use scenario, which is not detailed or limited here.
The quality evaluation parameter quantifies the image quality of a night monitoring grayscale image, with the grayscale distribution of the image serving as the evaluation standard.
Further, in some embodiments of the present invention, acquiring the quality evaluation parameter of a night monitoring grayscale image comprises: determining the minimum gray value among the pixel points of the frame; and averaging the differences between the gray value of each pixel point of the frame and the minimum gray value to obtain the quality evaluation parameter of that frame.
The minimum gray value characterizes the darkest pixel in the night monitoring grayscale image. The more bright-part information the image contains and the more distinct its bright textures, the better the image is suited to analysis, i.e. the better its quality evaluation.
In the embodiment of the invention, the difference between each pixel point's gray value and the minimum gray value is taken as that pixel's brightness information: the larger the difference, the brighter the texture at that pixel. The mean of these differences over all pixel points is then taken as the overall brightness information, yielding the quality evaluation parameter of the night monitoring grayscale image.
In other embodiments of the present invention, the variance of the gray values of all pixel points may additionally be blended in as a positive factor, since it further characterizes the image's texture distribution: the larger the variance, the more distinct the texture, i.e. the better the quality. The quality evaluation parameter is then the product of the mean difference described above and the gray-value variance of all pixel points.
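A minimal sketch of this quality evaluation parameter follows; the function name is illustrative, and the variance-weighted variant corresponds to the alternative embodiment described above.

```python
import numpy as np

def quality_evaluation_parameter(gray, use_variance=False):
    """Mean of each pixel's gray value minus the frame's minimum gray
    value (the averaging step); optionally multiplied by the gray-value
    variance as in the alternative embodiment."""
    g = np.asarray(gray, dtype=np.float64)
    brightness = g - g.min()       # per-pixel brightness above the darkest pixel
    q = brightness.mean()
    if use_variance:
        q *= g.var()               # more texture (larger variance) -> better quality
    return q
```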
S102: taking the former of two consecutive night monitoring grayscale frames as a contrast image and the latter as an image to be detected; and determining the pixel change degree of the image to be detected according to the quality evaluation parameters of the contrast image and the image to be detected and the grayscale differences of pixel points at the same image positions.
The embodiment of the invention aims to monitor and track a moving object, so the night monitoring grayscale images of adjacent frames must be analyzed specifically; the overall monitoring process is a continuous process.
In the embodiment of the invention, for two consecutive night monitoring grayscale frames (frame t and frame t-1, where t is a positive integer and t > 1), frame t-1 is taken as the contrast image and frame t as the image to be detected.
Because the contrast image and the image to be detected are adjacent night monitoring grayscale frames, any change between them indicates a moving object; the pixel change degree of the image to be detected is therefore analyzed through the two frames' quality evaluation parameters and the grayscale differences of pixel points at the same image positions.
The pixel change degree represents how the pixel points change from the contrast image to the image to be detected. Since the two frames are adjacent, this change can be treated as instantaneous: the larger the pixel change degree, the stronger the instantaneous change characteristic, meaning the moving object has undergone a larger instantaneous displacement relative to the contrast image.
Further, in some embodiments of the present invention, referring to Fig. 3, which is a flowchart of a method for obtaining the pixel change degree according to an embodiment of the present invention, the method comprises the following steps:
S301: determining the absolute difference between the quality evaluation parameters of the image to be detected and the contrast image as the evaluation difference index.
The evaluation difference index is the absolute difference between the quality evaluation parameters of the image to be detected and the contrast image; the larger it is, the greater the difference between the two frames' quality evaluation parameters.
In the embodiment of the invention, a large difference in quality evaluation parameters between two consecutive night monitoring grayscale frames means the overall background has undergone a large scene change, such as a change in lighting or a mode adjustment of the capture device.
S302: performing frame-difference processing on the contrast image and the image to be detected to obtain a difference image, and summing the gray values of all pixel points in the difference image to obtain the gray change index.
Frame-difference processing is an image differencing technique known to those skilled in the art: two images are superimposed and the absolute difference of each pair of overlapping pixel points is computed, yielding a difference image that represents the grayscale difference at each position between the two frames.
The gray value of a pixel point in the difference image is the absolute grayscale difference between the image to be detected and the contrast image at that position; the larger it is, the more pronounced the change of that pixel point from the contrast image to the image to be detected.
In the embodiment of the invention, the gray change index is obtained by summing the gray values of all pixel points in the difference image; the larger the gray change index, the larger the overall image change from the contrast image to the image to be detected.
S303: determining the pixel change degree from the evaluation difference index and the gray change index, both of which are positively correlated with the pixel change degree.
It can be appreciated that a large difference in quality evaluation parameters corresponds to a large scene change in the overall background of two consecutive frames, and a large gray change index corresponds to a large overall image change from the contrast image to the image to be detected.
The embodiment of the invention therefore combines the evaluation difference index and the gray change index to analyze the pixel change degree.
A positive correlation means the dependent variable increases as the independent variable increases and decreases as it decreases; the specific relationship may be multiplicative, additive, a power or exponential function, and so on, as determined by the actual application. A negative correlation means the dependent variable decreases as the independent variable increases and increases as it decreases; it may be a subtractive or divisive relationship, likewise determined by the actual application.
Thus the evaluation difference index and the gray change index may each be normalized and then multiplied to obtain the pixel change degree; alternatively, their product or their sum may be computed directly. Any form is acceptable so long as larger values of the evaluation difference index and the gray change index yield a correspondingly larger pixel change degree; the specific positively correlated computation is not limited.
By combining the evaluation difference index and the gray change index, the pixel change degree accurately represents how the image to be detected changes between adjacent frames, enabling effective change analysis of the image to be detected.
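Steps S301 to S303 might be sketched as follows; the product form is one of the positively correlated combinations the embodiment allows, and the function and variable names are illustrative.

```python
import numpy as np

def pixel_change_degree(contrast, detected, q_contrast, q_detected):
    """S301: evaluation difference index = |Q_t - Q_(t-1)|.
    S302: gray change index = sum of the frame-difference image.
    S303: combine the two; the product is one admissible positively
    correlated form."""
    eval_diff = abs(q_detected - q_contrast)
    diff_image = np.abs(detected.astype(np.int64) - contrast.astype(np.int64))
    gray_change = diff_image.sum()
    return eval_diff * gray_change
```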
S103: taking all night monitoring grayscale frames within a preset time-sequence range centered on the moment of the image to be detected as analysis images, and taking any one pixel point in the image to be detected as a target point; and determining the change characteristic degree of the target point according to the pixel change degrees of the image to be detected and the contrast image, the grayscale distribution within the target point's preset neighborhood range, and the grayscale distribution of the pixel points at the same image position as the target point in the analysis images.
The preset time-sequence range may be, for example, 1 second before and after the moment of the image to be detected, i.e. a 2-second window; all night monitoring grayscale frames acquired within those 2 seconds serve as analysis images for analyzing longer-term temporal change.
The embodiment of the invention can also analyze from the dimension of image position; any one pixel point in the image to be detected is taken as the target point.
Further, in some embodiments of the present invention, referring to fig. 4, which is a flowchart of a method for obtaining the change characteristic degree of a target point according to an embodiment of the present invention, the method comprises the following steps:
S401: and determining the gray value difference between the target point in the image to be detected and each pixel point in the preset neighborhood range, and solving the average value to obtain a neighborhood gray difference index.
The preset neighborhood range in the embodiment of the present invention may specifically be, for example, a pixel point range with a size of 7×7 centered on the target point.
It can be understood that, because the overall gray distribution of a night scene is dark, the difference between a moving object and the background is small. As a result, features of the moving object are easily lost during pixel tracking: the pixel points corresponding to the moving object differ little from their surrounding neighborhood points, causing confusion. Conversely, when the target point has a large gray difference from the pixel points in its preset neighborhood range, recognition within the local range of the target point is better, and the subsequent movement analysis is more reliable.
Therefore, the invention calculates the gray value difference between the target point and each pixel point in the preset neighborhood range, and takes the average of all these gray value differences as the neighborhood gray difference index of the target point. The larger the neighborhood gray difference index, the more distinct the local texture distribution around the target point.
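A minimal sketch of the neighborhood gray difference index, using the 7x7 neighborhood from the text. Clipping the window at image borders and excluding the target point itself from the average are assumptions the patent does not specify.

```python
import numpy as np

def neighborhood_gray_diff(img, row, col, size=7):
    """Mean absolute gray difference between the target point and every other
    pixel in a size x size neighborhood centered on it (7x7 as in the text)."""
    h = size // 2
    r0, r1 = max(row - h, 0), min(row + h + 1, img.shape[0])
    c0, c1 = max(col - h, 0), min(col + h + 1, img.shape[1])
    patch = img[r0:r1, c0:c1].astype(float)
    diffs = np.abs(patch - float(img[row, col]))
    return diffs.sum() / (diffs.size - 1)  # exclude the target point itself

# A bright target point on a dark background gives a large index value.
img = np.zeros((9, 9), dtype=np.uint8)
img[4, 4] = 48
```

Here `neighborhood_gray_diff(img, 4, 4)` is large for the isolated bright point and exactly zero on a uniform image, matching the text's reading of the index as local texture distinctness.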
S402: and taking pixel points which are positioned at the same image position with the target point in all the analysis images as analysis points, calculating gray value differences between the target point and each analysis point respectively, and solving an average value to obtain a time sequence gray difference index.
The analysis points are determined from the analysis images: each analysis image contributes exactly one analysis point, namely the pixel point at the same image position as the target point.
In the embodiment of the invention, the time sequence gray difference index is obtained by calculating the absolute difference between the gray value of the target point and that of each analysis point, then averaging over all analysis points. The time sequence gray difference index represents the gray change of the target point within the corresponding time sequence range; the larger its value, the larger the gray change of the target point within the corresponding preset time sequence range.
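The step above can be sketched directly; the frame values below are illustrative.

```python
import numpy as np

def temporal_gray_diff(target_value, analysis_frames, row, col):
    """Mean absolute difference between the target point's gray value and the
    gray values of the analysis points (same position in each analysis frame)."""
    vals = np.array([frame[row, col] for frame in analysis_frames], dtype=float)
    return np.abs(vals - float(target_value)).mean()

# Three analysis frames whose pixel at (1, 1) holds gray values 10, 20, 40;
# the target point's gray value in the image to be detected is 30.
frames = [np.full((4, 4), v, dtype=np.uint8) for v in (10, 20, 40)]
index = temporal_gray_diff(30, frames, 1, 1)  # mean of |10-30|, |20-30|, |40-30|
```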
S403: and determining the absolute value of the difference value of the pixel change degree of the image to be detected and the contrast image as a pixel change difference index of the image to be detected.
In the embodiment of the invention, the absolute difference between the pixel change degrees of the image to be detected and the contrast image represents how differently the two images change. When the pixel change difference index is large, the pixel change between the image to be detected and the contrast image is not necessarily caused by object movement but may instead be caused by a light change (for example, a background vehicle suddenly turning on its lights). The index is therefore used as a weight: the larger its value, the more likely the observed change is an effect of background change.
S404: and determining the change characteristic degree of the target point by combining the neighborhood gray level difference index, the time sequence gray level difference index and the pixel change difference index.
The neighborhood gray difference index represents the distribution characteristic in the spatial dimension, and the time sequence gray difference index represents the change characteristic in the temporal dimension; both are therefore positively correlated with the change characteristic degree. The larger the pixel change difference index, the more likely the change is an effect of the background, so the pixel change difference index is negatively correlated with the change characteristic degree.
In the embodiment of the invention, the product of the time sequence gray scale difference index and the neighborhood gray scale difference index can be calculated, and the ratio of the product to the pixel change difference index is used as the change characteristic degree.
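The combination described above can be written as a one-line formula. The small epsilon guarding against a zero denominator is an assumption; the patent does not address the case where the two pixel change degrees coincide.

```python
def change_feature_degree(neigh_diff, temporal_diff,
                          degree_test, degree_contrast, eps=1e-6):
    """Change characteristic degree of a target point:
    (time sequence gray difference index * neighborhood gray difference index)
    divided by the pixel change difference index, as stated in the text."""
    pixel_change_diff = abs(degree_test - degree_contrast)
    return neigh_diff * temporal_diff / (pixel_change_diff + eps)

value = change_feature_degree(48.0, 10.0, 0.5, 0.3)
```

A large spatial index and a large temporal index raise the degree; a large background-driven pixel change difference suppresses it, consistent with the stated correlations.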
By combining the neighborhood gray difference index, the time sequence gray difference index and the pixel change difference index, each pixel point in the image to be detected can be analyzed in both the spatial and temporal dimensions. Combining the temporal change and spatial distribution of each pixel point with the scene change of the image to be detected yields a more accurate and effective change characteristic analysis, so that the change characteristic degree of the target point more accurately and objectively represents the change at the target point's position.
S104: dividing the image to be detected into at least two target areas according to the numerical distribution of the variation characteristic degree of all pixel points in the image to be detected; and determining a movement analysis coefficient of the target area according to the change characteristic degree of all the pixel points in each target area.
The change characteristic degree characterizes the change at the target point's position: pixel points whose change characteristic degrees are close in value and whose positions are close in the image are more likely to belong to the same object and to share the same moving state.
Further, in some embodiments of the present invention, dividing the image to be measured into at least two target areas includes: based on a DBSCAN density clustering algorithm, clustering is carried out according to the variation characteristic degree of all pixel points in the image to be detected, clustering clusters are obtained, and each clustering cluster is used as a target area.
The DBSCAN density clustering algorithm is well known in the art. The change characteristic degree of each pixel point is used as the numerical feature for density clustering, so that all pixel points in the image to be detected are clustered into clusters, and all the pixel points contained in each cluster form one target area.
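A minimal sketch of this step. Clustering on (row, col, scaled feature) so that both position and change characteristic degree drive the grouping is an interpretation of the text's aim of grouping points "close in distance and close in change characteristic degree"; the `feature_weight` scale and the tiny DBSCAN below (sklearn.cluster.DBSCAN would do the same job) are assumptions.

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN. Returns one cluster label per point, -1 for noise."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    neigh = [np.flatnonzero(dist[i] <= eps) for i in range(n)]  # includes self
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        if len(neigh[i]) < min_pts:
            continue  # noise for now (may later become a border point)
        labels[i] = cluster
        seeds = list(neigh[i])
        k = 0
        while k < len(seeds):          # expand the cluster from core points
            j = seeds[k]
            if not visited[j]:
                visited[j] = True
                if len(neigh[j]) >= min_pts:
                    seeds.extend(neigh[j])
            if labels[j] == -1:
                labels[j] = cluster
            k += 1
        cluster += 1
    return labels

feature_weight = 1.0  # hypothetical scale between position and feature value
pts = np.array([[0, 0, 2.0], [0, 1, 2.1], [1, 0, 1.9],        # one object
                [10, 10, 8.0], [10, 11, 8.2], [11, 10, 7.9]])  # another object
pts[:, 2] *= feature_weight
labels = dbscan(pts, eps=1.5, min_pts=2)  # two clusters -> two target areas
```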
The invention aims to group pixel points that are close in distance and close in change characteristic degree into one target area. Of course, in other embodiments of the invention, region growing can be used instead: with the change characteristic degree as the growth criterion, the whole image to be detected is traversed and divided into regions, yielding a plurality of target areas.
In the embodiment of the invention, a target area is a region with similar gray characteristics and similar motion characteristics. Once the target areas are confirmed, change analysis of each target area can be carried out according to the change characteristic degrees of all the pixel points it contains.
Further, in some embodiments of the present invention, a method for obtaining a mobile analysis coefficient includes: and calculating the average value of the variation characteristic degrees of all the pixel points in the same target area, and taking the average value as a movement analysis coefficient of the corresponding target area.
That is, the larger the average change characteristic degree of all pixel points in a target area, i.e. the larger the movement analysis coefficient, the more obvious the change in that target area, and the stronger the enhancement it requires in order to improve its distinction from the background and achieve a clear display.
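The coefficient computation, together with the threshold screening of claim 4, reduces to a few lines. The threshold value is hypothetical; the patent only calls it a "preset coefficient threshold".

```python
import numpy as np

# Change characteristic degree of each pixel and its target-area label.
feature = np.array([2.0, 2.1, 1.9, 8.0, 8.2, 7.9])
labels = np.array([0, 0, 0, 1, 1, 1])

# Movement analysis coefficient: mean feature degree per target area.
move_coeff = {lab: feature[labels == lab].mean() for lab in np.unique(labels)}

# Screening into moving areas (claim 4): coefficient above a preset threshold.
threshold = 5.0  # hypothetical preset coefficient threshold
moving_areas = [lab for lab, c in move_coeff.items() if c > threshold]
```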
S105: image enhancement is carried out on the image to be detected according to all the target area movement analysis coefficients, and an enhanced image is obtained; and monitoring and tracking the moving target according to the enhanced images of all the frames.
In the embodiment of the invention, the larger the movement analysis coefficient, the more obvious the change in the target area and the stronger the enhancement needed to achieve a clear display.
Alternatively, in other embodiments of the present invention, moving areas may be obtained by screening the target areas according to the movement analysis coefficient; the movement analysis coefficient is then used as the image enhancement weight, image enhancement is applied to each moving area to obtain an enhanced area, and all moving areas are traversed to obtain the enhanced image.
That is, by screening out the moving areas, analysis of background areas that move only slightly or not at all is avoided, which improves image enhancement efficiency.
In the embodiment of the present invention, the image enhancement method may specifically be, for example, gamma transformation, with the corresponding expression:

g'(x, y) = Lin( g(x, y)^w )

where g'(x, y) represents the gray value of a pixel point in the moving area after image enhancement, g(x, y) represents the gray value of the corresponding pixel before enhancement, w represents the image enhancement weight, i.e. the movement analysis coefficient, and Lin(·) represents linearly mapping the result into the range [0, 255] and rounding.
According to the embodiment of the invention, through gamma transformation, regions with larger movement analysis coefficients receive a stronger enhancement effect, which ensures the clarity of those regions while effectively distinguishing them from the background, enhancing the overall display of the image to be detected.
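A hedged sketch of the gamma step. The patent states only that the coefficient acts as the enhancement weight in the exponent; using 1/weight as the gamma (so a larger coefficient brightens a dark region more) and the min-max remapping to [0, 255] are assumptions of this sketch.

```python
import numpy as np

def gamma_enhance(region, weight):
    """Gamma-style enhancement of a moving area: raise normalized gray values
    to a power derived from the movement analysis coefficient, then linearly
    map back to [0, 255] and round, as in the expression above."""
    norm = region.astype(np.float64) / 255.0
    powed = norm ** (1.0 / max(weight, 1e-6))  # assumption: exponent = 1/weight
    lo, hi = powed.min(), powed.max()
    if hi > lo:
        powed = (powed - lo) / (hi - lo)       # linear map to [0, 1]
    return np.round(powed * 255.0).astype(np.uint8)

dark = np.array([[10, 20], [40, 80]], dtype=np.uint8)  # a dim moving area
bright = gamma_enhance(dark, weight=2.0)
```

The gray ordering of the region is preserved while its contrast is stretched over the full display range, matching the stated goal of distinguishing the moving object from a dark background.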
Further, in some embodiments of the present invention, the monitoring and tracking of the moving object is implemented according to the enhanced images of all frames, including: and carrying out optical flow analysis on the enhanced images of all frames based on an optical flow method, and determining the action track of the moving target according to the analysis result.
The optical flow method can effectively analyze the movement direction of an object and realize target tracking. In the embodiment of the invention, image enhancement as described above is applied to all frames of night monitoring gray images to obtain an enhanced image for each frame; the optical flow method is then applied to the enhanced frames in time sequence, determining the moving target and its action track over the time sequence corresponding to all frames of night monitoring gray images.
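The patent does not name a particular optical flow algorithm; in practice OpenCV's `calcOpticalFlowPyrLK` or `calcOpticalFlowFarneback` would typically be used. As a self-contained illustration, the sketch below estimates flow at a single point with the classic Lucas-Kanade least-squares step, under the assumption of small motion between enhanced frames.

```python
import numpy as np

def lucas_kanade_point(prev, curr, row, col, win=7):
    """Single-point Lucas-Kanade flow estimate between two gray frames:
    solve the 2x2 normal equations over a small window around the point."""
    prev = prev.astype(np.float64)
    Iy, Ix = np.gradient(prev)                    # spatial gradients
    It = curr.astype(np.float64) - prev           # temporal derivative
    h = win // 2
    sl = (slice(row - h, row + h + 1), slice(col - h, col + h + 1))
    ix, iy, it = Ix[sl].ravel(), Iy[sl].ravel(), It[sl].ravel()
    A = np.array([[ix @ ix, ix @ iy],
                  [ix @ iy, iy @ iy]])
    b = -np.array([ix @ it, iy @ it])
    u, v = np.linalg.solve(A, b)                  # flow along x (cols), y (rows)
    return u, v

# Synthetic check: a smooth blob shifted one pixel to the right.
y, x = np.mgrid[0:32, 0:32]
frame1 = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 50.0)
frame2 = np.roll(frame1, 1, axis=1)
u, v = lucas_kanade_point(frame1, frame2, 16, 16)  # expect u near 1, v near 0
```

Repeating this over tracked points in each consecutive pair of enhanced frames yields the per-frame displacements that, chained in time sequence, form the moving target's action track.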
Of course, other time sequence analyses can also be used in embodiments of the invention to determine the moving target and its action track. The aim of the embodiments is to adaptively enhance images whose display quality suffers from dim night scenes, so that moving targets are displayed more clearly, thereby achieving more accurate and stable target monitoring and tracking.
According to the invention, night monitoring gray images of continuous frames are acquired, and a quality evaluation parameter is determined for each frame; the quality evaluation parameter accurately represents the gray distribution of the night monitoring gray image, enabling reliable gray distribution analysis. A contrast image and an image to be detected are determined, and the pixel change degree of the image to be detected is determined from the quality evaluation parameters of the two images and the gray differences of pixel points at the same image positions; the pixel change degree characterizes the instantaneous pixel change of the image to be detected. Analysis images and a target point are then determined, and the change characteristic degree of the target point is obtained; because it combines the analysis images with the pixel change degree of the image to be detected, change analysis is achieved across both the temporal and spatial dimensions, assigning each pixel point a temporal change characteristic. The numerical distribution of the change characteristic degrees of all pixel points is then used to obtain target areas, and a movement analysis coefficient is determined for each target area; by attaching temporal movement information to the corresponding pixel points, areas with the same movement characteristics can be accurately identified, enabling more reliable movement analysis. Finally, the image to be detected is enhanced according to the movement analysis coefficients of all target areas to obtain an enhanced image, and the moving target is monitored and tracked using the enhanced images of all frames.
Pixels in the same target area share similar movement characteristics and are thus more likely to represent one complete individual. Applying adaptive image enhancement to each target area therefore effectively enhances the moving object, eliminating the effect of overall darkness at night while clearly distinguishing the moving object from the background and avoiding over-enhancement. The texture of the moving object in the enhanced image becomes clearer, which effectively assists identification of the moving object and strengthens the monitoring and tracking effect.
The invention also provides a network-connected intelligent visual monitoring and tracking system. Referring to fig. 5, which is a block diagram of the system provided by an embodiment of the invention, the system 600 comprises a memory 601, a processor 602 and a computer program 603 stored in the memory 601 and executable on the processor 602; the steps of the network-connected intelligent visual monitoring and tracking method are implemented when the processor 602 executes the computer program 603.
The present embodiment also provides a computer readable storage medium, in which a computer program code is stored, which when run on a computer, causes the computer to execute the above related method steps to implement a network-connected intelligent vision monitoring tracking method provided in the above embodiment.
This embodiment also provides a computer program product which, when run on a computer, causes the computer to execute the above related steps so as to implement the network-connected intelligent visual monitoring and tracking method.
The system, the computer readable storage medium, and the computer program product provided in this embodiment are used to execute the corresponding method provided above; their advantages therefore correspond to those of that method and are not repeated here.
It should be noted that the order of the embodiments of the present invention is for description only and does not indicate their relative merits. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible and may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.
Claims (6)
1. A network-connected intelligent vision monitoring and tracking method, characterized by comprising the following steps:
acquiring night monitoring gray level images of continuous frames, and determining quality evaluation parameters of the night monitoring gray level images of each frame according to gray level distribution of all pixel points in the night monitoring gray level images of each frame;
Taking the former frame in the continuous two-frame night monitoring gray level image as a contrast image and the latter frame as an image to be detected; determining the pixel change degree of the image to be detected according to quality evaluation parameters of the contrast image and the image to be detected and the gray level difference of pixel points at the same image position;
taking all frames of night monitoring gray images as analysis images in a preset time sequence range with the corresponding moment of the image to be detected as the center; optionally, taking one pixel point in the image to be detected as a target point; determining the change characteristic degree of the target point according to the pixel change degree of the image to be detected and the contrast image, the gray distribution of the target point in a corresponding preset neighborhood range and the gray distribution of the pixel point at the same image position as the target point in the analysis image;
Dividing the image to be detected into at least two target areas according to the numerical distribution of the variation characteristic degree of all pixel points in the image to be detected; determining a movement analysis coefficient of each target area according to the change characteristic degree of all pixel points in the target area;
performing image enhancement on the image to be detected according to the movement analysis coefficients of all the target areas to obtain an enhanced image; monitoring and tracking of the moving target are realized according to the enhanced images of all frames;
The method for acquiring the quality evaluation parameters of the night monitoring gray level image comprises the following steps:
determining the minimum gray value of a pixel point in the night monitoring gray image of the same frame;
carrying out homogenization treatment on the difference values of the gray values and the gray minimum values of all pixel points of the night monitoring gray image of the same frame to obtain quality evaluation parameters of the corresponding night monitoring gray image;
the method for acquiring the pixel change degree of the image to be detected comprises the following steps:
Determining the absolute value of the difference value of the quality evaluation parameters of the image to be detected and the contrast image as an evaluation difference index;
performing frame difference processing on the contrast image and the image to be detected to obtain a difference image, and determining gray values and values of all pixel points in the difference image to obtain a gray change index;
Determining the pixel change degree according to the evaluation difference index and the gray level change index, wherein the evaluation difference index and the gray level change index are in positive correlation with the pixel change degree;
the method for acquiring the change characteristic degree of the target point comprises the following steps:
Determining the gray value difference between the target point in the image to be detected and each pixel point in a preset neighborhood range, and solving a mean value to obtain a neighborhood gray difference index;
Taking pixel points in the same image position with the target point in all the analysis images as analysis points, calculating gray value differences between the target point and each analysis point respectively, and solving a mean value to obtain a time sequence gray difference index;
Determining the absolute value of the difference value of the pixel change degree of the image to be detected and the contrast image as a pixel change difference index of the image to be detected;
Determining the change characteristic degree of the target point by combining the neighborhood gray scale difference index, the time sequence gray scale difference index and the pixel change difference index, wherein the pixel change difference index and the change characteristic degree form a negative correlation, and the time sequence gray scale difference index and the neighborhood gray scale difference index form a positive correlation with the change characteristic degree; calculating the product of the time sequence gray scale difference index and the neighborhood gray scale difference index, and taking the ratio of the product to the pixel change difference index as the change characteristic degree;
The method for acquiring the mobile analysis coefficient comprises the following steps:
and calculating the average value of the variation characteristic degrees of all the pixel points in the same target area, and taking the average value as a movement analysis coefficient of the corresponding target area.
2. The method for intelligent visual monitoring and tracking of a through-network as claimed in claim 1, wherein dividing the image to be detected into at least two target areas comprises:
Based on a DBSCAN density clustering algorithm, clustering is carried out according to the variation characteristic degree of all pixel points in the image to be detected, clustering clusters are obtained, and each clustering cluster is used as a target area.
3. The method for intelligent visual monitoring and tracking of a through network according to claim 1, wherein the image enhancement of the image to be detected according to the movement analysis coefficients of all target areas to obtain an enhanced image comprises:
Screening the target area according to the movement analysis coefficient to obtain a movement area;
and taking the movement analysis coefficient as an image enhancement weight, carrying out image enhancement processing on the movement region to obtain an enhancement region, and traversing all the movement regions to obtain an enhanced image.
4. The method for intelligent visual monitoring and tracking of a through-network as claimed in claim 3, wherein the method for acquiring the mobile area comprises the following steps:
And taking the target area with the movement analysis coefficient larger than a preset coefficient threshold value as a movement area.
5. The method for intelligent visual monitoring and tracking of a through-network according to claim 1, wherein the monitoring and tracking of the moving target are realized according to the enhanced images of all frames, comprising:
And carrying out optical flow analysis on the enhanced images of all frames based on an optical flow method, and determining the action track of the moving target according to the analysis result.
6. A network-enabled intelligent vision monitoring and tracking system, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1-5 when executing the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410918075.1A CN118470071B (en) | 2024-07-10 | 2024-07-10 | Intelligent vision monitoring and tracking method and system for general network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118470071A true CN118470071A (en) | 2024-08-09 |
CN118470071B CN118470071B (en) | 2024-09-17 |
Family
ID=92164109
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410918075.1A Active CN118470071B (en) | 2024-07-10 | 2024-07-10 | Intelligent vision monitoring and tracking method and system for general network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118470071B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022027444A1 (en) * | 2020-08-06 | 2022-02-10 | 深圳市大疆创新科技有限公司 | Event detection method and device, movable platform, and computer-readable storage medium |
WO2022027931A1 (en) * | 2020-08-07 | 2022-02-10 | 东南大学 | Video image-based foreground detection method for vehicle in motion |
CN117690085A (en) * | 2023-12-13 | 2024-03-12 | 济南福深兴安科技有限公司 | Video AI analysis system, method and storage medium |
CN117935177A (en) * | 2024-03-25 | 2024-04-26 | 东莞市杰瑞智能科技有限公司 | Road vehicle dangerous behavior identification method and system based on attention neural network |
CN117934355A (en) * | 2024-01-23 | 2024-04-26 | 苏州世航智能科技有限公司 | Visual positioning method for underwater robot |
Also Published As
Publication number | Publication date |
---|---|
CN118470071B (en) | 2024-09-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101827204B (en) | Method and system for detecting moving object | |
CN110610150B (en) | Tracking method, device, computing equipment and medium of target moving object | |
CN115661669B (en) | Method and system for monitoring illegal farmland occupancy based on video monitoring | |
US11700457B2 (en) | Flicker mitigation via image signal processing | |
CN116188328B (en) | Parking area response lamp linked system based on thing networking | |
Jia et al. | A two-step approach to see-through bad weather for surveillance video quality enhancement | |
CN117593592B (en) | Intelligent scanning and identifying system and method for foreign matters at bottom of vehicle | |
CN106887002B (en) | A kind of infrared image sequence conspicuousness detection method | |
CN117132510A (en) | Monitoring image enhancement method and system based on image processing | |
CN117994165B (en) | Intelligent campus management method and system based on big data | |
CN115393774A (en) | Lightweight fire smoke detection method, terminal equipment and storage medium | |
CN116310993A (en) | Target detection method, device, equipment and storage medium | |
Raikwar et al. | Adaptive dehazing control factor based fast single image dehazing | |
CN116263942A (en) | Method for adjusting image contrast, storage medium and computer program product | |
CN118470071B (en) | Intelligent vision monitoring and tracking method and system for general network | |
Li et al. | Grain depot image dehazing via quadtree decomposition and convolutional neural networks | |
CN114598849B (en) | Building construction safety monitoring system based on thing networking | |
Soumya et al. | Self-organized night video enhancement for surveillance systems | |
CN114581475A (en) | Laser stripe segmentation method based on multi-scale saliency features | |
Wang et al. | Low-light traffic objects detection for automated vehicles | |
CN114549340A (en) | Contrast enhancement method, computer program product, storage medium, and electronic device | |
CN114708544A (en) | Intelligent violation monitoring helmet based on edge calculation and monitoring method thereof | |
Huang et al. | Efficient image dehazing algorithm using multiple priors constraints | |
CN108596893B (en) | Image processing method and system | |
Rajasekaran et al. | Image dehazing algorithm based on artificial multi-exposure image fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||