CN117690085A - Video AI analysis system, method and storage medium - Google Patents
- Publication number
- CN117690085A (application CN202311709746.5A)
- Authority
- CN
- China
- Prior art keywords
- pixel point
- change
- image frame
- monitoring image
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06V 20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects (under G06V 20/00 Scenes; scene-specific elements; G06V 20/50 Context or environment of the image)
- G06N 20/00 — Machine learning
- G06V 10/26 — Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion (under G06V 10/20 Image preprocessing)
- G06V 10/764 — Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
Abstract
The invention relates to the technical field of video monitoring and processing, and in particular to a video AI analysis system, a video AI analysis method and a storage medium. In the method, the change degree of each pixel point is obtained from the motion changes of that pixel point across a plurality of monitoring image frames, and the pixel points are classified into region categories based on the change trend of the change degrees. From the gray-level deviation of each pixel point and the gray-level differences between region categories, a gray influence index of each pixel point in each monitoring image frame is obtained; this is combined with the distribution of the change degrees to obtain the enhancement intensity of each pixel point in each monitoring image frame. An enhanced monitoring video is then produced according to the enhancement intensity of each pixel point in each monitoring image frame and used for anomaly analysis. By combining the motion changes and gray-level conditions of the pixel points, the video is enhanced accurately, yielding a higher-quality video and more accurate results when analysing the surveillance video for abnormalities.
Description
Technical Field
The invention relates to the technical field of video monitoring and processing, in particular to a video AI analysis system, a video AI analysis method and a storage medium.
Background
With the development of the internet, artificial intelligence has risen rapidly and, owing to its low cost and intelligence, is applied in many fields. During mineral transportation at a mine, the video collected by on-site cameras can be analysed and monitored online to ensure transportation safety, facilitating the detection of, and early warning about, abnormalities during mineral transportation.
Because the mine transportation environment is dark, existing video-monitoring image-processing techniques generally enhance an image as a whole without considering the degree of enhancement required by different areas of the scene. As a result, the enhancement effect in some areas is poor and detail may even be lost; the resulting low video quality then prevents accurate anomaly analysis of the surveillance video.
Disclosure of Invention
To solve the technical problem that, in the prior art, the enhancement effect on image frames is poor and anomaly analysis of video monitoring cannot be performed accurately, the invention provides a video AI analysis system, a video AI analysis method and a storage medium, adopting the following technical scheme:
the invention provides a video AI analysis method, which comprises the following steps:
Acquiring two or more monitoring image frames from the mineral transportation monitoring video;
obtaining a movement change value of each pixel point between every two adjacent monitoring image frames according to the motion change of that pixel point between the two frames; obtaining the change degree of each pixel point according to the fluctuation and overall distribution of all the movement change values corresponding to that pixel point; classifying the pixel points according to the distribution change trend of the change degrees among all pixel points to obtain the region category of each pixel point;
in each monitoring image frame, acquiring a gray influence index of each pixel point according to the deviation of its gray value and the gray differences between its region category and the other region categories; obtaining the enhancement intensity of each pixel point in each monitoring image frame according to the distribution of the change degrees within the pixel point's region category and the gray influence index of that pixel point in the frame;
performing image enhancement on each monitoring image frame in the mineral transportation monitoring video according to the enhancement intensity of its pixel points to obtain an enhanced monitoring video; and performing anomaly analysis on the enhanced monitoring video.
Further, the method for obtaining the degree of change includes:
sequentially taking each pixel point as a target pixel point, and taking the average value of all movement change values corresponding to the target pixel point as an integral change index of the target pixel point;
taking the difference between the maximum and minimum of all the movement change values corresponding to the target pixel point as the change range of the target pixel point; taking the sum of the change range of the target pixel point and a preset adjustment parameter as the change floating value of the target pixel point; calculating the product of the change floating value of the target pixel point and the maximum of its movement change values to obtain the change degree index of the target pixel point; the preset adjustment parameter is a positive number;
obtaining the change degree of the target pixel point according to the change degree index and the overall change index of the target pixel point; the change degree index and the overall change index are positively correlated with the change degree.
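The change-degree computation described above can be sketched as follows. This is a minimal, non-authoritative illustration: the array layout, the function name `change_degree` and the default `alpha` are assumptions, not part of the patent.

```python
import numpy as np

def change_degree(move_values: np.ndarray, alpha: float = 0.001) -> np.ndarray:
    """Per-pixel change degree from movement change values.

    move_values -- shape (num_pixels, num_frame_pairs): one movement change
    value per pixel for each pair of adjacent monitoring image frames.
    alpha -- the preset adjustment parameter (a small positive number).
    """
    v_max = move_values.max(axis=1)      # maximum movement change value
    v_min = move_values.min(axis=1)      # minimum movement change value
    overall = move_values.mean(axis=1)   # overall change index (mean)
    floating = (v_max - v_min) + alpha   # change range + parameter -> change floating value
    degree_index = floating * v_max      # change degree index
    return degree_index * overall        # product: both indexes positively correlated
```

A stationary pixel (all movement change values zero) gets change degree 0, a uniformly moving pixel a small value, and a pixel with large, fluctuating movement values the highest score.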
Further, the method for acquiring the region category includes:
sorting the pixel points in ascending order of change degree to obtain a pixel change sequence; taking each pixel point in the pixel change sequence in turn as a point to be segmented, calculating the change degree difference between the point to be segmented and its next adjacent pixel point, and normalizing it to obtain the numerical difference index of the point to be segmented;
calculating the average change degree of all pixel points before the pixel point next to the point to be segmented as a first trend index; calculating the average change degree of a preset trend number of pixel points after the point to be segmented as a second trend index; taking the difference between the first trend index and the second trend index as the trend difference index of the point to be segmented;
normalizing the product of the numerical difference index and the trend difference index of the points to be segmented to obtain the segmentation degree of the points to be segmented;
traversing the pixel points in the pixel change sequence in order; when the segmentation degree of the current pixel point is smaller than that of the preceding pixel point, stopping the traversal and taking the current pixel point as a division point; taking all pixel points before the division point as one region category and removing them from the pixel change sequence to obtain a new pixel change sequence; iterating this division to obtain further region categories; when no further division point can be obtained, stopping the iteration and taking all remaining pixel points as one region category.
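The iterative splitting can be sketched as below, under stated assumptions: the segmentation degree is approximated here as (1 − e^(−|ΔQ|)) times the trend difference, since the patent's exact normalization is not fixed; `trend_len` stands for the preset trend number, and the splitting of very short remainders is simplified. All names are hypothetical.

```python
import numpy as np

def split_degrees(degrees, trend_len=3):
    """Iteratively split pixel points, sorted by change degree, into region categories."""
    seq = list(np.argsort(degrees))          # pixel change sequence (ascending)
    categories = []
    while True:
        vals = np.asarray([degrees[i] for i in seq], dtype=float)
        n = len(vals)
        if n <= trend_len + 1:               # too short to split further (simplification)
            categories.append(seq)
            break
        p = np.zeros(n - 1)                  # segmentation degree of each candidate
        for l in range(n - 1):
            num_diff = 1.0 - np.exp(-abs(vals[l] - vals[l + 1]))  # numerical difference index
            first = vals[:l + 1].mean()                           # mean before point l+1
            second = vals[l + 1:l + 1 + trend_len].mean()         # mean of trend_len points after l
            p[l] = num_diff * abs(first - second)                 # product of the two indexes
        cut = None
        for l in range(1, n - 1):
            if p[l] < p[l - 1]:              # segmentation degree drops -> division point
                cut = l
                break
        if cut is None:
            categories.append(seq)
            break
        categories.append(seq[:cut])         # pixels before the division point form a category
        seq = seq[cut:]                      # iterate on the remainder
    return categories
```

On change degrees that cluster around a few levels, the traversal cuts the sorted sequence at the jumps between clusters.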
Further, the method for acquiring the gray scale influence index comprises the following steps:
for any one monitoring image frame, taking each pixel point in the monitoring image frame as a reference pixel point in sequence; calculating the gray difference between the pixel value of the reference pixel point and the average gray value of the monitoring image frame to obtain the gray deviation degree of the reference pixel point in the monitoring image frame;
Calculating the average gray value of the pixel point corresponding to each region category in the monitoring image frame, and obtaining the region gray value of each region category in the monitoring image frame; calculating the difference of the region gray value between the region category of the reference pixel point and each other region category to obtain the region difference of the reference pixel point; taking the average value of all the regional differences of the reference pixel points as the regional deviation degree of the reference pixel points in the monitoring image frame;
and carrying out negative correlation mapping and normalization processing on the product of the gray level deviation degree and the area deviation degree of the reference pixel point in the monitoring image frame, and obtaining the gray level influence index of the reference pixel point in the monitoring image frame.
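A minimal sketch of this per-frame gray influence index follows, assuming `exp(-x)` as the combined negative-correlation mapping and normalization into (0, 1] — the patent does not fix the mapping, and all names are hypothetical:

```python
import numpy as np

def gray_influence(frame: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Gray influence index per pixel for one monitoring frame.

    frame  -- 2-D grayscale image.
    labels -- same-shape map assigning each pixel its region category.
    """
    deviation = np.abs(frame - frame.mean())             # gray deviation degree
    cats = np.unique(labels)
    region_gray = {c: frame[labels == c].mean() for c in cats}  # region gray values
    region_dev = np.zeros_like(frame, dtype=float)
    for c in cats:
        others = [abs(region_gray[c] - region_gray[o]) for o in cats if o != c]
        region_dev[labels == c] = np.mean(others) if others else 0.0
    # negative-correlation mapping + normalization of the product (assumed form)
    return np.exp(-deviation * region_dev)
```

Pixels whose gray value sits close to the frame mean, in regions that differ little from the others, get an index near 1 (low contrast, strong enhancement needed); large deviations push the index toward 0.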
Further, the method for obtaining the enhancement intensity comprises the following steps:
taking the average value of all the variation degrees in the region category of the reference pixel point as a variation influence index of the reference pixel point;
obtaining the enhancement influence degree of the reference pixel point in the monitoring image frame according to the gray scale influence index and the change influence index of the reference pixel point in the monitoring image frame; the gray scale influence index and the change influence index are positively correlated with the enhancement influence;
taking the sum of the enhancement influence degrees of all pixel points in the monitoring image frame as the influence sum value of the monitoring image frame; and calculating the ratio of the enhancement influence degree of the reference pixel point to the influence sum value of the monitoring image frame to obtain the enhancement intensity of the reference pixel point in the monitoring image frame.
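Combining the gray influence index with the per-category mean change degree, the enhancement intensity of one frame can be sketched as below; the function and argument names are assumptions:

```python
import numpy as np

def enhancement_intensity(gray_idx, labels, degrees):
    """Enhancement intensity per pixel in one frame.

    gray_idx -- gray influence index map; labels -- region-category map;
    degrees  -- per-pixel change degree map (all same shape).
    """
    change_influence = np.zeros_like(gray_idx, dtype=float)
    for c in np.unique(labels):
        mask = labels == c
        change_influence[mask] = degrees[mask].mean()  # mean change degree of the category
    influence = gray_idx * change_influence            # positively correlated (product)
    return influence / influence.sum()                 # ratio to the frame's influence sum
```

By construction the intensities of all pixels in a frame sum to 1, so the ratio step distributes a fixed enhancement budget over the frame.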
Further, the method for acquiring the enhanced surveillance video comprises the following steps:
taking each monitoring image frame as an image frame to be enhanced, and calculating, for each pixel point in the image frame to be enhanced, the product of its enhancement intensity and its gray value to obtain the adjustment value of the pixel point; rounding down the sum of the gray value and the adjustment value of the pixel point to obtain the new gray value of the pixel point;
obtaining a final enhanced image frame according to the new gray values of all pixel points in the image frame to be enhanced; and arranging all final enhanced image frames according to a time sequence order to obtain the enhanced monitoring video.
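The frame-update rule above (new gray = ⌊gray + intensity · gray⌋) might be sketched as follows; the clipping to the 8-bit range is an added assumption, since the claim does not state how overflow is handled:

```python
import numpy as np

def enhance_frame(frame: np.ndarray, strength: np.ndarray) -> np.ndarray:
    """Apply per-pixel enhancement: new gray = floor(gray + strength * gray)."""
    adjustment = strength * frame       # per-pixel adjustment value
    out = np.floor(frame + adjustment)  # round the sum down
    return np.clip(out, 0, 255).astype(np.uint8)  # clip to 8-bit range (assumption)
```

Applying this to every frame and re-assembling the frames in time order yields the enhanced monitoring video.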
Further, performing anomaly analysis on the enhanced monitoring video includes:
inputting the enhanced monitoring video into a trained neural network and outputting the abnormality detection result.
Further, the method for obtaining the movement change value includes:
taking two adjacent monitoring image frames as input of an optical flow method, and outputting the two adjacent monitoring image frames as the moving distance of each pixel point between the two adjacent monitoring image frames; and taking the moving distance as a moving change value of each pixel point between two adjacent monitoring image frames.
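The claim prefers an optical flow method, and the detailed description names block matching as an alternative way to estimate the moving distance. As a self-contained illustration of the alternative (optical flow implementations are longer), here is a minimal exhaustive block-matching sketch; the function name, block size and search radius are assumptions:

```python
import numpy as np

def block_match(prev: np.ndarray, curr: np.ndarray, y: int, x: int,
                block: int = 4, search: int = 5) -> float:
    """Estimate the displacement of the block at (y, x) in `prev` by
    exhaustive search in `curr`, minimizing the sum of absolute differences."""
    ref = prev[y:y + block, x:x + block]
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > curr.shape[0] or xx + block > curr.shape[1]:
                continue  # candidate window falls outside the frame
            sad = np.abs(curr[yy:yy + block, xx:xx + block] - ref).sum()
            if sad < best:
                best, best_dy, best_dx = sad, dy, dx
    # movement change value = Euclidean moving distance of the block
    return float(np.hypot(best_dy, best_dx))
```

For a bright patch shifted by (2, 1) pixels between two synthetic frames, the returned moving distance is √5.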
The invention also provides a video AI analysis system, comprising:
The image frame acquisition module is used for acquiring more than two monitoring image frames according to the mineral transportation monitoring video;
the region category acquisition module, used for obtaining a movement change value of each pixel point between every two adjacent monitoring image frames according to the motion change of that pixel point between the two frames; obtaining the change degree of each pixel point according to the fluctuation and overall distribution of all the movement change values corresponding to that pixel point; and classifying the pixel points according to the distribution change trend of the change degrees among all pixel points to obtain the region category of each pixel point;
the pixel point enhancement intensity acquisition module, used for acquiring a gray influence index of each pixel point in each monitoring image frame according to the deviation of its gray value and the gray differences between its region category and the other region categories; and obtaining the enhancement intensity of each pixel point in each monitoring image frame according to the distribution of the change degrees within the pixel point's region category and the gray influence index of that pixel point in the frame;
the video enhancement analysis module, used for performing image enhancement on each monitoring image frame in the mineral transportation monitoring video according to the enhancement intensity of its pixel points to obtain an enhanced monitoring video, and performing anomaly analysis on the enhanced monitoring video.
The present invention also provides a computer readable storage medium storing a computer readable program or instructions that, when executed by a processor, enable implementation of steps in a video AI analysis method according to any one of the above implementations.
The invention has the following beneficial effects:
The method analyses each monitoring image frame and obtains the change degree of each pixel point from its motion changes between frames, reflecting the overall motion of that pixel point across all monitoring image frames. The pixel points are then classified into region categories based on the trend of these motion changes, so that different regions can be enhanced to different degrees according to how their pixel points move between frames. The gray deviation of each pixel point and the gray differences between region categories are further combined to obtain a gray influence index for each pixel point in each monitoring image frame; this accounts for the blurring caused by small gray differences, since regions with lower contrast require stronger enhancement. Combining the distribution of the change degrees with the gray influence index yields the enhancement intensity of each pixel point in each monitoring image frame, giving a more accurate per-pixel enhancement degree that reflects both the gray differences and the motion changes. Finally, an enhanced monitoring video is obtained from the enhancement intensity of each pixel point in each monitoring image frame and used for anomaly analysis. By combining the motion changes and gray conditions of the pixel points, the video is enhanced accurately, the resulting surveillance video is of higher quality, and the anomaly analysis results are more accurate.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a video AI analysis method according to one embodiment of the invention;
fig. 2 is a block diagram of a video AI analysis system according to an embodiment of the invention.
Detailed Description
In order to further describe the technical means and effects adopted by the present invention to achieve the preset purposes, the following detailed description refers to specific implementation, structure, features and effects of a video AI analysis system, method and storage medium according to the present invention with reference to the accompanying drawings and preferred embodiments. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of a video AI analysis system, a method and a storage medium provided by the present invention with reference to the accompanying drawings.
An embodiment of a video AI analysis method:
referring to fig. 1, a flowchart of a video AI analysis method according to an embodiment of the invention is shown, the method includes the following steps:
s1: and acquiring more than two monitoring image frames according to the mineral transportation monitoring video.
During mineral exploitation, abnormal conditions such as falling ore or excessive environmental dust can occur while the mined minerals are transported, so monitoring and analysis are needed. The main means is to capture video of the minerals during transportation with cameras arranged beside the transportation line and to transmit the video to an online monitoring system for analysis, detection and early warning of abnormal conditions. However, when the acquired video has poor definition and brightness, the anomaly-detection accuracy of the monitoring system suffers, so the acquired video needs to be enhanced. Two or more monitoring image frames are acquired from the mineral transportation monitoring video.
In one embodiment of the invention, to ensure the completeness of the video analysis, the duration of the analysed mineral transportation monitoring video is set to 30 seconds, and to ensure the accuracy of the subsequent analysis, every image frame in each second of the video is enhanced: each acquired image frame is converted to grayscale, yielding two or more monitoring image frames. The frame rate is set to 30 frames per second. It should be noted that acquiring image frames from a video and graying images are technical means well known to those skilled in the art, and the specific values may be adjusted by the implementer according to the actual situation, which is not limited herein.
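Graying an RGB frame, as used in this embodiment, is commonly done with the ITU-R BT.601 luma weights; the patent does not specify the conversion, so the following is a sketch under that assumption (function name hypothetical):

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Convert an RGB frame of shape (H, W, 3) to grayscale using
    the ITU-R BT.601 luma weights (0.299 R + 0.587 G + 0.114 B)."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb.astype(float) @ weights
```

Applied to every frame of the 30 s clip at 30 fps, this produces the grayscale monitoring image frames used by the later steps.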
So far, the monitoring image frames in the mineral transportation monitoring video are obtained for enhancement analysis.
S2: obtaining a movement change value of each pixel point between every two adjacent monitoring image frames according to the movement change condition of each pixel point between every two adjacent monitoring image frames; obtaining the change degree of each pixel point according to the change degree and the overall distribution condition of all the movement change values corresponding to each pixel point; and classifying the pixel points according to the distribution change trend of the change degree among all the pixel points to obtain the region category of the pixel points.
During mineral transportation, moving objects appear in the consecutive monitored image frames, so the pixel points also exhibit motion changes. The pixel points can therefore be initially classified by their degree of motion; since the moving parts may be blurrier than the static parts, different regions can then be enhanced to different degrees in combination with their gray-value conditions.
A movement change value of each pixel point between every two adjacent monitoring image frames is obtained according to the motion change of that pixel point between the two frames. Preferably, the two adjacent monitoring image frames are used as the input of an optical flow method, and its output, the moving distance of each pixel point between the two frames, is taken as the movement change value. The optical flow method is a motion estimation method based on gray-scale variation, which estimates the moving distance of a pixel by computing the gray differences and position changes between successive frames. It should be noted that the optical flow method is a technical means well known to those skilled in the art and is not described further here. In other embodiments of the present invention, the moving distance of a pixel point may also be estimated by a block matching method, which is not limited herein.
The movement change value reflects the change of a pixel point: each pixel point has one movement change value between every two adjacent monitoring image frames, and the overall change of the pixel point across the consecutive frames is summarized as its change degree, obtained from the fluctuation and overall distribution of all the movement change values corresponding to that pixel point.
Preferably, each pixel point is taken in turn as the target pixel point, and the average of all movement change values corresponding to the target pixel point is taken as its overall change index, which reflects the overall position change of the target pixel point. Among all the movement change values corresponding to the target pixel point, the difference between the maximum and minimum is taken as the change range of the target pixel point, reflecting how much the pixel's motion fluctuates. Because a pixel point may change uniformly, as when a transport vehicle moves at constant speed, the sum of the change range and a preset adjustment parameter is taken as the change floating value of the target pixel point, so that the floating value still reflects the dispersion of the movement change values. In the embodiment of the invention, the preset adjustment parameter is a positive number set to 0.001; the specific value may be adjusted by the implementer according to the actual situation.
Further, the product of the change floating value of the target pixel point and the maximum of its movement change values is computed as the change degree index of the target pixel point; multiplying by the maximum amplifies the variation in moving distance, so the change degree index reflects the degree of change more distinctly. The change degree of the target pixel point is then obtained from its change degree index and overall change index, with both indexes positively correlated with the change degree.
In the embodiment of the invention, the change degree of a pixel point is expressed (reconstructed here in consistent notation from the definitions above) as:

$$Q_i=\underbrace{\left(\left|\max_j d_{i,j}-\min_j d_{i,j}\right|+\alpha\right)\cdot\max_j d_{i,j}}_{\text{change degree index}}\cdot\underbrace{\frac{1}{V_i}\sum_{j=1}^{V_i} d_{i,j}}_{\text{overall change index}}$$

where $Q_i$ is the change degree of the $i$-th pixel point; $d_{i,j}$ is the $j$-th movement change value corresponding to the $i$-th pixel point; $\max_j d_{i,j}$ and $\min_j d_{i,j}$ are the maximum and minimum movement change values of the $i$-th pixel point; $V_i$ is the total number of movement change values corresponding to the $i$-th pixel point; $\alpha$ is the preset adjustment parameter; and $|\cdot|$ denotes the absolute value. Here $\left|\max_j d_{i,j}-\min_j d_{i,j}\right|$ is the change range of the $i$-th pixel point and $\left|\max_j d_{i,j}-\min_j d_{i,j}\right|+\alpha$ its change floating value. The change degree index and the overall change index are combined as a product, so both are positively correlated with the change degree: when they are larger, the position change of the pixel point is greater in both extent and value, and the pixel point corresponds to a complex moving part. In other embodiments of the present invention, other basic mathematical operations, such as addition, may be used so that the change degree index and the overall change index are both positively correlated with the change degree, which is not limited herein.
The pixel points can therefore be divided into region categories according to their change degrees: when the change degrees of pixel points are similar, they are more likely to belong to the same local region, while the change degrees differ more between different regions. Accordingly, the pixel points are classified by the distribution change trend of the change degrees among all pixel points, giving the region category of each pixel point.
Preferably, all the variation degrees are sequenced from small to large to obtain a pixel variation sequence, and the pixel variation sequence is divided and divided through the variation of the variation degrees. And sequentially taking the pixel points in the pixel change sequence as points to be segmented, calculating the difference of the change degree between the points to be segmented and the adjacent next pixel point, carrying out normalization processing to obtain a numerical value difference index of the points to be segmented, and when the numerical value difference of the change degree between the pixel points is large, indicating that the pixel points do not belong to the same type of pixel points.
Further, the change trend of the change degree among the local pixel points before and after each point is considered, so that the possibility that a pixel point is a division point is judged comprehensively from the overall change. The average value of the change degrees of all pixel points before the next pixel point of the point to be segmented is calculated as the first trend index, and the average value of the change degrees of a preset trend number of pixel points after the point to be segmented is calculated as the second trend index. The difference between the first trend index and the second trend index is taken as the trend difference index of the point to be segmented, which reflects the difference in change degree of the pixel points around the point to be segmented.
Normalizing the product of the numerical difference index and the trend difference index of the point to be segmented gives the segmentation degree of the point to be segmented. In the embodiment of the invention, the expression of the segmentation degree is:

$$P_l=\mathrm{norm}\!\left(\exp\!\left(\left|Q_l-Q_{l+1}\right|\right)\times\left|\frac{1}{m}\sum_{u=1}^{m}Q_{l+1,u}-\frac{1}{r}\sum_{v=1}^{r}Q_{l,v}\right|\right)$$

Wherein, $P_l$ is expressed as the segmentation degree of the l-th pixel point in the pixel change sequence, $Q_l$ as the change degree of the l-th pixel point in the pixel change sequence, $Q_{l+1}$ as the change degree of the (l+1)-th pixel point in the pixel change sequence, $m$ as the total number of all pixel points before the (l+1)-th pixel point in the pixel change sequence, $Q_{l+1,u}$ as the change degree of the u-th pixel point before the (l+1)-th pixel point in the pixel change sequence, $r$ as the preset trend number of pixel points after the l-th pixel point in the pixel change sequence, $Q_{l,v}$ as the change degree of the v-th pixel point after the l-th pixel point in the pixel change sequence, $|\cdot|$ as the absolute value function, $\exp(\cdot)$ as the exponential function based on the natural constant, and $\mathrm{norm}(\cdot)$ as the normalization function. It should be noted that normalization is a technical means well known to those skilled in the art; the normalization function may be linear normalization or standard normalization, and the specific normalization method is not limited herein.

Wherein, $\exp\left(\left|Q_l-Q_{l+1}\right|\right)$ is represented as the numerical difference index of the l-th pixel point in the pixel change sequence, $\frac{1}{m}\sum_{u=1}^{m}Q_{l+1,u}$ as the first trend index of the l-th pixel point in the pixel change sequence, $\frac{1}{r}\sum_{v=1}^{r}Q_{l,v}$ as the second trend index of the l-th pixel point in the pixel change sequence, and their absolute difference as the trend difference index of the l-th pixel point in the pixel change sequence. The larger the numerical difference index and the trend difference index are, the larger the difference in the change degrees of the pixel points distributed on the two sides of the corresponding pixel point in the sequence, the larger the segmentation degree, and the more likely the corresponding pixel point is a segmentation point between categories; the categories are further divided according to the segmentation degree.
In the pixel change sequence, the pixel points are traversed in their arrangement order. Since the change degrees are sorted from small to large and the change degrees of pixel points belonging to the same region category differ little, the segmentation degree first shows an increasing trend. When the segmentation degree of the current pixel point becomes smaller than that of the adjacent previous pixel point, the segmentation possibility has reached its maximum, that is, the difference between the preceding and following change degrees is largest: the preceding pixel points belong to one category and the subsequent pixel points to another. The traversal is stopped, the current pixel point is taken as the segmentation point, all pixel points before the segmentation point are taken as one region category, and the pixel points of that region category are removed from the pixel change sequence to obtain a new pixel change sequence. The division is then iterated on the new sequence, so that each iteration only yields the first segmentation point of the remaining sequence, until no segmentation point can be obtained; the iteration then stops and all remaining pixel points are taken as one region category.
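A minimal sketch of the segmentation-degree computation and the stopping rule, assuming linear normalization and a trend number `r=3` (both choices are assumptions; the patent leaves them open):

```python
import numpy as np

def segmentation_degrees(q, r=3):
    """Segmentation degree for each candidate split position in the sorted
    change-degree sequence q. Linear min-max normalisation and the trend
    number r are assumptions."""
    q = np.asarray(q, dtype=float)
    p = np.zeros(len(q) - 1)
    for l in range(1, len(q)):                      # boundary between q[l-1] and q[l]
        num_diff = np.exp(abs(q[l - 1] - q[l]))     # numerical difference index
        first_trend = q[:l].mean()                  # mean change degree before the next point
        second_trend = q[l:l + r].mean()            # mean of the r points after
        p[l - 1] = num_diff * abs(first_trend - second_trend)  # times trend difference index
    return (p - p.min()) / (p.max() - p.min() + 1e-12)          # linear normalisation

def first_split(p):
    """Traverse in order; stop at the first position whose segmentation
    degree drops below its predecessor's, and return that position."""
    for i in range(1, len(p)):
        if p[i] < p[i - 1]:
            return i
    return None                                     # no split -> one remaining category

q_sorted = sorted([0.1, 0.12, 0.11, 0.9, 0.95, 0.92])  # two clusters of change degrees
p = segmentation_degrees(q_sorted)
```

On this toy sequence the segmentation degree peaks at the boundary between the two clusters and then drops, which is where the traversal stops.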
The classification of the region categories of all pixel points is thus completed, and each region category represents pixel points belonging to a different area in the video. For example, when a transport vehicle transports minerals, the transport vehicle and the minerals on it move continuously, other positions in the mine tunnel such as walls and lights do not move, and dust in the air moves slowly, so the pixel points in the region categories can represent different areas such as the background, the dust, the transport vehicle and the minerals.
S3: in each monitoring image frame, according to the gray value deviation degree of each pixel point and the gray difference condition between the region category of the corresponding pixel point and other region categories, acquiring a gray influence index of each pixel point in the corresponding monitoring image frame; and obtaining the intensity enhancement of each pixel point in each monitoring image frame according to the distribution condition of the change degree of the region category of each pixel point and the gray scale influence index of the corresponding pixel point in each monitoring image frame.
After different area categories are obtained, different pixel points can be analyzed according to the motion possibility and gray level difference of the different area categories, so that different enhancement degrees required by each pixel point in each monitoring image frame are obtained, each frame of image can be better enhanced, and further a better enhanced video is obtained.
First, the gray level difference condition is analyzed. Due to the dim characteristic of the mine transportation environment, the gray level contrast may be insufficiently obvious, so the gray level influence condition of each pixel point in different monitoring image frames is comprehensively analyzed through the gray level difference condition of the region where the pixel point is located and the gray level deviation degree of the pixel point itself. That is, in each monitoring image frame, the gray scale influence index of each pixel point in the corresponding monitoring image frame is obtained according to the gray value deviation degree of each pixel point and the gray difference condition between the region category where the corresponding pixel point is located and the other region categories.
Preferably, for any one monitoring image frame, each pixel point in the monitoring image frame is taken in turn as the reference pixel point, and the same analysis is carried out for every pixel point in every monitoring image frame. The gray difference between the pixel value of the reference pixel point and the average gray value of the monitoring image frame is calculated to obtain the gray level deviation degree of the reference pixel point in the monitoring image frame; when the gray level deviation degree of the reference pixel point is smaller, the gray contrast of the reference pixel point is less obvious.
Further, the average gray value corresponding to each region category in the monitoring image frame is calculated as the region gray value of each region category under the monitoring image frame. The difference of the region gray values between the region category where the reference pixel point is located and each other region category is calculated to obtain the area differences of the reference pixel point, which reflect the overall gray contrast condition of the region of the reference pixel point through the gray difference condition between region categories. The average value of all the area differences of the reference pixel point is taken as the area deviation degree of the reference pixel point; when the area deviation degree is smaller, the gray contrast of the region where the reference pixel point is located is less obvious.
And carrying out negative correlation mapping and normalization processing on the product of the gray level deviation degree of the reference pixel point and the regional deviation degree to obtain the gray level influence index of the reference pixel point in the monitoring image frame. In the embodiment of the invention, the expression of the gray scale influence index is:
$$B_{ja}=\mathrm{norm}\!\left(\exp\!\left(-\left|H_{ja}-\bar{H}_j\right|\times\frac{1}{s}\sum_{c=1}^{s}\left|\bar{G}_{ja}-\bar{G}_{jc}\right|\right)\right)$$

Wherein, $B_{ja}$ is expressed as the gray scale influence index of the a-th pixel point in the j-th monitoring image frame, $H_{ja}$ as the gray value of the a-th pixel point in the j-th monitoring image frame, $\bar{H}_j$ as the average gray value of the j-th monitoring image frame, $\bar{G}_{ja}$ as the region gray value of the region category where the a-th pixel point is located in the j-th monitoring image frame, $s$ as the number of region categories other than the region category where the a-th pixel point is located in the j-th monitoring image frame, $\bar{G}_{jc}$ as the region gray value of the c-th other region category in the j-th monitoring image frame, $|\cdot|$ as the absolute value function, and $\exp(\cdot)$ as the exponential function based on the natural constant.

Wherein, $\left|H_{ja}-\bar{H}_j\right|$ is expressed as the gray level deviation degree of the a-th pixel point in the j-th monitoring image frame, $\left|\bar{G}_{ja}-\bar{G}_{jc}\right|$ as the area difference between the region category of the a-th pixel point and the corresponding c-th other region category in the j-th monitoring image frame, and $\frac{1}{s}\sum_{c=1}^{s}\left|\bar{G}_{ja}-\bar{G}_{jc}\right|$ as the area deviation degree of the a-th pixel point in the j-th monitoring image frame. When the gray level deviation degree and the area deviation degree are smaller, the gray difference between the pixel point and the other regions is smaller, the gray contrast is lower, the characterization of the region is more blurred, the gray scale influence index is larger, and the greater the enhancement degree needed by the pixel point.
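The gray scale influence index can be sketched as below; the gray values are assumed pre-scaled to [0, 1] for the toy example (with raw 0-255 values the exponential would saturate), and linear normalization is assumed:

```python
import numpy as np

def gray_influence(frame, labels):
    """Gray scale influence index B for each pixel of one frame (a sketch;
    gray values pre-scaled to [0, 1], linear normalisation assumed)."""
    frame = frame.astype(float)
    gray_dev = np.abs(frame - frame.mean())               # gray level deviation degree
    cats = np.unique(labels)
    region_gray = {c: frame[labels == c].mean() for c in cats}  # region gray values
    area_dev = np.zeros_like(frame)
    for c in cats:                                        # mean area difference to other categories
        others = [abs(region_gray[c] - region_gray[o]) for o in cats if o != c]
        area_dev[labels == c] = np.mean(others) if others else 0.0
    b = np.exp(-gray_dev * area_dev)                      # negative correlation mapping
    return (b - b.min()) / (b.max() - b.min() + 1e-12)    # linear normalisation

frame = np.array([[0.1, 0.2], [0.8, 0.9]])                # two regions: dark and bright
labels = np.array([[0, 0], [1, 1]])
b = gray_influence(frame, labels)
```

Pixels whose gray value sits closer to the frame mean (lower contrast) receive the larger index, i.e. they call for stronger enhancement.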
Then, the enhancement degree required by each pixel point is obtained by combining the distribution condition of the change degrees among the region categories. The larger the change degree, the more likely the part represented by the pixel point is a motion region, and the motion region is the region whose condition needs to be analyzed, so a larger enhancement degree is required. Therefore, the enhancement strength of each pixel point in each monitoring image frame is obtained according to the distribution condition of the change degrees of the region category where each pixel point is located and the gray scale influence index of the corresponding pixel point in each monitoring image frame.
Preferably, the average value of all the change degrees in the region category where the reference pixel point is located is taken as the change influence index of the reference pixel point, reflecting the distribution condition of the overall change degree of the region category through the average value. The enhancement effect degree of the reference pixel point in the monitoring image frame is obtained according to the gray scale influence index and the change influence index of the reference pixel point in the monitoring image frame; the enhancement effect degree represents the enhancement degree of the pixel point obtained by analyzing the gray scale and the change degree, and the gray scale influence index and the change influence index are both positively correlated with the enhancement effect degree. In the embodiment of the invention, the expression of the enhancement effect degree is:
$$W_{ja}=B_{ja}\times\bar{Q}_{a}$$

Wherein, $W_{ja}$ is denoted as the enhancement effect degree of the a-th pixel point in the j-th monitoring image frame, $B_{ja}$ as the gray scale influence index of the a-th pixel point in the j-th monitoring image frame, and $\bar{Q}_{a}$ as the change influence index of the a-th pixel point, i.e. the average value of all change degrees in the region category where the a-th pixel point is located.
In other embodiments of the present invention, other basic mathematical operations may be used to reflect that the gray scale impact index and the change impact index are both positively correlated with the enhancement impact, such as addition or exponentiation, without limitation.
Further, the accumulated value of the enhancement effect degrees of all pixel points in the monitoring image frame is taken as the influence sum value of the monitoring image frame, and the ratio of the enhancement effect degree of the reference pixel point in the monitoring image frame to the influence sum value is calculated to obtain the enhancement strength of the reference pixel point in the monitoring image frame; the optimal enhancement condition of each pixel point under each monitoring image frame is obtained through its proportion in the corresponding monitoring image frame. In the embodiment of the invention, the expression of the enhancement strength is:

$$R_{ja}=\frac{W_{ja}}{\sum_{b=1}^{z}W_{jb}}$$

Wherein, $R_{ja}$ is represented as the enhancement strength of the a-th pixel point in the j-th monitoring image frame, $W_{ja}$ as the enhancement effect degree of the a-th pixel point in the j-th monitoring image frame, and $z$ as the total number of pixel points in the j-th monitoring image frame.

Wherein, $\sum_{b=1}^{z}W_{jb}$ is expressed as the influence sum value of the j-th monitoring image frame. When the proportion of the pixel point is larger, the gray contrast of the pixel point in the monitoring image frame is less obvious and the pixel point is more likely to correspond to a moving region, so a stronger enhancement degree is required in the monitoring image frame.
The gray scale and the motion change of each pixel point in each monitoring image frame are analyzed, and the intensity enhancement of each pixel point in each monitoring image frame is obtained.
S4: each monitoring image frame in the mineral transportation monitoring video is subjected to image enhancement according to the enhancement degree of the pixel points, so that an enhanced monitoring video is obtained; and carrying out anomaly analysis by enhancing the monitoring video.
Each monitoring image frame is enhanced through the enhancement strength. In one embodiment of the invention, each monitoring image frame is taken in turn as the image frame to be enhanced. For any pixel point in the image frame to be enhanced, the product of the enhancement strength and the gray value of the pixel point is calculated to obtain the adjustment value of the pixel point, and the sum of the gray value and the adjustment value of the pixel point is rounded down to obtain the new gray value of the pixel point. In the embodiment of the invention, the calculation expression of the new gray value of the pixel point is:

$$H'_{ja}=\left\lfloor H_{ja}+H_{ja}\times R_{ja}\right\rfloor$$

Wherein, $H'_{ja}$ is expressed as the new gray value of the a-th pixel point in the j-th monitoring image frame, $H_{ja}$ as the gray value of the a-th pixel point in the j-th monitoring image frame, $R_{ja}$ as the enhancement strength of the a-th pixel point in the j-th monitoring image frame, and $\lfloor\cdot\rfloor$ as the downward rounding function.

Wherein, $H_{ja}\times R_{ja}$ is expressed as the adjustment value of the a-th pixel point in the j-th monitoring image frame. Because environmental factors in mine monitoring make the whole image darker, enhancing the pixel points improves the overall brightness of the image while improving the gray contrast of different areas.
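The per-pixel enhancement step can be sketched as follows; clipping to 255 is an added assumption to keep the result a valid 8-bit gray value, since the formula alone could overflow:

```python
import numpy as np

def enhance_frame(frame, strength):
    """Apply the per-pixel enhancement: new gray value = floor(old value
    + old value x enhancement strength). Clipping to [0, 255] is an
    assumption added on top of the described formula."""
    adjust = frame * strength                  # adjustment value H_ja * R_ja
    out = np.floor(frame + adjust)             # downward rounding
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.array([[100.0, 50.0]])
strength = np.array([[0.3, 0.1]])
out = enhance_frame(frame, strength)
```

Brighter output on every pixel, with larger boosts where the enhancement strength is higher.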
Further, obtaining a final enhanced image frame according to the new gray values of all pixel points in the image frame to be enhanced, and arranging all the final enhanced image frames according to a time sequence, namely arranging each frame in sequence during acquisition to obtain the enhanced monitoring video. The analysis can be further performed according to the enhanced monitoring video, and more accurate analysis results can be obtained, namely, the abnormality analysis is performed through the enhanced monitoring video.
Preferably, the enhanced monitoring video is input into a trained neural network, and the anomaly detection result is output. In the embodiment of the invention, the neural network may be a convolutional neural network (CNN) in deep learning, trained on a sample set of normal transportation videos and abnormal transportation videos prepared in advance. The training process mainly includes two steps, forward propagation and backpropagation: forward propagation passes the input frames through the CNN to obtain a predicted result, and backpropagation updates the network parameters according to the error between the predicted result and the real result. The trained CNN model is obtained through multiple iterations and can output the anomaly detection result according to the features in the transportation video. The anomaly detection result may include the anomaly type, such as a person falling or a transport vehicle fault; the anomaly position, i.e. the location where the anomaly occurs; and the anomaly classification, i.e. the normal/abnormal level classification. It should be noted that the neural network is mainly used to classify, analyze and process the video; the CNN is a technical means well known to those skilled in the art, and various network structures can implement this task, such as combining the CNN with an LSTM, so the specific neural network structure and training process are not described herein.
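As an illustrative sketch only, a toy forward pass of the kind of CNN classifier described above (the kernel values, threshold and single-frame input are assumptions; a real implementation would use a trained deep network over whole video clips):

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution, the basic building block of a CNN (toy sketch)."""
    h, w = k.shape
    out = np.zeros((x.shape[0] - h + 1, x.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
    return out

def forward(frame, kernel, threshold=0.5):
    """Toy forward propagation: convolution -> ReLU -> global average
    pooling -> binary normal/abnormal decision. In the actual method the
    kernel weights are learned by backpropagation on labelled videos."""
    feat = np.maximum(conv2d(frame, kernel), 0.0)   # ReLU activation
    score = feat.mean()                             # global average pooling
    return "abnormal" if score > threshold else "normal"
```

The same convolve-activate-pool-decide pipeline, stacked many layers deep and trained end to end, is what the embodiment delegates to the CNN.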
In summary, the invention analyzes each monitoring image frame and combines the motion change condition of each pixel point between monitoring image frames to obtain the change degree of each pixel point, which reflects the overall motion change condition of each pixel point across all monitoring image frames. The pixel points are classified based on the change trend of the motion change to obtain the region category of each pixel point, so that pixel points are grouped according to their motion condition between monitoring image frames and different enhancement degrees can be applied to different regions. The gray level deviation of each pixel point is further combined with the gray level difference between regions to obtain the gray scale influence index of each pixel point in each monitoring image frame, taking into account the blurring caused by small gray differences: regions with lower contrast need stronger enhancement. The enhancement strength of each pixel point in each monitoring image frame is then obtained by combining the distribution of the change degrees with the gray scale influence index, so that a more accurate enhancement degree is obtained for the different gray difference degrees and motion change conditions of the pixel points in each monitoring image. Finally, the enhanced monitoring video is obtained according to the enhancement degree of each pixel point in each monitoring image and used for anomaly analysis. The method accurately enhances the video by combining the motion change and gray level condition of the pixel points, obtains a higher-quality monitoring video, and makes the anomaly analysis result of the monitoring video more accurate.
Referring to fig. 2, a block diagram of a video AI analysis system according to an embodiment of the invention is shown, the system includes: the image frame acquisition module 101, the region category acquisition module 102, the pixel point enhancement degree acquisition module 103 and the video enhancement analysis module 104.
An image frame acquisition module 101, configured to acquire more than two monitoring image frames according to a mineral transportation monitoring video;
the region category obtaining module 102 is configured to obtain a movement change value of each pixel point between every two adjacent monitored image frames according to a movement change condition of each pixel point between every two adjacent monitored image frames; obtaining the change degree of each pixel point according to the change degree and the overall distribution condition of all the movement change values corresponding to each pixel point; classifying the pixel points according to the distribution change trend of the change degree among all the pixel points to obtain the region category of the pixel points;
the pixel point enhancement degree obtaining module 103 is configured to obtain, in each monitored image frame, a gray scale influence index of each pixel point in the corresponding monitored image frame according to a gray scale value deviation degree of each pixel point and a gray scale difference condition between a region category where the corresponding pixel point is located and other region categories; obtaining the intensity enhancement of each pixel point in each monitoring image frame according to the distribution condition of the change degree of the region category of each pixel point and the gray scale influence index of the corresponding pixel point in each monitoring image frame;
The video enhancement analysis module 104 is configured to perform image enhancement on each monitoring image frame in the mineral transportation monitoring video according to the enhancement degree of the pixel points, so as to obtain an enhanced monitoring video; and carrying out anomaly analysis by enhancing the monitoring video.
It should be noted that: the corresponding video AI analysis system provided in the foregoing embodiment may implement the technical solution described in the foregoing embodiment of the video AI analysis method, and the specific implementation principle of each module or unit may refer to the corresponding content in the foregoing embodiment of the video AI analysis method, which is not described herein again.
The present invention also provides a computer readable storage medium storing a computer readable program or instructions that, when executed by a processor, enable implementation of steps in a video AI analysis method according to any one of the above implementations.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.
Claims (10)
1. A video AI analysis method, the method comprising:
acquiring more than two monitoring image frames according to the mineral transportation monitoring video;
obtaining a movement change value of each pixel point between every two adjacent monitoring image frames according to the movement change condition of each pixel point between every two adjacent monitoring image frames; obtaining the change degree of each pixel point according to the change degree and the overall distribution condition of all the movement change values corresponding to each pixel point; classifying the pixel points according to the distribution change trend of the change degree among all the pixel points to obtain the region category of the pixel points;
in each monitoring image frame, according to the gray value deviation degree of each pixel point and the gray difference condition between the region category of the corresponding pixel point and other region categories, acquiring a gray influence index of each pixel point in the corresponding monitoring image frame; obtaining the intensity enhancement of each pixel point in each monitoring image frame according to the distribution condition of the change degree of the region category of each pixel point and the gray scale influence index of the corresponding pixel point in each monitoring image frame;
Each monitoring image frame in the mineral transportation monitoring video is subjected to image enhancement according to the enhancement degree of the pixel points, so that an enhanced monitoring video is obtained; and carrying out anomaly analysis by enhancing the monitoring video.
2. The video AI analysis method of claim 1, wherein the change degree acquisition method comprises:
sequentially taking each pixel point as a target pixel point, and taking the average value of all movement change values corresponding to the target pixel point as an integral change index of the target pixel point;
taking the difference between the maximum value and the minimum value of the movement change values, among all the movement change values corresponding to the target pixel point, as the change range of the target pixel point; taking the sum of the change range of the target pixel point and a preset adjusting parameter as the change floating value of the target pixel point; calculating the product of the change floating value of the target pixel point and the maximum value of the movement change values corresponding to the target pixel point to obtain the change degree index of the target pixel point; the preset adjusting parameter is a positive number;
obtaining the change degree of the target pixel point according to the change degree index and the overall change index of the target pixel point; the change degree index and the overall change index are positively correlated with the change degree.
3. The video AI analysis method of claim 1, wherein the region class obtaining method includes:
sorting the pixel points according to all the change degrees in order from small to large to obtain a pixel change sequence; sequentially taking the pixel points in the pixel change sequence as points to be segmented, calculating the difference of the change degree between the point to be segmented and the adjacent next pixel point, and carrying out normalization processing to obtain the numerical difference index of the point to be segmented;
calculating the average value of the change degrees of all pixel points before the next pixel point of the point to be segmented as a first trend index; calculating the average value of the change degrees of a preset trend number of pixel points after the point to be segmented as a second trend index; taking the difference between the first trend index and the second trend index as the trend difference index of the point to be segmented;
normalizing the product of the numerical difference index and the trend difference index of the points to be segmented to obtain the segmentation degree of the points to be segmented;
traversing the pixel points in the pixel change sequence according to the arrangement sequence, stopping traversing and taking the current pixel point as a division point when the division degree of the current pixel point is smaller than that of the adjacent previous pixel point, taking all the pixel points in front of the division point as an area category, dividing and removing the pixel points in the area category from the pixel change sequence, obtaining a new pixel change sequence, and carrying out iterative division to obtain the area category; stopping iteration until the segmentation points cannot be obtained, and taking all the rest pixel points as one region category.
4. The video AI analysis method according to claim 1, wherein the method for acquiring the gray scale influence index includes:
for any one monitoring image frame, taking each pixel point in the monitoring image frame as a reference pixel point in sequence; calculating the gray difference between the pixel value of the reference pixel point and the average gray value of the monitoring image frame to obtain the gray deviation degree of the reference pixel point in the monitoring image frame;
calculating the average gray value of the pixel point corresponding to each region category in the monitoring image frame, and obtaining the region gray value of each region category in the monitoring image frame; calculating the difference of the region gray value between the region category of the reference pixel point and each other region category to obtain the region difference of the reference pixel point; taking the average value of all the regional differences of the reference pixel points as the regional deviation degree of the reference pixel points in the monitoring image frame;
and carrying out negative correlation mapping and normalization processing on the product of the gray level deviation degree and the area deviation degree of the reference pixel point in the monitoring image frame, and obtaining the gray level influence index of the reference pixel point in the monitoring image frame.
5. The video AI analysis method of claim 4, wherein the enhancement strength acquisition method comprises:
Taking the average value of all the variation degrees in the region category of the reference pixel point as a variation influence index of the reference pixel point;
obtaining the enhancement influence degree of the reference pixel point in the monitoring image frame according to the gray scale influence index and the change influence index of the reference pixel point in the monitoring image frame; the gray scale influence index and the change influence index are positively correlated with the enhancement influence;
taking the accumulated value of the enhancement effect of all pixel points in the monitoring image frame as the influence sum value of the monitoring image frame; and calculating the ratio of the enhancement influence degree to the influence sum value of the reference pixel point in the monitoring image frame to obtain the enhancement strength of the reference pixel point in the monitoring image frame.
6. The video AI analysis method of claim 1, wherein the method for acquiring the enhanced surveillance video comprises:
taking each monitoring image frame as an image frame to be enhanced, and calculating the product of the enhancement degree and the gray value of any pixel point in the image frame to be enhanced to obtain the adjustment value of the pixel point; the sum value of the gray value and the adjustment value of the pixel point is rounded downwards to obtain a new gray value of the pixel point;
Obtaining a final enhanced image frame according to the new gray values of all pixel points in the image frame to be enhanced; and arranging all final enhanced image frames according to a time sequence order to obtain the enhanced monitoring video.
7. The video AI analysis method of claim 1, wherein the anomaly analysis by enhancing the surveillance video comprises:
and taking the enhanced monitoring video as input, inputting the enhanced monitoring video into a trained neural network, and outputting an abnormality detection result.
8. The video AI analysis method of claim 1, wherein the method of obtaining the motion variance value includes:
taking two adjacent monitoring image frames as input of an optical flow method, and outputting the two adjacent monitoring image frames as the moving distance of each pixel point between the two adjacent monitoring image frames; and taking the moving distance as a moving change value of each pixel point between two adjacent monitoring image frames.
9. A video AI analysis system, comprising:
the image frame acquisition module is used for acquiring more than two monitoring image frames according to the mineral transportation monitoring video;
the region category acquisition module is used for acquiring the movement change value of each pixel point between every two adjacent monitoring image frames according to the movement change condition of each pixel point between the two adjacent monitoring image frames; obtaining the change degree of each pixel point according to the degree of change and the overall distribution of all the movement change values corresponding to each pixel point; and classifying the pixel points according to the distribution change trend of the change degree among all the pixel points to obtain the region category of each pixel point;
the pixel point enhancement degree acquisition module is used for obtaining the gray scale influence index of each pixel point in the corresponding monitoring image frame according to the gray value deviation degree of each pixel point and the gray scale difference between the region category of the pixel point and other region categories; and obtaining the enhancement degree of each pixel point in each monitoring image frame according to the distribution of the change degrees within the region category of each pixel point and the gray scale influence index of the corresponding pixel point in each monitoring image frame;
the video enhancement analysis module is used for carrying out image enhancement on each monitoring image frame in the mineral transportation monitoring video according to the enhancement degrees of the pixel points to obtain an enhanced monitoring video, and carrying out anomaly analysis by means of the enhanced monitoring video.
10. A computer-readable storage medium storing a computer-readable program or instructions which, when executed by a processor, implement the steps of the video AI analysis method as claimed in any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311709746.5A CN117690085B (en) | 2023-12-13 | 2023-12-13 | Video AI analysis system, method and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117690085A (en) | 2024-03-12 |
CN117690085B CN117690085B (en) | 2024-08-27 |
Family
ID=90134835
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311709746.5A Active CN117690085B (en) | 2023-12-13 | 2023-12-13 | Video AI analysis system, method and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117690085B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090060327A1 (en) * | 2007-08-28 | 2009-03-05 | Injinnix, Inc. | Image and Video Enhancement Algorithms |
CN112511790A (en) * | 2019-09-16 | 2021-03-16 | 国网山东省电力公司东营市河口区供电公司 | Coal mine high-speed image acquisition and noise reduction system based on FPGA and processing method |
CN113706439A (en) * | 2021-03-10 | 2021-11-26 | 腾讯科技(深圳)有限公司 | Image detection method and device, storage medium and computer equipment |
CN116797631A (en) * | 2022-03-18 | 2023-09-22 | Tcl科技集团股份有限公司 | Differential area positioning method, differential area positioning device, computer equipment and storage medium |
CN115190442A (en) * | 2022-09-05 | 2022-10-14 | 济南福深兴安科技有限公司 | Mine accurate positioning system based on UWB |
CN115861135A (en) * | 2023-03-01 | 2023-03-28 | 铜牛能源科技(山东)有限公司 | Image enhancement and identification method applied to box panoramic detection |
Non-Patent Citations (2)
Title |
---|
ZHANG WEI ET AL: "Research on image enhancement algorithm for the monitoring system in coal mine hoist", 《MEASUREMENT AND CONTROL》, 1 November 2023 (2023-11-01) * |
XUE GUOHUA: "Image enhancement method based on video monitoring of a fully mechanized working face", 《SAFETY IN COAL MINES》 (《煤矿安全》), 14 October 2021 (2021-10-14) * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118014887A (en) * | 2024-04-10 | 2024-05-10 | 佰鸟纵横科技(天津)有限公司 | Intelligent folding house remote monitoring method based on Internet of things |
CN118014887B (en) * | 2024-04-10 | 2024-06-04 | 佰鸟纵横科技(天津)有限公司 | Intelligent folding house remote monitoring method based on Internet of things |
CN118470071A (en) * | 2024-07-10 | 2024-08-09 | 大连展航科技有限公司 | Intelligent vision monitoring and tracking method and system for general network |
CN118470071B (en) * | 2024-07-10 | 2024-09-17 | 大连展航科技有限公司 | Intelligent vision monitoring and tracking method and system for general network |
CN118571431A (en) * | 2024-07-31 | 2024-08-30 | 天津市肿瘤医院(天津医科大学肿瘤医院) | Deep learning-based endometrial cancer risk screening method |
CN118571431B (en) * | 2024-07-31 | 2024-10-15 | 天津市肿瘤医院(天津医科大学肿瘤医院) | Deep learning-based endometrial cancer risk screening method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN117690085B (en) | Video AI analysis system, method and storage medium | |
CN107437245B (en) | High-speed railway contact net fault diagnosis method based on deep convolutional neural network | |
CN101299275B (en) | Method and device for detecting target as well as monitoring system | |
CN110490842B (en) | Strip steel surface defect detection method based on deep learning | |
CN105044122A (en) | Copper part surface defect visual inspection system and inspection method based on semi-supervised learning model | |
CN112785560A (en) | Air tightness detection water body updating method and system based on artificial intelligence | |
CN115184359A (en) | Surface defect detection system and method capable of automatically adjusting parameters | |
CN113548419A (en) | Belt tearing detection method, device and system based on machine vision image recognition | |
CN113820326B (en) | Defect detection system of long-code zipper | |
CN114772208B (en) | Non-contact belt tearing detection system and method based on image segmentation | |
CN112001299B (en) | Tunnel vehicle finger device and lighting lamp fault identification method | |
CN107610119A (en) | The accurate detection method of steel strip surface defect decomposed based on histogram | |
CN114297264B (en) | Method and system for detecting abnormal fragments of time sequence signals | |
CN116797979A (en) | Small model traffic flow detection method, device and system based on improved YOLOv5 and deep SORT | |
CN117252851B (en) | Standard quality detection management platform based on image detection and identification | |
CN113077423B (en) | Laser selective melting pool image analysis system based on convolutional neural network | |
CN102129689A (en) | Method for modeling background based on camera response function in automatic gain scene | |
CN114049543A (en) | Automatic identification method for scrap steel unloading change area based on deep learning | |
CN116708724B (en) | Sample monitoring method and system based on machine vision | |
CN115988175A (en) | Vending machine material channel monitoring system based on machine vision | |
CN113642473A (en) | Mining coal machine state identification method based on computer vision | |
CN111401104B (en) | Classification model training method, classification method, device, equipment and storage medium | |
CN113642572B (en) | Image target detection method, system and device based on multi-level attention | |
CN118037137B (en) | Method for determining product quality accident number based on convolutional neural network | |
CN118574288B (en) | Lighting parameter detection method and device in night construction lighting |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||