CN117079197A - Intelligent building site management method and system - Google Patents

Intelligent building site management method and system

Info

Publication number
CN117079197A
Authority
CN
China
Prior art keywords
illumination area
pixel point
pixel
channel
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311345497.6A
Other languages
Chinese (zh)
Other versions
CN117079197B (en)
Inventor
高昂
高沛
孙善金
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Chengxiang Construction Group Co ltd
Original Assignee
Shandong Chengxiang Construction Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Chengxiang Construction Group Co ltd
Priority to CN202311345497.6A
Publication of CN117079197A
Application granted
Publication of CN117079197B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image processing, in particular to an intelligent building site management method and system, comprising the following steps: acquiring a gray level image of a monitoring video of a construction site; dividing a gray level image of a monitoring video of a construction site into an illumination area and a non-illumination area; obtaining the target degree of each pixel point in the illumination area; obtaining the similarity between the pixel points of the illumination area and the neighborhood pixel points; obtaining enhanced pixel points of the illumination area; obtaining a channel value of each pixel point in the illumination area after enhancement according to the enhanced neighborhood pixel point of each pixel point in the illumination area and the similarity between each pixel point in the illumination area and the neighborhood pixel point; obtaining enhanced images of the illuminated region and the non-illuminated region; and combining the enhanced image of the illumination area and the enhanced image of the non-illumination area to obtain an enhanced image of the site monitoring video gray level image, and carrying out safety monitoring and management of site construction according to the enhanced image. The invention improves the efficiency of site management.

Description

Intelligent building site management method and system
Technical Field
The invention relates to the technical field of image processing, in particular to an intelligent building site management method and system.
Background
Intelligent building site management uses information and communication technology to manage the various activities of a construction site. Construction site data are collected and analyzed, the analysis results are used to adjust and optimize the allocation of site resources, and the safety condition of the site is monitored in real time, which improves the safety and accident-prevention capability of the site. The development of computer vision provides a convenient way to detect construction site safety in intelligent site management: site images can be acquired and safety risks assessed in real time through site monitoring. However, because construction takes place at flexible times, illumination casts shadows onto the monitored images, and the gray values of shadowed and non-shadowed areas of the same background differ greatly, so real-time safety risk assessment based on the site images may be inaccurate.
Linear transformation is a common method for enhancing image quality, but because different locations of a construction site are affected differently by illumination, the degree of shadowing in the acquired construction image varies, and enhancing the image with a single global linear transformation may cause over-enhancement or under-enhancement. The present invention therefore enhances the construction image region by region.
Disclosure of Invention
The invention provides an intelligent building site management method and system, which are used for solving the existing problems.
The intelligent building site management method and system provided by the invention adopt the following technical scheme:
one embodiment of the invention provides an intelligent building site management method, which comprises the following steps:
collecting a site monitoring video image, and carrying out graying treatment to obtain a site monitoring video gray image;
dividing the site monitoring video gray image into an illumination area and a non-illumination area according to a clustering algorithm; obtaining the replaced gray value of each pixel point in the illumination area according to the range of the channel values of each pixel point in the illumination area; obtaining the target degree of each pixel point in the illumination area according to the replaced gray value of each pixel point in the illumination area; obtaining the neighborhood pixel points of each pixel point in the illumination area, and obtaining the similarity between each pixel point in the illumination area and its neighborhood pixel points according to those neighborhood pixel points; enhancing the pixel points of the illumination area according to the target degree of each pixel point in the illumination area, and obtaining the enhanced neighborhood pixel points of each pixel point in the illumination area; obtaining the enhanced channel values of each pixel point in the illumination area according to its enhanced neighborhood pixel points and the similarity between each pixel point in the illumination area and its neighborhood pixel points; obtaining the enhanced gray value of each pixel point in the illumination area according to its enhanced channel values; obtaining the enhanced image of the illumination area according to the enhanced gray values of all the pixel points in the illumination area;
determining a linear transformation enhancement coefficient of a non-illumination area according to the enhancement image of the illumination area; enhancing the non-illumination region according to the linear transformation enhancement coefficient of the non-illumination region to obtain an enhanced image of the non-illumination region; combining the enhanced image of the illumination area and the enhanced image of the non-illumination area to obtain an enhanced image of the site monitoring video gray level image;
and carrying out safety monitoring of construction site operations according to the enhanced image of the site monitoring video gray image.
Preferably, the dividing the gray level image of the monitoring video of the construction site into an illumination area and a non-illumination area according to a clustering algorithm comprises the following specific steps:
clustering a plurality of block areas of the site monitoring video gray level image according to a clustering algorithm to obtain two clusters of the site monitoring video gray level image, marking an area where the cluster with high gray level average value is located as an illumination area, and marking an area where the cluster with low gray level average value is located as a non-illumination area.
Preferably, the obtaining the replaced gray value of each pixel point in the illumination area according to the range of the channel values of each pixel point in the illumination area includes the following specific method:
For a pixel point $i$ of the illumination area, the gray value of the pixel point is replaced by the range of its channel values; the specific calculation formula is:

$$g_i' = \max_{c\in\{R,G,B\}} I_i^{c} - \min_{c\in\{R,G,B\}} I_i^{c}$$

wherein $g_i'$ denotes the replaced gray value of pixel point $i$, $I_i^{c}$ is the channel value of pixel point $i$ in channel $c$, $\max_c I_i^{c}$ and $\min_c I_i^{c}$ denote the maximum and minimum of its three channel values, and the channel $c$ refers to any one of the $R$, $G$ and $B$ channels of the site monitoring video image.
Preferably, the method for obtaining the target degree of each pixel point in the illumination area according to the gray value of each pixel point in the illumination area after replacement includes the following specific steps:
Combining, for each pixel point $i$, the differences between its channel values and the maximum gray value, the target degree of each pixel point in the illumination area is calculated as:

$$T_i = \frac{\tfrac{1}{3}\sum_{c\in\{R,G,B\}}\left(g_{\max} - I_i^{c}\right)}{g_i' + 1}$$

wherein $T_i$ denotes the target degree of pixel point $i$ in the site monitoring video gray image, $I_i^{c}$ denotes the channel value of pixel point $i$ in channel $c$, $g_{\max}$ is the maximum gray value in the site monitoring video gray image, and $g_i'$ denotes the replaced gray value of pixel point $i$; adding 1 to the denominator prevents division by zero.
Preferably, the method for obtaining the similarity between each pixel point in the illumination area and its neighboring pixel point according to the neighboring pixel point of each pixel point in the illumination area includes the following specific steps:
A window of fixed size (side length $L$) is constructed with each pixel point of the illumination area as its central pixel point, and the pixel points in the window other than the central pixel point are taken as the neighborhood pixel points of that pixel point. The similarity between each pixel point $i$ of the illumination area and one of its neighborhood pixel points $j$ is:

$$S_{i,j} = \exp\!\left(-\frac{1}{3}\sum_{(a,b)}\left|\frac{I_i^{a}+1}{I_j^{a}+1}\cdot\frac{I_j^{b}+1}{I_i^{b}+1} - 1\right|\right)$$

wherein $S_{i,j}$ denotes the similarity between pixel point $i$ and its neighborhood pixel point $j$, $I_i^{a}$ and $I_j^{a}$ denote the channel values of pixel points $i$ and $j$ in channel $a$, $I_i^{b}$ and $I_j^{b}$ denote their channel values in channel $b$, the sum runs over the pairs of distinct channels $(a,b)$ among the $R$, $G$ and $B$ channels of the site monitoring video gray image, and 1 is added to the channel values to prevent a denominator of 0.
Preferably, the enhancing the pixel points of the illumination area according to the target degree of each pixel point in the illumination area, and obtaining the enhanced neighborhood pixel point of each pixel point in the illumination area, includes the following specific methods:
The pixel point with the smallest target degree in the illumination area is taken as the starting point, the pixel points of the illumination area are enhanced in order of target degree from small to large, and, within the fixed-size window of each pixel point, the neighborhood pixel points whose target degree is smaller than that of the central pixel point are recorded as its enhanced neighborhood pixel points.
Preferably, the method for obtaining the enhanced channel value of each pixel in the illumination area according to the enhanced neighborhood pixel of each pixel in the illumination area and the similarity between each pixel in the illumination area and the neighborhood pixel thereof includes the following specific calculation formulas:
$$\hat I_i^{c} = I_i^{c} + \frac{1}{n}\sum_{k=1}^{n} e^{\,S_{i,k}-1}\left(\hat I_{k}^{c} - I_i^{c}\right)$$

wherein $\hat I_i^{c}$ is the enhanced channel value of pixel point $i$ in channel $c$, $\hat I_k^{c}$ is the enhanced channel value of the $k$-th enhanced neighborhood pixel point of pixel point $i$ in channel $c$, $I_i^{c}$ denotes the original channel value of pixel point $i$ in channel $c$ in the site monitoring video gray image, $S_{i,k}$ denotes the similarity between pixel point $i$ and its $k$-th enhanced neighborhood pixel point, $n$ denotes the number of enhanced neighborhood pixel points of pixel point $i$, and $e$ denotes the exponential function with the natural constant as its base.
Preferably, the specific calculation formula for determining the linear transformation enhancement coefficient of the non-illumination area according to the enhanced image of the illumination area is as follows:
$$k^{c} = \frac{\bar{\hat I}_{\text{light}}^{\,c}}{\bar I_{\text{dark}}^{\,c}}$$

wherein $\bar{\hat I}_{\text{light}}^{\,c}$ is the mean of the enhanced channel values of the illumination area in channel $c$, $\bar I_{\text{dark}}^{\,c}$ is the mean of the channel values of the non-illumination area in channel $c$ before enhancement, and $k^{c}$ is the linear transformation enhancement coefficient of the non-illumination area in channel $c$.
Preferably, the method for safety monitoring of construction site according to the enhanced image of the video gray level image of the construction site comprises the following specific steps:
Each frame of the collected site monitoring video is enhanced, the positions of targets in the enhanced image of the site monitoring video gray image are obtained with a neural network, whether the movement trajectories of the targets intersect within a preset time is judged according to the target positions, and a safety warning is issued if the trajectories intersect.
The invention provides an intelligent building site management system, which comprises an image preprocessing module, an illumination area enhancement module, a non-illumination area enhancement module and an implementation module, wherein:
the image preprocessing module is used for acquiring a site monitoring video image and carrying out graying processing to obtain a site monitoring video gray image;
The illumination area enhancement module divides the site monitoring video gray image into an illumination area and a non-illumination area according to a clustering algorithm; obtains the replaced gray value of each pixel point in the illumination area according to the range of the channel values of each pixel point in the illumination area; obtains the target degree of each pixel point in the illumination area according to the replaced gray value of each pixel point in the illumination area; obtains the neighborhood pixel points of each pixel point in the illumination area, and obtains the similarity between each pixel point in the illumination area and its neighborhood pixel points according to those neighborhood pixel points; enhances the pixel points of the illumination area according to the target degree of each pixel point in the illumination area, and obtains the enhanced neighborhood pixel points of each pixel point in the illumination area; obtains the enhanced channel values of each pixel point in the illumination area according to its enhanced neighborhood pixel points and the similarity between each pixel point in the illumination area and its neighborhood pixel points; obtains the enhanced gray value of each pixel point in the illumination area according to its enhanced channel values; and obtains the enhanced image of the illumination area according to the enhanced gray values of all the pixel points in the illumination area;
the non-illumination region enhancement module is used for determining linear transformation enhancement coefficients of the non-illumination region according to the enhancement image of the illumination region; enhancing the non-illumination region according to the linear transformation enhancement coefficient of the non-illumination region to obtain an enhanced image of the non-illumination region; combining the enhanced image of the illumination area and the enhanced image of the non-illumination area to obtain an enhanced image of the site monitoring video gray level image;
and the implementation module is used for carrying out safety monitoring of construction site operations according to the enhanced image of the site monitoring video gray image.
The technical scheme of the invention has the following beneficial effects: the site monitoring video gray image is divided into an illumination area and a non-illumination area according to a clustering algorithm, and the image is partitioned into blocks before clustering, which avoids the confusion caused by grouping all shadow areas together; the gray value of each pixel point of the illumination area is replaced by the range of its channel values, which makes the pixel points of the shadow areas within the illumination area stand out in the image; the enhancement order of the pixel points of the illumination area is determined by their target degree, and pixel points with a small target degree are enhanced before those with a large target degree, so that pixel points that have already been enhanced serve as references for enhancing the remaining ones and the contrast between pixel points of the same background does not become too large; the enhanced channel values of each pixel point of the illumination area are obtained from its already-enhanced neighborhood pixel points and from its similarity to its neighborhood pixel points, so the enhancement of every pixel point adapts to the difference between the pixel point and its neighborhood, avoiding over-enhancement or under-enhancement of the illumination area; and the linear transformation enhancement coefficient of the non-illumination area is determined from the enhanced image of the illumination area, which keeps the gray distribution of the whole non-illumination area uniform while keeping the resulting enhanced image of the site monitoring video gray image smooth.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of steps of an intelligent worksite management method of the present invention;
FIG. 2 is a block diagram of an intelligent worksite management system of the present invention;
FIG. 3 is a gray scale image of a worksite monitoring video of an intelligent worksite management method of the present invention;
FIG. 4 is a gray scale image of a worksite monitoring video image after replacement in accordance with one embodiment of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the present invention to achieve its intended purpose, the specific implementation, structure, features and effects of the intelligent building site management method and system according to the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the intelligent building site management method and system provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of steps of a smart worksite management method according to an embodiment of the present invention is shown, the method includes the steps of:
s001, collecting a site monitoring video image, and preprocessing to obtain a site monitoring video gray level image.
It should be noted that, the capture of the monitoring video image of the construction site is greatly affected by weather, so that extreme weather should be avoided in order to prevent the image from being too dark or too noisy, and the monitoring video image is captured under the condition of sufficient illumination. In order to facilitate subsequent processing, the gray scale processing is performed on the site monitoring video image, so as to obtain the site monitoring video gray scale image, as shown in fig. 3.
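As a point of reference for this preprocessing step, the short sketch below reads one frame of a monitoring video with OpenCV and converts it to a gray image; the video file name and the single-frame handling are illustrative assumptions, not part of the embodiment.

```python
import cv2

# Read one frame of the worksite monitoring video and convert it to a gray image.
# "site_monitoring.mp4" is an assumed file name used only for illustration.
cap = cv2.VideoCapture("site_monitoring.mp4")
ok, frame_bgr = cap.read()                     # H x W x 3 array, BGR channel order
cap.release()
if not ok:
    raise RuntimeError("could not read a frame from the monitoring video")

gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)   # site monitoring video gray image
```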
S002, dividing the gray level image of the building site monitoring video into an illumination area and a non-illumination area according to a K-Means clustering algorithm, enhancing the illumination area, and obtaining an enhanced image of the illumination area.
It should be noted that, due to the influence of illumination and the relatively complex structure of the construction site, a large number of shadow areas may be generated, and meanwhile, the construction site usually occupies a large area and is complex in distribution, so that there may be a large-area non-illumination area, and the gray level difference of the non-illumination area only refers to the gray level difference between the non-illumination area and the background area, that is, the shadow area formed by the non-illumination area is only caused by the complex structure of the construction site. However, for the illumination area, due to illumination influence and complexity of the construction site structure, equipment and the like placed on the construction site generate a shadow area on the ground, so that the shadow area of the illumination area in the video gray scale image monitored on the construction site can be a shadow area formed on the construction site, or can be a shadow area formed by illumination influence. In order to facilitate analysis of shadow areas in different areas, the gray scale image of the site monitoring video should be first divided into an illuminated area and a non-illuminated area.
If the K-means clustering algorithm is applied directly to divide the site monitoring video gray image into an illumination area and a non-illumination area, all shadow areas may be grouped together, which would confuse the subsequent analysis of the shadow areas of different regions; this embodiment therefore partitions the site monitoring video gray image into blocks first.
In this embodiment the site monitoring video gray image is partitioned into blocks of a preset size; the value used here is for illustration only and is not limiting, and other embodiments may set it according to the actual situation. The gray mean of every block area is then calculated, the number of clusters of the K-means clustering algorithm is preset to 2, and the two block areas with the smallest and the largest gray mean are selected as the initial cluster centers. Each block area is assigned to the nearest cluster, the cluster centers are updated according to the assignment, and the iteration stops when the cluster centers no longer change, so that the site monitoring video gray image is divided into two clusters. The gray means of the two clusters are calculated; the cluster with the higher gray mean is marked as the illumination area and the cluster with the lower gray mean as the non-illumination area.
Thus, the illumination area and the non-illumination area of the gray level image of the monitoring video of the construction site are obtained.
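A minimal sketch of this block-wise division is given below, assuming a 16x16 block size (the embodiment's preset value is not reproduced in the text) and using scikit-learn's KMeans with the smallest and largest block means as initial centers; it illustrates the idea rather than reproducing the patent's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def split_illumination(gray, block=16):
    """Cluster block-wise gray means into two clusters and return a boolean
    mask that is True for pixels of the illumination area."""
    h, w = gray.shape
    bh, bw = h // block, w // block
    # gray mean of every block area
    means = gray[:bh * block, :bw * block].reshape(bh, block, bw, block).mean(axis=(1, 3))
    feats = means.reshape(-1, 1)

    # initial centers: the block areas with the smallest and largest gray mean
    init = np.array([[feats.min()], [feats.max()]])
    labels = KMeans(n_clusters=2, init=init, n_init=1).fit_predict(feats)

    # the cluster with the higher gray mean is the illumination area
    light = 0 if feats[labels == 0].mean() > feats[labels == 1].mean() else 1
    block_mask = (labels == light).reshape(bh, bw).astype(np.uint8)

    # expand the block labels back to pixel resolution
    mask = np.zeros_like(gray, dtype=bool)
    mask[:bh * block, :bw * block] = np.kron(block_mask, np.ones((block, block), np.uint8)).astype(bool)
    return mask
```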
Because the degree of shadowing within the illumination area varies, enhancing it with a single global linear transformation would cause over-enhancement or under-enhancement, so the pixel points of the illumination area need to be enhanced to different extents. In an RGB image, a pixel whose three channel values are equal is gray, and since the color of shadow areas in the site monitoring video image is generally close to gray, the three channel values of shadow-area pixels are similar and low; replacing the gray value of every pixel point with the range of its three channel values therefore visually enhances the contrast between the shadow areas and the other areas.
Therefore, in this embodiment, to make the pixel points of the shadow areas of the illumination area stand out in the image, the gray value of each pixel point of the illumination area is replaced by the difference between the maximum and minimum of its three channel values; the specific calculation formula is:

$$g_i' = \max_{c\in\{R,G,B\}} I_i^{c} - \min_{c\in\{R,G,B\}} I_i^{c}$$

wherein $g_i'$ denotes the replaced gray value of pixel point $i$, $I_i^{c}$ is the channel value of pixel point $i$ in channel $c$, $\max_c I_i^{c}$ and $\min_c I_i^{c}$ denote the maximum and minimum of its three channel values, and the channel $c$ refers to any one of the $R$, $G$ and $B$ channels of the site monitoring video image.
The replaced gray value of each pixel point in the illumination area is obtained through this replacement operation, giving the replaced gray image of the site monitoring video image shown in fig. 4; the replaced gray value of pixel point $i$ in this image is denoted $g_i'$.
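A short sketch of the replacement operation is given below, assuming a BGR frame (OpenCV channel order) and the illumination mask from the clustering step; letting non-illumination pixels keep their original gray value here is purely for display and is an assumption.

```python
import numpy as np

def replace_with_channel_range(bgr, gray, light_mask):
    """g_i' = max_c I_i^c - min_c I_i^c for illumination-area pixels;
    pixels outside the illumination area keep their original gray value."""
    c = bgr.astype(np.int16)
    rng = (c.max(axis=2) - c.min(axis=2)).astype(np.uint8)   # per-pixel channel range
    return np.where(light_mask, rng, gray)
```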
It should be noted that, to prevent over-enhancement or under-enhancement, the shadow areas of the illumination area need a larger degree of enhancement. After the gray value of each pixel point of the illumination area has been replaced by the range of its channel values, the gray values of shadow-area pixel points are small, so the degree of enhancement of each pixel point in the illumination area can be determined from its replaced gray value; this degree is quantified for every pixel point of the illumination area by the target degree.
It should be further noted that, for shadow areas caused by illumination, assigning a high target degree from the three-channel range alone is inaccurate, because some strongly illuminated areas are close to white and their three-channel range may also be small. To avoid this confusion, the mean of the differences between the maximum gray value and the three channel values is used as a weighting term, so that pixel points whose three channel values are all small receive a higher weight and a more accurate target degree is obtained for each pixel point of the illumination area. The specific calculation formula is:

$$T_i = \frac{\tfrac{1}{3}\sum_{c\in\{R,G,B\}}\left(g_{\max} - I_i^{c}\right)}{g_i' + 1}$$

wherein $T_i$ denotes the target degree of pixel point $i$ in the site monitoring video gray image, $I_i^{c}$ denotes the channel value of pixel point $i$ in channel $c$, $g_{\max}$ is the maximum gray value in the site monitoring video gray image, and $g_i'$ denotes the replaced gray value of pixel point $i$; to prevent the denominator from being 0, this embodiment uses $g_i' + 1$ as the denominator.

It should be noted that when the denominator $g_i' + 1$ is small, the channel values of the pixel point are similar; but because of the strong illumination of the illumination area, some of its regions are nearly white and the channel values of their pixel points are also similar. A pixel point with a small denominator may therefore belong either to a shadow area of the illumination area or to an over-illuminated region. To avoid assigning a large target degree to over-illuminated pixel points, the mean of the differences between the maximum gray value and the three channel values, $\tfrac{1}{3}\sum_{c}(g_{\max}-I_i^{c})$, limits the target degree of over-illuminated pixel points. When the denominator $g_i' + 1$ is large, the differences among the three channel values of the pixel point are large and the pixel point is less likely to belong to a shadow area, so its target degree is correspondingly smaller.
Thus, the target degree of each pixel point of the illumination area in the gray level image of the monitoring video of the construction site is obtained.
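The target degree can then be computed per pixel; the sketch below follows the reconstructed formula above, and the +1 guard in the denominator is taken from the embodiment's description.

```python
import numpy as np

def target_degree(bgr, gray, replaced):
    """T_i = (1/3) * sum_c (g_max - I_i^c) / (g_i' + 1).

    g_max is the maximum gray value of the site monitoring video gray image
    and g_i' is the replaced gray value of pixel i."""
    g_max = float(gray.max())
    chan = bgr.astype(np.float64)
    numer = (g_max - chan).mean(axis=2)          # mean difference over the three channels
    return numer / (replaced.astype(np.float64) + 1.0)
```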
It should be noted that, the channel values of the pixels in the same background in the image are generally increased or decreased in parallel, so that the channel values of the pixels in the shadow area of the illumination area in the video gray scale image are increased or decreased together.
For a pixel point $i$ of the illumination area, a window of fixed size (side length $L$; the value used in this embodiment is for illustration only) is constructed with $i$ as the central pixel point. The pixel points in the window other than the central pixel point are called the neighborhood pixel points of pixel point $i$; for a neighborhood pixel point $j$, the similarity between pixel point $i$ and $j$ is calculated as:

$$S_{i,j} = \exp\!\left(-\frac{1}{3}\sum_{(a,b)}\left|\frac{I_i^{a}+1}{I_j^{a}+1}\cdot\frac{I_j^{b}+1}{I_i^{b}+1} - 1\right|\right)$$

wherein $S_{i,j}$ denotes the similarity between pixel point $i$ and its neighborhood pixel point $j$, $I_i^{a}$ and $I_j^{a}$ denote the channel values of pixel points $i$ and $j$ in channel $a$, $I_i^{b}$ and $I_j^{b}$ denote their channel values in channel $b$, the sum runs over the pairs of distinct channels $(a,b)$ among the $R$, $G$ and $B$ channels of the site monitoring video gray image, and 1 is added to the channel values to prevent a denominator of 0.

If the channel values of pixel point $i$ and its neighborhood pixel point $j$ increase or decrease in parallel, corresponding relationships exist between the channel values of the two pixel points in the other channels, so the similarity of the two pixel points is quantified by the ratio of their channel values across different channels: the closer this ratio is to 1, the stronger the similarity of the two pixel points and the more likely they belong to the same background area; the farther the ratio is from 1, the weaker their similarity and the less likely they belong to the same background area.
So far, the similarity between the pixel point and the neighborhood pixel point is obtained.
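A sketch of the pairwise similarity follows, implementing the reconstructed formula above; since the published equation is an image not reproduced in the text, the exact form is an assumption consistent with the description that channel values of pixels under the same background shift in parallel, so the ratio of their cross-pixel channel ratios should stay near 1.

```python
import numpy as np

def pair_similarity(p, q):
    """Similarity S_{i,j} between pixel p (= i) and neighbourhood pixel q (= j),
    each given as a 3-vector of channel values; +1 keeps every denominator
    positive, following the embodiment's guard against division by zero."""
    p = p.astype(np.float64) + 1.0
    q = q.astype(np.float64) + 1.0
    dev = 0.0
    for a, b in [(0, 1), (0, 2), (1, 2)]:        # the three channel pairs
        dev += abs((p[a] / q[a]) * (q[b] / p[b]) - 1.0)
    return float(np.exp(-dev / 3.0))
```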
It should be noted that, the target degree formula gives a larger target degree to the shadow region of the illumination region, and the target degree of the normal region is relatively smaller, but only using the target degree as the enhancement coefficient of the pixel point results in large difference of gray values after the pixel point is enhanced, and the gray values of the pixel points in the same background region cannot be made similar. By adopting an iteration idea, the pixel points can be enhanced according to the sequence from small to large in the target degree of the pixel points, and then the similarity between the pixel points and the enhanced pixel points in the neighborhood is quantified.
It should be further noted that, if the similarity between the pixel and the pixel that has been enhanced in the neighborhood is closer to 1, the probability that the pixel and the pixel that has been enhanced in the neighborhood are in the same background is higher, and when the gray value of the pixel is adjusted by using the difference between the pixel and the pixel that has been enhanced in the neighborhood, the gray value of the pixel needs to be adjusted to be closer to the gray value of the pixel that has been enhanced in the neighborhood.
In this embodiment, after the target degree of each pixel point of the illumination area has been obtained, the pixel points are enhanced in order of target degree from small to large. For any pixel point $i$ of the illumination area, a window of fixed size is constructed with $i$ as the central pixel point (the window size used in this embodiment is for illustration only); among the neighborhood pixel points of the central pixel point, those whose target degree is smaller than that of the central pixel point are called target neighborhood pixel points, and their number is denoted $n$.
Because the pixel points are enhanced in order of target degree from small to large, the enhanced channel value of every target neighborhood pixel point of a central pixel point is already available when the central pixel point is processed; the enhanced channel value of the $k$-th target neighborhood pixel point in channel $c$ is denoted $\hat I_k^{c}$, $k = 1, \dots, n$. The channel value of the central pixel point can therefore be adjusted according to the enhanced channel values of its target neighborhood pixel points, which brings the enhanced channel value of pixel point $i$ closer to the enhanced channel values of the pixel points of the same background in its neighborhood and ensures local uniformity after enhancement.
Therefore, in this embodiment, after the enhanced channel values of the target neighborhood pixel points of the central pixel point of each window have been obtained, the central pixel point is enhanced on the basis of its original channel value by quantifying its similarity to the target neighborhood pixel points and the differences between their enhanced channel values and its own, giving the enhanced channel value of pixel point $i$ in channel $c$:

$$\hat I_i^{c} = I_i^{c} + \frac{1}{n}\sum_{k=1}^{n} e^{\,S_{i,k}-1}\left(\hat I_{k}^{c} - I_i^{c}\right)$$

wherein $\hat I_i^{c}$ is the enhanced channel value of pixel point $i$ in channel $c$, $\hat I_k^{c}$ is the enhanced channel value of the $k$-th target neighborhood pixel point of pixel point $i$ in channel $c$, $I_i^{c}$ is the original channel value of pixel point $i$ in channel $c$, $S_{i,k}$ denotes the similarity between pixel point $i$ and its $k$-th target neighborhood pixel point, $n$ denotes the number of target neighborhood pixel points of pixel point $i$, and $e$ denotes the exponential function with the natural constant as its base.
In this formula, $\hat I_k^{c} - I_i^{c}$ is the difference between the enhanced channel value of a target neighborhood pixel point in channel $c$ and the original channel value of the central pixel point $i$ in channel $c$; it quantifies the difference between the central pixel point and the already-enhanced pixel points in its neighborhood. If the difference is small, the central pixel point is adjusted little; if the difference is large, it is adjusted more.
It should be further noted that when the difference between the enhanced target neighborhood pixel points and the central pixel point is used to determine the adjustment of the central pixel point, the similarity between the central pixel point and those target neighborhood pixel points must be taken into account: the larger the similarity $S_{i,k}$, the closer it is to 1 and the closer the weight $e^{\,S_{i,k}-1}$ is to 1. Using $e^{\,S_{i,k}-1}$ as the weight of the difference between the central pixel point and an enhanced target neighborhood pixel point therefore gives a weight close to 1 when the similarity is high. Finally, the mean of the weighted channel-value differences between the central pixel point and all of its target neighborhood pixel points is added to the original channel value of pixel point $i$, giving the enhanced channel value $\hat I_i^{c}$ of the central pixel point in channel $c$.
The enhanced channel values of the central pixel point $i$ in every channel are thus obtained; each of the three channel values ($R$, $G$ and $B$) of pixel point $i$ is then given a weight of 1/3, and the weighted sum gives the enhanced gray value of each pixel point, from which the enhanced image of the illumination area in the site monitoring video gray image is obtained.
So far, the enhanced image of the illumination area in the gray level image of the monitoring video of the construction site is obtained.
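The ordered, neighbourhood-guided enhancement can be sketched as below; the use of `pair_similarity` from the previous sketch and the equal 1/3 channel weights follow the description, while the 3x3 window size is an assumption since the embodiment's value is not reproduced. The double loop is written for clarity rather than speed.

```python
import numpy as np

def enhance_illumination(bgr, light_mask, T, win=3):
    """Enhance illumination-area pixels in ascending order of target degree T,
    adjusting each pixel toward its already-enhanced neighbours weighted by
    exp(S - 1).  Returns the enhanced channels and the enhanced gray image."""
    h, w, _ = bgr.shape
    out = bgr.astype(np.float64).copy()
    done = np.zeros((h, w), dtype=bool)
    r = win // 2

    ys, xs = np.nonzero(light_mask)
    order = np.argsort(T[ys, xs])                      # smallest target degree first
    for y, x in zip(ys[order], xs[order]):
        acc, cnt = np.zeros(3), 0
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                ny, nx = y + dy, x + dx
                if (dy, dx) == (0, 0) or not (0 <= ny < h and 0 <= nx < w):
                    continue
                if not done[ny, nx]:                   # only already-enhanced neighbours
                    continue
                s = pair_similarity(bgr[y, x], bgr[ny, nx])
                acc += np.exp(s - 1.0) * (out[ny, nx] - bgr[y, x])
                cnt += 1
        if cnt:
            out[y, x] = bgr[y, x] + acc / cnt
        done[y, x] = True

    gray_out = out.mean(axis=2)                        # equal 1/3 weight per channel
    return out, gray_out
```

Here `bgr` is the original frame, `light_mask` comes from `split_illumination` and `T` from `target_degree`; the per-channel output `out` feeds the non-illumination step below.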
S003, obtaining an enhanced image of the non-illumination area, and combining the enhanced image of the illumination area and the enhanced image of the non-illumination area to obtain an enhanced image of the site monitoring video gray level image.
After the enhanced image of the illumination area of the site monitoring video gray image has been obtained, the non-illumination area also needs to be enhanced to ensure image quality. Since the non-illumination area is less affected by shadows and its gray distribution is more even, it can be enhanced by a global linear transformation; to prevent excessive contrast from appearing between the illumination area and the non-illumination area, the linear transformation enhancement coefficient of the non-illumination area is defined from the mean channel values of the enhanced illumination area:

$$k^{c} = \frac{\bar{\hat I}_{\text{light}}^{\,c}}{\bar I_{\text{dark}}^{\,c}}$$

wherein $\bar{\hat I}_{\text{light}}^{\,c}$ is the mean of the enhanced channel values of the illumination area in channel $c$, $\bar I_{\text{dark}}^{\,c}$ is the mean of the channel values of the non-illumination area in channel $c$ before enhancement, and $k^{c}$ is the linear transformation enhancement coefficient of the non-illumination area in channel $c$.
The enhanced channel value of each pixel point of the non-illumination area is then obtained from the linear transformation enhancement coefficient:

$$\hat I_j^{c} = k^{c}\, I_j^{c}$$

wherein $\hat I_j^{c}$ is the enhanced channel value of pixel point $j$ of the non-illumination area in channel $c$, $I_j^{c}$ is its channel value in channel $c$ before enhancement, and $k^{c}$ is the linear transformation enhancement coefficient of the non-illumination area in channel $c$. The enhanced channel value of every pixel point of the non-illumination area is calculated for each channel with this formula to obtain the enhanced image of the non-illumination area, and finally the enhanced illumination area and the enhanced non-illumination area are merged to obtain the enhanced image of the site monitoring video gray image.
Thus, the enhanced image of the gray level image of the monitoring video of the construction site is obtained.
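A sketch of this step under the reconstructed coefficient $k^{c}$ (mean enhanced channel value of the illumination area divided by the mean channel value of the non-illumination area before enhancement) is given below; the division guard is an added assumption, and `enhanced_light` is the per-channel output of `enhance_illumination` above.

```python
import numpy as np

def enhance_non_illumination(bgr, enhanced_light, light_mask):
    """Scale every channel of the non-illumination area by its linear
    transformation enhancement coefficient and merge the two areas."""
    dark = ~light_mask
    out = enhanced_light.astype(np.float64).copy()     # already holds the enhanced illumination area
    for c in range(3):
        mean_light = enhanced_light[..., c][light_mask].mean()
        mean_dark = bgr[..., c][dark].astype(np.float64).mean()
        k = mean_light / max(mean_dark, 1.0)           # guard against a zero mean (assumption)
        out[..., c][dark] = k * bgr[..., c][dark]
    return np.clip(out, 0, 255).astype(np.uint8)       # enhanced site monitoring video image
```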
S004, safety monitoring and management of construction site operations are carried out according to the enhanced image of the site monitoring video gray image.
Each frame of the collected site monitoring video is enhanced, the bounding-box positions of potential safety-hazard targets are labeled manually, and the resulting video is used as a training set for a YOLOv3 neural network, with the mean squared error between the model output and the ground-truth labels as the training loss. Subsequent video frames are then fed to the trained network for target detection, and the output contains the category, position and other information of every detected target in the image. The movement speed and trajectory of each target are predicted from its position changes across consecutive frames, and if the predicted trajectories intersect within a short subsequent interval (within 5 seconds), a safety warning is issued.
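For the trajectory check, a minimal sketch is given below; the target positions and per-second velocities would in practice come from the YOLOv3 detections across consecutive frames, and the 5-second horizon follows the description while the safety radius and time step are assumptions.

```python
import numpy as np

def trajectories_intersect(p1, v1, p2, v2, horizon=5.0, dt=0.1, radius=1.0):
    """Return True if two targets, extrapolated linearly from positions p and
    velocities v (units per second), come closer than `radius` within
    `horizon` seconds."""
    p1, v1 = np.asarray(p1, float), np.asarray(v1, float)
    p2, v2 = np.asarray(p2, float), np.asarray(v2, float)
    for t in np.arange(0.0, horizon + dt, dt):
        if np.linalg.norm((p1 + t * v1) - (p2 + t * v2)) < radius:
            return True
    return False

# Example: two targets on a converging course trigger the warning.
if trajectories_intersect([0.0, 0.0], [1.0, 0.0], [6.0, 0.5], [-0.5, 0.0]):
    print("safety warning: predicted trajectories intersect within 5 s")
```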
Through the steps, the establishment of the intelligent building site management method is completed.
Referring to fig. 2, a block diagram of an intelligent worksite management system according to an embodiment of the present invention is shown, where the system includes the following modules:
the image preprocessing module is used for acquiring a site monitoring video image and preprocessing to obtain a site monitoring video gray level image;
The illumination area enhancement module is used for dividing the site monitoring video gray image into an illumination area and a non-illumination area according to a clustering algorithm; obtaining the replaced gray value of each pixel point in the illumination area according to the range of the channel values of each pixel point in the illumination area; obtaining the target degree of each pixel point in the illumination area according to the replaced gray value of each pixel point in the illumination area; obtaining the neighborhood pixel points of each pixel point in the illumination area, and obtaining the similarity between each pixel point in the illumination area and its neighborhood pixel points according to those neighborhood pixel points; determining the enhancement order of the pixel points of the illumination area according to their target degree, and obtaining the enhanced neighborhood pixel points of each pixel point in the illumination area; obtaining the enhanced channel values of each pixel point in the illumination area according to its enhanced neighborhood pixel points and the similarity between each pixel point in the illumination area and its neighborhood pixel points; obtaining the enhanced gray value of each pixel point in the illumination area according to its enhanced channel values; and obtaining the enhanced image of the illumination area according to the enhanced gray values of all the pixel points in the illumination area;
the non-illumination area enhancement module is used for determining linear transformation enhancement coefficients of the non-illumination area according to the enhancement image of the illumination area; enhancing the non-illumination region according to the linear transformation enhancement coefficient of the non-illumination region to obtain an enhanced image of the non-illumination region; and combining the enhanced image of the illumination area and the enhanced image of the non-illumination area to obtain the enhanced image of the site monitoring video gray level image.
The implementation module is used for carrying out safety monitoring and management of construction site operations according to the enhanced image of the site monitoring video gray image, completing the intelligent building site management method and system.
According to this embodiment, the site monitoring video gray image is divided into an illumination area and a non-illumination area according to a clustering algorithm, and the image is partitioned into blocks before clustering, which avoids the confusion caused by grouping all shadow areas together; the gray value of each pixel point of the illumination area is replaced by the range of its channel values, which makes the pixel points of the shadow areas within the illumination area stand out in the image; the enhancement order of the pixel points of the illumination area is determined by their target degree, and pixel points with a small target degree are enhanced before those with a large target degree, so that pixel points that have already been enhanced serve as references for enhancing the remaining ones and the contrast between pixel points of the same background does not become too large; the enhanced channel values of each pixel point of the illumination area are obtained from its already-enhanced neighborhood pixel points and from its similarity to its neighborhood pixel points, so the enhancement of every pixel point adapts to the difference between the pixel point and its neighborhood, avoiding over-enhancement or under-enhancement of the illumination area; and the linear transformation enhancement coefficient of the non-illumination area is determined from the enhanced image of the illumination area, which keeps the gray distribution of the whole non-illumination area uniform while keeping the resulting enhanced image of the site monitoring video gray image smooth.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the invention, but any modifications, equivalent substitutions, improvements, etc. within the principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. An intelligent building site management method, which is characterized by comprising the following steps:
collecting a site monitoring video image, and carrying out graying treatment to obtain a site monitoring video gray image;
dividing the site monitoring video gray image into an illumination area and a non-illumination area according to a clustering algorithm; obtaining the replaced gray value of each pixel point in the illumination area according to the range of the channel values of each pixel point in the illumination area; obtaining the target degree of each pixel point in the illumination area according to the replaced gray value of each pixel point in the illumination area; obtaining the neighborhood pixel points of each pixel point in the illumination area, and obtaining the similarity between each pixel point in the illumination area and its neighborhood pixel points according to those neighborhood pixel points; enhancing the pixel points of the illumination area according to the target degree of each pixel point in the illumination area, and obtaining the enhanced neighborhood pixel points of each pixel point in the illumination area; obtaining the enhanced channel values of each pixel point in the illumination area according to its enhanced neighborhood pixel points and the similarity between each pixel point in the illumination area and its neighborhood pixel points; obtaining the enhanced gray value of each pixel point in the illumination area according to its enhanced channel values; obtaining the enhanced image of the illumination area according to the enhanced gray values of all the pixel points in the illumination area;
determining a linear transformation enhancement coefficient of a non-illumination area according to the enhancement image of the illumination area; enhancing the non-illumination region according to the linear transformation enhancement coefficient of the non-illumination region to obtain an enhanced image of the non-illumination region; combining the enhanced image of the illumination area and the enhanced image of the non-illumination area to obtain an enhanced image of the site monitoring video gray level image;
and carrying out safety monitoring of construction site operations according to the enhanced image of the site monitoring video gray image.
2. The intelligent building site management method according to claim 1, wherein the dividing the gray level image of the building site monitoring video into the illumination area and the non-illumination area according to the clustering algorithm comprises the following specific steps:
clustering a plurality of block areas of the site monitoring video gray level image according to a clustering algorithm to obtain two clusters of the site monitoring video gray level image, marking an area where the cluster with high gray level average value is located as an illumination area, and marking an area where the cluster with low gray level average value is located as a non-illumination area.
3. The intelligent building site management method according to claim 1, wherein the obtaining the replaced gray value of each pixel point in the illumination area according to the range of the channel values of each pixel point in the illumination area comprises the following specific steps:
For a pixel point $i$ of the illumination area, the gray value of the pixel point is replaced by the range of its channel values; the specific calculation formula is:

$$g_i' = \max_{c\in\{R,G,B\}} I_i^{c} - \min_{c\in\{R,G,B\}} I_i^{c}$$

wherein $g_i'$ denotes the replaced gray value of pixel point $i$, $I_i^{c}$ is the channel value of pixel point $i$ in channel $c$, $\max_c I_i^{c}$ and $\min_c I_i^{c}$ denote the maximum and minimum of its three channel values, and the channel $c$ refers to any one of the $R$, $G$ and $B$ channels of the site monitoring video image.
4. The intelligent building site management method according to claim 1, wherein the obtaining the target degree of each pixel point in the illumination area according to the gray value of each pixel point after replacement in the illumination area comprises the following specific steps:
Combining, for each pixel point $i$, the differences between its channel values and the maximum gray value, the target degree of each pixel point in the illumination area is calculated as:

$$T_i = \frac{\tfrac{1}{3}\sum_{c\in\{R,G,B\}}\left(g_{\max} - I_i^{c}\right)}{g_i' + 1}$$

wherein $T_i$ denotes the target degree of pixel point $i$ in the site monitoring video gray image, $I_i^{c}$ denotes the channel value of pixel point $i$ in channel $c$, $g_{\max}$ is the maximum gray value in the site monitoring video gray image, and $g_i'$ denotes the replaced gray value of pixel point $i$; adding 1 to the denominator prevents division by zero.
5. The intelligent building site management method according to claim 1, wherein the method for obtaining the similarity between each pixel point in the illumination area and the neighboring pixel point according to the neighboring pixel point of each pixel point in the illumination area comprises the following specific steps:
A window of fixed size (side length $L$) is constructed with each pixel point of the illumination area as its central pixel point, and the pixel points in the window other than the central pixel point are taken as the neighborhood pixel points of that pixel point. The similarity between each pixel point $i$ of the illumination area and one of its neighborhood pixel points $j$ is:

$$S_{i,j} = \exp\!\left(-\frac{1}{3}\sum_{(a,b)}\left|\frac{I_i^{a}+1}{I_j^{a}+1}\cdot\frac{I_j^{b}+1}{I_i^{b}+1} - 1\right|\right)$$

wherein $S_{i,j}$ denotes the similarity between pixel point $i$ and its neighborhood pixel point $j$, $I_i^{a}$ and $I_j^{a}$ denote the channel values of pixel points $i$ and $j$ in channel $a$, $I_i^{b}$ and $I_j^{b}$ denote their channel values in channel $b$, the sum runs over the pairs of distinct channels $(a,b)$ among the $R$, $G$ and $B$ channels of the site monitoring video gray image, and 1 is added to the channel values to prevent a denominator of 0.
6. The intelligent building site management method according to claim 1, wherein the enhancing the pixels of the illumination area according to the target degree of each pixel in the illumination area, and obtaining the enhanced neighborhood pixels of each pixel in the illumination area, comprises the following specific steps:
The pixel point with the smallest target degree in the illumination area is taken as the starting point, the pixel points of the illumination area are enhanced in order of target degree from small to large, and, within the fixed-size window of each pixel point, the neighborhood pixel points whose target degree is smaller than that of the central pixel point are recorded as its enhanced neighborhood pixel points.
7. The intelligent building site management method according to claim 1, wherein the obtaining the enhanced channel value of each pixel in the illumination area according to the enhanced neighborhood pixel of each pixel in the illumination area and the similarity between each pixel in the illumination area and the neighborhood pixel thereof comprises the following specific calculation formulas:
$$\hat I_i^{c} = I_i^{c} + \frac{1}{n}\sum_{k=1}^{n} e^{\,S_{i,k}-1}\left(\hat I_{k}^{c} - I_i^{c}\right)$$

wherein $\hat I_i^{c}$ is the enhanced channel value of pixel point $i$ in channel $c$, $\hat I_k^{c}$ is the enhanced channel value of the $k$-th enhanced neighborhood pixel point of pixel point $i$ in channel $c$, $I_i^{c}$ denotes the original channel value of pixel point $i$ in channel $c$ in the site monitoring video gray image, $S_{i,k}$ denotes the similarity between pixel point $i$ and its $k$-th enhanced neighborhood pixel point, $n$ denotes the number of enhanced neighborhood pixel points of pixel point $i$, and $e$ denotes the exponential function with the natural constant as its base.
8. The intelligent building site management method according to claim 1, wherein the specific calculation formula for determining the linear transformation enhancement coefficient of the non-illumination area according to the enhanced image of the illumination area is:
wherein the quantities in the formula are: the mean channel value of the illumination area in a given channel after enhancement; the mean channel value of the non-illumination area in that channel before enhancement; and the linear transformation enhancement coefficient of the non-illumination area in that channel.
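One plausible reading, sketched below, takes the coefficient as the ratio of the two means and applies it as a per-channel linear scaling of the non-illumination pixels; this is an assumption for illustration, not the patent's stated formula.

```python
import numpy as np

def linear_enhancement(unlit_pixels: np.ndarray, lit_enhanced_pixels: np.ndarray) -> np.ndarray:
    """Enhance the non-illumination area with a per-channel linear coefficient.

    Assumed reading of the claim: the coefficient is the ratio of the enhanced
    illumination area's mean channel value to the non-illumination area's mean
    channel value before enhancement; the non-illumination pixels are then
    scaled by that coefficient.
    """
    lit_mean = lit_enhanced_pixels.reshape(-1, 3).astype(np.float64).mean(axis=0)
    unlit_mean = unlit_pixels.reshape(-1, 3).astype(np.float64).mean(axis=0)
    coeff = lit_mean / (unlit_mean + 1e-6)       # per-channel linear enhancement coefficient
    enhanced = unlit_pixels.astype(np.float64) * coeff
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```

Scaling by the ratio of means pulls the brightness of the dark region toward that of the enhanced lit region, which is the effect the claim attributes to the coefficient.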
9. The intelligent building site management method according to claim 1, wherein the safety monitoring of the building site construction according to the enhanced image of the building site monitoring video gray level image comprises the following specific steps:
Each frame of the collected construction site monitoring video is enhanced, the positions of the targets in the enhanced image of the construction site monitoring video gray level image are obtained with a neural network, whether the motion trajectories of the targets intersect within a preset time is judged from the target positions, and a safety early warning is issued if the trajectories intersect.
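The claim does not specify the trajectory model or the intersection test. The sketch below extrapolates each target with a constant-velocity model over an assumed prediction horizon and flags pairs whose predicted positions come closer than an assumed safety distance; the names `horizon` and `safe_dist` are placeholders, not the patent's parameters.

```python
import numpy as np

def trajectories_intersect(track_a, track_b, horizon: float = 3.0, safe_dist: float = 1.0) -> bool:
    """Check whether two targets are predicted to come within `safe_dist` of each
    other inside `horizon` time steps, by linearly extrapolating the last two
    observed positions (constant-velocity assumption, not the patent's model)."""
    pa0, pa1 = np.asarray(track_a[-2:], dtype=float)
    pb0, pb1 = np.asarray(track_b[-2:], dtype=float)
    va, vb = pa1 - pa0, pb1 - pb0                # per-step displacement of each target
    for t in np.linspace(0.0, horizon, 31):      # sample the prediction window
        if np.linalg.norm((pa1 + va * t) - (pb1 + vb * t)) < safe_dist:
            return True                          # predicted conflict: raise an early warning
    return False
```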
10. An intelligent worksite management system, comprising the following modules:
the image preprocessing module is used for acquiring a site monitoring video image and carrying out graying processing to obtain a site monitoring video gray image;
the illumination area enhancement module is used for dividing the construction site monitoring video gray level image into an illumination area and a non-illumination area according to a clustering algorithm; obtaining the replaced gray value of each pixel point in the illumination area according to the range of that pixel point's channel values; obtaining the target degree of each pixel point in the illumination area according to its replaced gray value; acquiring the neighborhood pixel points of each pixel point in the illumination area and obtaining the similarity between each pixel point in the illumination area and its neighborhood pixel points; enhancing the pixel points of the illumination area according to the target degree of each pixel point in the illumination area and obtaining the enhanced neighborhood pixel points of each pixel point in the illumination area; obtaining the enhanced channel value of each pixel point in the illumination area according to its enhanced neighborhood pixel points and the similarity between each pixel point in the illumination area and its neighborhood pixel points; and obtaining the enhanced image of the illumination area according to the enhanced channel values of all the pixel points in the illumination area;
the non-illumination area enhancement module is used for determining the linear transformation enhancement coefficient of the non-illumination area according to the enhanced image of the illumination area; enhancing the non-illumination area according to the linear transformation enhancement coefficient of the non-illumination area to obtain an enhanced image of the non-illumination area; and combining the enhanced image of the illumination area with the enhanced image of the non-illumination area to obtain an enhanced image of the construction site monitoring video gray level image;
and the implementation module is used for carrying out safety monitoring of construction at the construction site according to the enhanced image of the construction site monitoring video gray level image.
CN202311345497.6A 2023-10-18 2023-10-18 Intelligent building site management method and system Active CN117079197B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311345497.6A CN117079197B (en) 2023-10-18 2023-10-18 Intelligent building site management method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311345497.6A CN117079197B (en) 2023-10-18 2023-10-18 Intelligent building site management method and system

Publications (2)

Publication Number Publication Date
CN117079197A (en) 2023-11-17
CN117079197B CN117079197B (en) 2024-03-05

Family

ID=88706532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311345497.6A Active CN117079197B (en) 2023-10-18 2023-10-18 Intelligent building site management method and system

Country Status (1)

Country Link
CN (1) CN117079197B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180338086A1 (en) * 2017-05-16 2018-11-22 Apple Inc. Synthetic Long Exposure Image with Optional Enhancement Using a Guide Image
CN112200747A (en) * 2020-10-16 2021-01-08 展讯通信(上海)有限公司 Image processing method and device and computer readable storage medium
CN114723701A (en) * 2022-03-31 2022-07-08 南通博莹机械铸造有限公司 Gear defect detection method and system based on computer vision
CN115393406A (en) * 2022-08-17 2022-11-25 武汉华中天经通视科技有限公司 Image registration method based on twin convolution network
CN115409833A (en) * 2022-10-28 2022-11-29 一道新能源科技(衢州)有限公司 Hot spot defect detection method of photovoltaic panel based on unsharp mask algorithm
WO2023134791A2 (en) * 2022-12-16 2023-07-20 苏州迈创信息技术有限公司 Environmental security engineering monitoring data management method and system
CN116071807A (en) * 2023-03-06 2023-05-05 深圳市网联天下科技有限公司 Campus card intelligent early warning method and system based on video monitoring
CN116805316A (en) * 2023-08-25 2023-09-26 深圳市鹏顺兴包装制品有限公司 Degradable plastic processing quality detection method based on image enhancement

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QINGLI CHEN et al.: "An Improved R-L Fractional Differential Image Enhancement Algorithm", 2021 International Conference on Computer Information Science and Artificial Intelligence (CISAI), pages 1-5 *
陈生奇 et al.: "Research on a video image enhancement system based on DM6467" (基于DM6467的视频图像增强系统研究), 光学与光电技术 (Optics and Optoelectronic Technology), vol. 21, no. 4, pages 67-74 *

Also Published As

Publication number Publication date
CN117079197B (en) 2024-03-05

Similar Documents

Publication Publication Date Title
CN108615226B (en) Image defogging method based on generation type countermeasure network
CN107742099A (en) A kind of crowd density estimation based on full convolutional network, the method for demographics
CN112183471A (en) Automatic detection method and system for standard wearing of epidemic prevention mask of field personnel
CN105160297B (en) Masked man's event automatic detection method based on features of skin colors
CN108564085B (en) Method for automatically reading of pointer type instrument
CN105279772B (en) A kind of trackability method of discrimination of infrared sequence image
CN104966304A (en) Kalman filtering and nonparametric background model-based multi-target detection tracking method
CN109389569B (en) Monitoring video real-time defogging method based on improved DehazeNet
CN112396635B (en) Multi-target detection method based on multiple devices in complex environment
CN109685045A (en) A kind of Moving Targets Based on Video Streams tracking and system
CN109726649B (en) Remote sensing image cloud detection method and system and electronic equipment
CN113763427B (en) Multi-target tracking method based on coarse-to-fine shielding processing
CN112749654A (en) Deep neural network model construction method, system and device for video fog monitoring
CN116758081B (en) Unmanned aerial vehicle road and bridge inspection image processing method
CN115311623A (en) Equipment oil leakage detection method and system based on infrared thermal imaging
CN116703787B (en) Building construction safety risk early warning method and system
CN117079197B (en) Intelligent building site management method and system
CN110503092B (en) Improved SSD monitoring video target detection method based on field adaptation
CN110135274B (en) Face recognition-based people flow statistics method
CN110765900A (en) DSSD-based automatic illegal building detection method and system
CN115995097A (en) Deep learning-based safety helmet wearing standard judging method
CN110163081A (en) Regional invasion real-time detection method, system and storage medium based on SSD
Li et al. Image object detection algorithm based on improved Gaussian mixture model
CN111145219B (en) Efficient video moving target detection method based on Codebook principle
CN114419018A (en) Image sampling method, system, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant