CN113344878A - Image processing method and system - Google Patents

Image processing method and system

Info

Publication number
CN113344878A
Authority
CN
China
Prior art keywords
scene
image
sample
pixel point
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110641215.1A
Other languages
Chinese (zh)
Other versions
CN113344878B (en)
Inventor
任永建
师天磊
许志强
孙昌勋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Ronglian Yitong Information Technology Co ltd
Original Assignee
Beijing Ronglian Yitong Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Ronglian Yitong Information Technology Co ltd filed Critical Beijing Ronglian Yitong Information Technology Co ltd
Priority to CN202110641215.1A priority Critical patent/CN113344878B/en
Publication of CN113344878A publication Critical patent/CN113344878A/en
Application granted granted Critical
Publication of CN113344878B publication Critical patent/CN113344878B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image processing method and system, comprising the following steps: obtaining sample scene information and a sample scene image shot in a sample scene, and performing model training according to the sample scene information and the sample scene image to obtain a sample cv model; acquiring scene information of a scene to be identified, and shooting an actual scene image of the scene to be identified; comparing the scene information of the scene to be identified with the sample scene information to obtain a comparison result, and adjusting the identification range of the pixels of the sample cv model according to the comparison result to obtain a corrected cv model; and inputting the actual scene image into the corrected cv model, and outputting an identification result of the actual scene image. The method has the advantage that the maximum identification pixel and the minimum identification pixel of the sample cv model are adjusted according to the scene information of the scene to be identified and the sample scene information, which increases the accuracy of the final identification result.

Description

Image processing method and system
Technical Field
The invention belongs to the technical field of visual inspection, and particularly relates to an image processing method and system.
Background
A model algorithm can generate an image recognition model from a model training data set. Because the scenes in a training data set cannot be exhaustive, and because of the physics of imaging, an object whose size in the picture falls outside a certain range becomes too small or too large to distinguish reliably. An algorithm model therefore has a maximum recognition pixel and a minimum recognition pixel within which objects are recognizable, and recognition accuracy drops sharply once a target is larger than the maximum recognition pixel or smaller than the minimum recognition pixel. At present, the recognition parameters of a CV model are built in and fixed and cannot be flexibly adjusted: if the target size in a picture exceeds the maximum recognition pixel or does not reach the minimum recognition pixel, the target is not recognized and is not reflected in the algorithm's recognition result.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the art described above. Therefore, a first objective of the present invention is to provide an image processing method that adjusts the maximum recognition pixel and the minimum recognition pixel of the sample cv model according to the scene information of the scene to be recognized, so as to increase the accuracy of the final recognition result.
A second object of the present invention is to provide an image processing system.
In order to achieve the above object, an embodiment of a first aspect of the present invention provides an image processing method, including:
obtaining sample scene information and a sample scene image shot in a sample scene, and performing model training according to the sample scene information and the sample scene image to obtain a sample cv model; the configuration parameters of the sample cv model include an identification range of pixels; the identification range comprises a maximum identification pixel and a minimum identification pixel;
acquiring scene information of a scene to be identified, and shooting an actual scene image of the scene to be identified;
comparing the scene information of the scene to be identified with the sample scene information to obtain a comparison result, and adjusting the identification range of the pixels of the sample cv model according to the comparison result to obtain a corrected cv model;
and inputting the actual scene image into the corrected cv model, and outputting an identification result of the actual scene image.
Further, the sample scene information includes the light intensity of the sample scene, the angle and the height of the camera when the image of the sample scene is shot.
Further, after outputting the recognition result of the actual scene image, the method further includes:
and analyzing the recognition result, and sending out an alarm prompt when determining that the corresponding target area in the actual scene image is abnormal.
Further, the comparing the scene information of the scene to be identified with the sample scene information to obtain a comparison result, and adjusting the identification range of the pixels of the sample cv model according to the comparison result includes:
according to the comparison result, when the light intensity of the scene to be identified is greater than that of the sample scene, and the angle and height of the camera shooting the scene to be identified are each smaller than those of the camera shooting the sample scene, determining that the scene to be identified is, relative to the sample scene, a close-range scene with good light, and enlarging the maximum identification pixel and the minimum identification pixel of the sample cv model;
according to the comparison result, when the light intensity of the scene to be identified is smaller than that of the sample scene, and the angle and height of the camera shooting the scene to be identified are each larger than those of the camera shooting the sample scene, determining that the scene to be identified is, relative to the sample scene, a long-range scene with poor light, and reducing the maximum identification pixel and the minimum identification pixel of the sample cv model.
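A minimal sketch of this comparison-and-adjustment step is given below. The `SceneInfo` and `CvModel` structures, the field names, and the fixed scale factor are illustrative assumptions; the patent specifies only the direction of each adjustment, not its magnitude.

```python
# Hypothetical sketch of the comparison-and-adjustment step. All names and the
# fixed scale factor are illustrative assumptions, not the patent's API.
from dataclasses import dataclass

@dataclass
class SceneInfo:
    light: float    # light intensity of the scene
    angle: float    # camera angle when shooting the scene
    height: float   # camera height when shooting the scene

@dataclass
class CvModel:
    min_pixel: int  # minimum identification pixel
    max_pixel: int  # maximum identification pixel

def correct_cv_model(model: CvModel, sample: SceneInfo, scene: SceneInfo,
                     scale: float = 1.25) -> CvModel:
    """Shift the pixel identification range according to the scene comparison."""
    if scene.light > sample.light and scene.angle < sample.angle and scene.height < sample.height:
        # close-range scene with good light: targets image larger, enlarge both bounds
        return CvModel(round(model.min_pixel * scale), round(model.max_pixel * scale))
    if scene.light < sample.light and scene.angle > sample.angle and scene.height > sample.height:
        # long-range scene with poor light: targets image smaller, reduce both bounds
        return CvModel(round(model.min_pixel / scale), round(model.max_pixel / scale))
    return model  # mixed comparison: keep the sample model's range unchanged

sample_info = SceneInfo(light=500.0, angle=40.0, height=4.0)
close_info = SceneInfo(light=900.0, angle=25.0, height=2.5)
corrected = correct_cv_model(CvModel(20, 400), sample_info, close_info)
print(corrected.min_pixel, corrected.max_pixel)  # 25 500
```

A mixed comparison (for example, brighter light but a higher camera) matches neither rule, so the sketch leaves the range untouched; the patent does not describe that case.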
Further, after outputting the recognition result of the actual scene image, the method further includes:
the recognition result comprises sub-recognition results of a plurality of regions to be recognized included in the actual scene image; obtaining the areas of the plurality of regions to be recognized, comparing each area with a preset area, determining the regions to be recognized whose areas are smaller than or equal to the preset area as first target regions, and determining the regions to be recognized whose areas are larger than the preset area as second target regions;
determining a first number of recognition errors according to the sub-recognition results of the first target regions, and enlarging the minimum recognition range of the corrected cv model when the first number is larger than a preset number;
and determining a second number of recognition errors according to the sub-recognition results of the second target regions, and reducing the maximum recognition range of the corrected cv model when the second number is larger than a preset number.
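The error-count feedback above can be sketched as follows; the region representation, the 10% adjustment step, and the function name are assumptions for illustration only.

```python
# Illustrative sketch of the post-recognition feedback step. The (area, is_error)
# region representation and the 10% step are assumptions not in the patent.
def refine_range(min_px, max_px, regions, preset_area, preset_count, step=0.1):
    """regions: list of (area, is_error) pairs taken from the sub-recognition results."""
    small_errors = sum(1 for area, err in regions if err and area <= preset_area)  # first target regions
    large_errors = sum(1 for area, err in regions if err and area > preset_area)   # second target regions
    if small_errors > preset_count:
        min_px = round(min_px * (1 + step))   # enlarge the minimum recognition range
    if large_errors > preset_count:
        max_px = round(max_px * (1 - step))   # reduce the maximum recognition range
    return min_px, max_px

regions = [(50, True), (60, True), (80, True), (900, True), (1000, False)]
new_min, new_max = refine_range(20, 400, regions, preset_area=100, preset_count=2)
```

With three erroneous small regions against a preset count of two, only the minimum recognition range is raised; the single erroneous large region leaves the maximum untouched.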
Further, the configuration parameters of the sample cv model further include at least one of a start time, an end time, a detection period, an alarm period, an algorithm threshold, and a detection region setting.
Further, before the actual scene image is input into the modified cv model, the method further includes:
acquiring the size of the actual scene image, judging whether it is the same as a preset size, and normalizing the size of the actual scene image when it differs from the preset size;
for each pixel point in the normalized actual scene image, acquiring the absolute value of the pixel difference between that pixel point and each adjacent pixel point (any pixel point within a preset distance range) to obtain a plurality of absolute pixel differences; screening out the minimum absolute pixel difference and multiplying it by a preset smoothing coefficient to obtain a smooth pixel value; and adding the smooth pixel value to the pixel value of each pixel point in the actual scene image to obtain the smoothed actual scene image;
carrying out image graying on the actual scene image after the smoothing processing to obtain a grayscale image;
acquiring a first gradient value of each pixel point in the gray level image in the horizontal direction;
acquiring a second gradient value of each pixel point in the gray level image in the vertical direction;
calculating according to a first gradient value and a second gradient value of each pixel point to obtain a gradient amplitude of each pixel point, calculating according to the gradient amplitude of each pixel point to obtain an average amplitude, screening out the pixel points of which the gradient amplitudes are larger than the average amplitude, and generating a first pixel point set; the pixel points in the first pixel point set are edge pixel points in the gray level image;
acquiring the gray value of each pixel point in the gray image, screening out the pixel points with the gray values larger than a preset gray value, and generating a second pixel point set;
acquiring the intersection of the first pixel point set and the second pixel point set;
acquiring a union set of the first pixel point set and the second pixel point set;
calculating the definition of the grayscale image from the intersection and the union, and judging whether the definition is smaller than a preset definition; when it is, inputting the grayscale image into a pre-trained depth information acquisition model and outputting the depth value of each pixel point in the grayscale image; calculating the circle-of-confusion diameter of each pixel point from its depth value; and deblurring the corresponding pixel points in the grayscale image according to the circle-of-confusion diameter of each pixel point.
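The edge-set and bright-set construction and the definition check above can be sketched on a plain 2D list of gray values. The patent does not state the exact definition formula; the ratio of the intersection to the union of the two sets is used here as one plausible reading, and the forward-difference gradients are likewise an assumption.

```python
# Sketch of the edge/bright pixel sets and a definition (sharpness) score built
# from their intersection and union. The exact formula is an assumption.
def sharpness(gray, preset_gray):
    h, w = len(gray), len(gray[0])

    def grad(i, j):
        # gradient magnitude from horizontal and vertical forward differences
        gx = gray[i][j + 1] - gray[i][j] if j + 1 < w else 0
        gy = gray[i + 1][j] - gray[i][j] if i + 1 < h else 0
        return (gx * gx + gy * gy) ** 0.5

    mags = {(i, j): grad(i, j) for i in range(h) for j in range(w)}
    avg = sum(mags.values()) / len(mags)
    edge_set = {p for p, m in mags.items() if m > avg}                 # first pixel point set
    bright_set = {p for p in mags if gray[p[0]][p[1]] > preset_gray}   # second pixel point set
    inter = edge_set & bright_set
    union = edge_set | bright_set
    return len(inter) / len(union) if union else 0.0

img = [
    [200, 200, 10, 10],
    [200, 200, 10, 10],
    [200, 200, 10, 10],
    [200, 200, 10, 10],
]
score = sharpness(img, preset_gray=128)
```

Here the bright-to-dark step at column 1 makes every edge pixel also a bright pixel, so the intersection holds 4 of the union's 8 pixels and the score is 0.5.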
Further, before the actual scene image is input into the modified cv model, the method further includes:
calculating the signal-to-noise ratio of the actual scene image, judging whether the signal-to-noise ratio is smaller than a preset signal-to-noise ratio or not, and carrying out filtering processing on the actual scene image when the signal-to-noise ratio is smaller than the preset signal-to-noise ratio;
the signal-to-noise ratio C of the actual scene image is calculated as shown in formula (1):
[formula (1) appears only as an image in the original publication]
g is the maximum gray value of a pixel point in the actual scene image; m is the length of the actual scene image; n is the width of the actual scene image; h (i, j) is the gray value of the pixel point (i, j) in the actual scene image.
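Formula (1) is reproduced only as an image in this text, so the sketch below assumes a conventional peak-signal form built from the listed quantities G, M, N and h(i, j); the patent's exact expression may differ.

```python
import math

# Assumed peak-signal-style reading of formula (1); the patent's actual
# expression is only available as an image and may differ from this form.
def snr(image):
    """image: M x N list of rows of gray values h(i, j)."""
    M, N = len(image), len(image[0])
    G = max(max(row) for row in image)                     # maximum gray value
    mean_sq = sum(h * h for row in image for h in row) / (M * N)
    return 10 * math.log10(G * G / mean_sq) if mean_sq else float("inf")

noisy = [[100, 120], [90, 110]]
c = snr(noisy)
```

In the method's flow, this value would be compared against the preset signal-to-noise ratio to decide whether the filtering step runs at all.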
Further, after the filtering process is performed on the actual scene image, the method further includes:
calculating a filter coefficient K for the actual scene image, as shown in formula (2):
[formula (2) appears only as an image in the original publication]
wherein, W is the size of the filtering window; λ is laplace operator; l (i, j) is the weight of the pixel point (i, j) in the actual scene image; the weight is calculated according to the gradient information of the pixel points;
calculating the gray value f (i, j) of the pixel point (i, j) in the actual scene image after filtering processing according to the filtering coefficient K of the actual scene image, as shown in formula (3):
[formula (3) appears only as an image in the original publication]
calculating the gray value of each pixel point in the filtered actual scene image;
screening out the pixel points whose gray value is larger than a preset gray value to generate a third pixel point set, and screening out the pixel points whose gray value is smaller than the preset gray value to generate a fourth pixel point set;
reducing the gray value of each pixel point in the third pixel point set;
and increasing the gray value of each pixel point in the fourth pixel point set.
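The final contrast step above (reduce gray values in the third set, increase those in the fourth) can be sketched as follows; the fixed adjustment amount is an assumption, since the patent gives only the direction of each change.

```python
# Sketch of the post-filter contrast compression: gray values above the preset
# value are pulled down, those below are pushed up. The step of 10 is assumed.
def compress_contrast(gray, preset_gray, amount=10):
    out = []
    for row in gray:
        new_row = []
        for g in row:
            if g > preset_gray:
                new_row.append(max(g - amount, 0))      # third pixel point set: reduce
            elif g < preset_gray:
                new_row.append(min(g + amount, 255))    # fourth pixel point set: increase
            else:
                new_row.append(g)                       # equal to the preset value: unchanged
        out.append(new_row)
    return out

result = compress_contrast([[30, 128, 220]], preset_gray=128)
```

Note this pulls extreme gray values toward the preset value, narrowing the dynamic range rather than stretching it, which matches the stated directions.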
In order to achieve the above object, a second embodiment of the present invention provides an image processing system, including:
the model training module is used for acquiring sample scene information and a sample scene image shot in a sample scene, and performing model training according to the sample scene information and the sample scene image to obtain a sample cv model; the configuration parameters of the sample cv model include an identification range of pixels; the identification range comprises a maximum identification pixel and a minimum identification pixel;
the system comprises an image acquisition module, a scene recognition module and a scene recognition module, wherein the image acquisition module is used for acquiring scene information of a scene to be recognized and shooting an actual scene image of the scene to be recognized;
the model correction module is used for comparing the scene information of the scene to be identified with the sample scene information to obtain a comparison result, and adjusting the identification range of the pixels of the sample cv model according to the comparison result to obtain a corrected cv model;
and the image identification module is used for inputting the actual scene image into the corrected cv model and outputting an identification result of the actual scene image.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of an image processing method of the present invention;
FIG. 2 is a diagram of the adjustment of configuration parameters of the sample cv model;
FIG. 3 is a block diagram of an image processing system according to the present invention.
Reference numerals:
the device comprises a training module 1, an acquisition module 2, a correction module 3 and an identification module 4.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
An image processing method and system according to an embodiment of the present invention are described with reference to fig. 1 and fig. 3.
As shown in fig. 1, an image processing method includes steps S1-S4:
s1, obtaining sample scene information and a sample scene image shot in a sample scene, and performing model training according to the sample scene information and the sample scene image to obtain a sample cv model; the configuration parameters of the sample cv model include an identification range of pixels; the identification range comprises a maximum identification pixel and a minimum identification pixel;
s2, acquiring scene information of a scene to be identified and shooting an actual scene image of the scene to be identified;
s3, comparing the scene information of the scene to be identified with the sample scene information to obtain a comparison result, and adjusting the identification range of the pixels of the sample cv model according to the comparison result to obtain a corrected cv model;
and S4, inputting the actual scene image into the corrected cv model and outputting the identification result of the actual scene image.
The working principle of the scheme is as follows: a sample cv model is trained from sample scene information and sample scene images, the scene information of the scene to be identified is compared with the sample scene information, the identification range of the pixels of the sample cv model is adjusted according to the comparison result to obtain a corrected cv model, and the actual scene image is input into the corrected cv model to output an identification result. Performing model training according to the sample scene information and the sample scene images to obtain the sample cv model includes: performing image analysis on a plurality of sample scene images to obtain the foreground image of each sample scene image, and segmenting each foreground image from its sample scene image, leaving a blank frame in that sample scene image; splicing the foreground images into the blank frames of different sample scene images to obtain a plurality of spliced images; and performing model training according to the sample scene images, the spliced images and the sample scene information to obtain the cv model.
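The foreground-splicing augmentation described above can be sketched as follows; the mask-based representation, the use of `None` to mark blank-frame pixels, and the function names are illustrative assumptions.

```python
# Illustrative sketch of the foreground-splicing augmentation: cut the
# foreground out of one sample image (leaving a blank frame) and paste another
# sample's foreground into it. Images are 2D lists; None marks blank pixels.
def cut_foreground(image, mask):
    """Return (foreground patch, image with a blank frame where mask is set)."""
    fg, blanked = [], []
    for img_row, mask_row in zip(image, mask):
        fg.append([p if m else None for p, m in zip(img_row, mask_row)])
        blanked.append([None if m else p for p, m in zip(img_row, mask_row)])
    return fg, blanked

def splice(blanked, fg):
    """Fill the blank frame of one sample image with another image's foreground."""
    return [[f if b is None else b for b, f in zip(brow, frow)]
            for brow, frow in zip(blanked, fg)]

img_a = [[1, 2], [3, 4]]
img_b = [[5, 6], [7, 8]]
mask = [[0, 1], [0, 1]]   # right column is the foreground in both samples
fg_a, blank_a = cut_foreground(img_a, mask)
fg_b, blank_b = cut_foreground(img_b, mask)
spliced = splice(blank_a, fg_b)   # a's background carrying b's foreground
```

Each spliced image pairs one sample's background with another's foreground, multiplying the effective number of training scenes without new photography.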
The beneficial effect of above-mentioned scheme: the scene information of the scene to be identified is compared with the sample scene information to obtain a comparison result, and the identification range of the pixels of the sample cv model is adjusted according to the comparison result, so that the adjustability of the configuration parameters of the cv model is increased, the accuracy of the identification result of the cv model is improved, and the practicability of the cv model is improved.
According to some embodiments of the invention, the sample scene information comprises a light intensity of a sample scene, an angle and a height of a camera when the sample scene image is taken.
The working principle and beneficial effects of the scheme are as follows: the light intensity of the sample scene and the angle and height of the camera when the sample scene image is shot are important factors influencing the quality and type of the actual scene image, and they ultimately influence the accuracy of the cv model's recognition result.
According to some embodiments of the present invention, after outputting the recognition result of the actual scene image, the method further includes:
and analyzing the recognition result, and sending out an alarm prompt when determining that the corresponding target area in the actual scene image is abnormal.
The working principle and the beneficial effects of the scheme are as follows: and analyzing the recognition result, sending an alarm prompt when determining that the corresponding target area in the actual scene image is abnormal, and reminding a worker to adjust the configuration parameters of the cv model in time so as to increase the accuracy of the final recognition result.
According to some embodiments of the present invention, the comparing the scene information of the scene to be identified with the sample scene information to obtain a comparison result, and adjusting the identification range of the pixel of the sample cv model according to the comparison result includes:
according to the comparison result, when the light intensity of the scene to be identified is greater than that of the sample scene, and the angle and height of the camera shooting the scene to be identified are each smaller than those of the camera shooting the sample scene, determining that the scene to be identified is, relative to the sample scene, a close-range scene with good light, and enlarging the maximum identification pixel and the minimum identification pixel of the sample cv model;
according to the comparison result, when the light intensity of the scene to be identified is smaller than that of the sample scene, and the angle and height of the camera shooting the scene to be identified are each larger than those of the camera shooting the sample scene, determining that the scene to be identified is, relative to the sample scene, a long-range scene with poor light, and reducing the maximum identification pixel and the minimum identification pixel of the sample cv model.
The working principle of the scheme is as follows: the comparison result classifies the scene to be identified, relative to the sample scene, as either a close-range scene with good light or a long-range scene with poor light, and the maximum identification pixel and the minimum identification pixel of the sample cv model are enlarged or reduced accordingly.
The beneficial effect of the above scheme: the scene type of the scene to be identified is accurately determined from its light intensity and from the angle and height of the camera shooting it, and the maximum identification pixel and the minimum identification pixel of the sample cv model are adjusted according to the scene type, which greatly improves the accuracy of the cv model's recognition result and reduces missed detections and false alarms.
According to some embodiments of the present invention, after outputting the recognition result of the actual scene image, the method further includes:
the recognition result comprises sub-recognition results of a plurality of regions to be recognized included in the actual scene image; obtaining the areas of the plurality of regions to be recognized, comparing each area with a preset area, determining the regions to be recognized whose areas are smaller than or equal to the preset area as first target regions, and determining the regions to be recognized whose areas are larger than the preset area as second target regions;
determining a first number of recognition errors according to the sub-recognition results of the first target regions, and enlarging the minimum recognition range of the corrected cv model when the first number is larger than a preset number;
and determining a second number of recognition errors according to the sub-recognition results of the second target regions, and reducing the maximum recognition range of the corrected cv model when the second number is larger than a preset number.
The working principle of the scheme is as follows: the regions to be recognized in the actual scene image are split by area into first target regions (at or below the preset area) and second target regions (above it); when the number of recognition errors among the first target regions exceeds a preset number, the minimum recognition range of the corrected cv model is enlarged, and when the number of recognition errors among the second target regions exceeds the preset number, the maximum recognition range is reduced.
The beneficial effect of the above scheme: when many small targets in the picture are misreported, the minimum recognition range is appropriately enlarged; when many large targets are misreported, the maximum recognition range is appropriately reduced. This keeps the cv model's false alarms and missed reports under control and improves the accuracy of the final recognition result.
As shown in fig. 2, according to some embodiments of the present invention, the configuration parameters of the sample cv model further include at least one of a start time, an end time, a detection period, an alarm period, an algorithm threshold, a detection region setting.
The working principle and beneficial effects of the scheme are as follows: the configuration parameters of the sample cv model further include at least one of a start time, an end time, a detection period, an alarm period, an algorithm threshold and a detection region setting, wherein the start time is when the cv model starts recognition, the end time is when it ends recognition, and the algorithm threshold is the recognition range of the cv model.
According to some embodiments of the invention, before inputting the actual scene image into the modified cv model, the method further comprises:
acquiring the size of the actual scene image, judging whether it is the same as a preset size, and normalizing the size of the actual scene image when it differs from the preset size;
for each pixel point in the normalized actual scene image, acquiring the absolute value of the pixel difference between that pixel point and each adjacent pixel point (any pixel point within a preset distance range) to obtain a plurality of absolute pixel differences; screening out the minimum absolute pixel difference and multiplying it by a preset smoothing coefficient to obtain a smooth pixel value; and adding the smooth pixel value to the pixel value of each pixel point in the actual scene image to obtain the smoothed actual scene image;
carrying out image graying on the actual scene image after the smoothing processing to obtain a grayscale image;
acquiring a first gradient value of each pixel point in the gray level image in the horizontal direction;
acquiring a second gradient value of each pixel point in the gray level image in the vertical direction;
calculating according to a first gradient value and a second gradient value of each pixel point to obtain a gradient amplitude of each pixel point, calculating according to the gradient amplitude of each pixel point to obtain an average amplitude, screening out the pixel points of which the gradient amplitudes are larger than the average amplitude, and generating a first pixel point set; the pixel points in the first pixel point set are edge pixel points in the gray level image;
acquiring the gray value of each pixel point in the gray image, screening out the pixel points with the gray values larger than a preset gray value, and generating a second pixel point set;
acquiring the intersection of the first pixel point set and the second pixel point set;
acquiring a union set of the first pixel point set and the second pixel point set;
calculating the definition of the grayscale image from the intersection and the union, and judging whether the definition is smaller than a preset definition; when it is, inputting the grayscale image into a pre-trained depth information acquisition model and outputting the depth value of each pixel point in the grayscale image; calculating the circle-of-confusion diameter of each pixel point from its depth value; and deblurring the corresponding pixel points in the grayscale image according to the circle-of-confusion diameter of each pixel point.
The working principle of the scheme is as follows: the size of the actual scene image is acquired and, if it differs from the preset size, normalized; the normalized image is smoothed by adding to each pixel value the minimum absolute pixel difference to its adjacent pixel points multiplied by the preset smoothing coefficient; the smoothed image is converted to grayscale; a first (horizontal) and a second (vertical) gradient value are acquired for each pixel point and combined into a gradient magnitude, and the pixel points whose gradient magnitude exceeds the average magnitude form the first pixel point set (the edge pixel points), while the pixel points whose gray value exceeds the preset gray value form the second pixel point set; the sharpness of the grayscale image is calculated from the intersection and the union of the two sets, and when the sharpness is smaller than the preset sharpness, the grayscale image is input into the pre-trained depth information acquisition model, the depth value of each pixel point is output, the circle-of-confusion diameter of each pixel point is calculated from its depth value, and the corresponding pixel points in the grayscale image are deblurred accordingly.
The beneficial effects of the above scheme: the sharpness of the actual scene image also determines the accuracy of the final recognition result, and if the sharpness of the actual scene image is low, the recognition result is inaccurate; the scheme therefore provides a method for detecting the sharpness of the actual scene image and increasing it. The normalization processing crops the actual scene image to the preset size. Converting the smoothed actual scene image to grayscale avoids banding distortion. The gradient magnitude calculated from the first gradient value (horizontal direction) and the second gradient value (vertical direction) of each pixel point in the grayscale image is more accurate. The sharpness of the grayscale image is calculated from the intersection and the union of the first and second pixel point sets as an intersection-over-union ratio, namely the ratio of the intersection to the union. When it is determined that the sharpness is smaller than the preset sharpness, the grayscale image is input into a pre-trained depth information acquisition model, which outputs the depth value of each pixel point, that is, the distance between the photographed scene and the shooting camera; the circle-of-confusion diameter of each pixel point is then calculated from its depth value, and the corresponding pixel points in the grayscale image are deblurred according to those diameters, so that the deblurred actual scene image is clearer and the accuracy of the final recognition result is improved.
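The description does not state how the circle-of-confusion diameter is derived from a depth value. A common thin-lens approximation, given here purely as a hypothetical illustration (the aperture, focal length, and focus distance are assumed parameters, not values from the patent), is:

```python
def coc_diameter(depth, focus_dist, focal_len, f_number):
    """Thin-lens circle-of-confusion diameter for a point at `depth`.

    c = (A * f / (d_f - f)) * |d - d_f| / d, with aperture diameter A = f / N.
    All distances are in the same unit (e.g. metres).
    """
    aperture = focal_len / f_number
    return (aperture * focal_len / (focus_dist - focal_len)
            * abs(depth - focus_dist) / depth)
```

A pixel whose depth equals the focus distance has a zero circle of confusion (it is in focus); the diameter grows as the depth moves away from the focus plane, which is what the per-pixel deblurring step exploits.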
According to some embodiments of the invention, before inputting the actual scene image into the corrected cv model, the method further comprises:
calculating the signal-to-noise ratio of the actual scene image, judging whether the signal-to-noise ratio is smaller than a preset signal-to-noise ratio or not, and carrying out filtering processing on the actual scene image when the signal-to-noise ratio is smaller than the preset signal-to-noise ratio;
the signal-to-noise ratio C of the actual scene image is calculated as shown in formula (1):
[Formula (1) appears only as an image in the original publication and is not reproduced here.]
where G is the maximum gray value of a pixel point in the actual scene image; M is the length of the actual scene image; N is the width of the actual scene image; and H(i, j) is the gray value of the pixel point (i, j) in the actual scene image.
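Formula (1) itself appears only as an image in the original publication, so the exact expression is not recoverable from the text. A peak-signal-style ratio built from the quantities the text does name (G, M, N and H(i, j)) might look like the following sketch; treating the mean-squared deviation from the mean gray level as the noise power is an assumption, not the patent's formula.

```python
import math

def snr(image):
    """Hypothetical SNR of an M x N image of gray values H(i, j)."""
    M = len(image)                               # length of the image
    N = len(image[0])                            # width of the image
    G = max(max(row) for row in image)           # maximum gray value
    mean = sum(sum(row) for row in image) / (M * N)
    # Assumed noise power: mean-squared deviation from the mean gray level.
    mse = sum((h - mean) ** 2 for row in image for h in row) / (M * N)
    return 10.0 * math.log10(G * G / mse) if mse else float("inf")
```

Whatever the exact form, the comparison against the preset signal-to-noise ratio only needs a scalar score of this kind.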
The working principle of the scheme is as follows: the signal-to-noise ratio of the actual scene image is calculated and compared with the preset signal-to-noise ratio, and when it is determined to be smaller than the preset signal-to-noise ratio, the actual scene image is filtered.
The beneficial effects of the above scheme: excessive noise in the actual scene image also affects the accuracy of the final detection result, so the signal-to-noise ratio of the actual scene image needs to be calculated. The calculation takes into account the maximum gray value of a pixel point in the actual scene image as well as the length and the width of the image, which makes the calculated signal-to-noise ratio more accurate and the comparison with the preset signal-to-noise ratio more reliable, so that the actual scene image can be filtered when its signal-to-noise ratio is smaller than the preset signal-to-noise ratio.
According to some embodiments of the present invention, after the filtering process is performed on the actual scene image, the method further includes:
calculating a filter coefficient K for the actual scene image, as shown in formula (2):
[Formula (2) appears only as an image in the original publication and is not reproduced here.]
where W is the size of the filtering window; λ is the Laplacian operator; L(i, j) is the weight of the pixel point (i, j) in the actual scene image; the weight is calculated according to the gradient information of the pixel point;
calculating the gray value f (i, j) of the pixel point (i, j) in the actual scene image after filtering processing according to the filtering coefficient K of the actual scene image, as shown in formula (3):
[Formula (3) appears only as an image in the original publication and is not reproduced here.]
calculating the gray value of each pixel point in the actual scene image after filtering;
screening out the pixel points with the gray value larger than a preset gray value to generate a third pixel point set, screening out the pixel points with the gray value smaller than the preset gray value to generate a fourth pixel point set;
carrying out reduction processing on the gray value of each pixel point in the third pixel point set;
and carrying out increasing processing on the gray value of each pixel point in the fourth pixel point set.
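The post-filtering gray-value adjustment described above can be sketched as follows; the preset gray value and the amount by which values are reduced or increased are assumed here, since the scheme specifies only "reduction" and "increase" without giving magnitudes.

```python
def adjust_grays(gray_values, preset=128, step=10):
    """Reduce grays above the preset value, increase grays below it."""
    out = []
    for g in gray_values:
        if g > preset:
            out.append(max(0, g - step))    # third pixel point set: reduce
        elif g < preset:
            out.append(min(255, g + step))  # fourth pixel point set: increase
        else:
            out.append(g)
    return out
```

In practice this would be applied to every pixel of the filtered actual scene image before it is passed to the cv model.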
The working principle and beneficial effects of the scheme: after filtering, the gray value of each pixel point in the actual scene image is calculated; the pixel points whose gray value is larger than the preset gray value form the third pixel point set and have their gray values reduced, while the pixel points whose gray value is smaller than the preset gray value form the fourth pixel point set and have their gray values increased. Adjusting the gray values of the filtered actual scene image in this way can increase the contrast of the actual scene image and thus ensures the accuracy of the final detection.
As shown in fig. 3, an image processing system includes:
the training module 1 is used for acquiring sample scene information and a sample scene image shot in a sample scene, and performing model training according to the sample scene information and the sample scene image to obtain a sample cv model; the configuration parameters of the sample cv model include an identification range of pixels; the identification range comprises a maximum identification pixel and a minimum identification pixel;
the acquisition module 2 is used for acquiring scene information of a scene to be identified and shooting an actual scene image of the scene to be identified;
the correction module 3 is configured to compare scene information of the scene to be identified with sample scene information to obtain a comparison result, and adjust an identification range of pixels of the sample cv model according to the comparison result to obtain a corrected cv model;
and the identification module 4 is used for inputting the actual scene image into the corrected cv model and outputting an identification result of the actual scene image.
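The four modules can be sketched as a minimal pipeline. The classes, the scene-information fields, and the concrete adjustment rule (doubling or halving the pixel identification range) are illustrative assumptions, not the patent's implementation; the patent specifies only the direction of the adjustment.

```python
from dataclasses import dataclass, replace

@dataclass
class SceneInfo:
    light_intensity: float
    camera_angle: float
    camera_height: float

@dataclass
class CvModel:
    min_pixel: int   # minimum identification pixel
    max_pixel: int   # maximum identification pixel

def correct_model(sample: CvModel, sample_info: SceneInfo,
                  scene_info: SceneInfo) -> CvModel:
    # Close scene with good light -> enlarge the identification range;
    # distant scene with poor light -> reduce it (claim 4's rule, simplified).
    if (scene_info.light_intensity > sample_info.light_intensity
            and scene_info.camera_angle < sample_info.camera_angle
            and scene_info.camera_height < sample_info.camera_height):
        return replace(sample, min_pixel=sample.min_pixel * 2,
                       max_pixel=sample.max_pixel * 2)
    if (scene_info.light_intensity < sample_info.light_intensity
            and scene_info.camera_angle > sample_info.camera_angle
            and scene_info.camera_height > sample_info.camera_height):
        return replace(sample, min_pixel=sample.min_pixel // 2,
                       max_pixel=sample.max_pixel // 2)
    return sample
```

The factor of two is arbitrary; any monotone enlargement or reduction of the range would fit the text.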
The working principle of the scheme is as follows: the training module acquires sample scene information and sample scene images shot in a sample scene and performs model training to obtain a sample cv model whose configuration parameters include an identification range of pixels (a maximum identification pixel and a minimum identification pixel); the acquisition module acquires the scene information of the scene to be identified and shoots an actual scene image of it; the correction module compares the scene information of the scene to be identified with the sample scene information to obtain a comparison result and adjusts the identification range of the pixels of the sample cv model according to the comparison result to obtain a corrected cv model; the identification module inputs the actual scene image into the corrected cv model and outputs the identification result of the actual scene image. Model training according to the sample scene information and the sample scene images includes: performing image analysis on a plurality of sample scene images to obtain the foreground image of each sample scene image; segmenting the foreground image from its sample scene image, leaving a blank frame in that sample scene image; splicing the foreground images into the blank frames of different sample scene images to obtain a plurality of spliced images; and performing model training according to the sample scene images, the spliced images, and the sample scene information to obtain the cv model.
The beneficial effects of the above scheme: comparing the scene information of the scene to be identified with the sample scene information and adjusting the identification range of the pixels of the sample cv model according to the comparison result increases the adjustability of the configuration parameters of the cv model, improves the accuracy of its identification results, and improves its practicability.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. An image processing method, comprising:
obtaining sample scene information and a sample scene image shot in a sample scene, and performing model training according to the sample scene information and the sample scene image to obtain a sample cv model; the configuration parameters of the sample cv model include an identification range of pixels; the identification range comprises a maximum identification pixel and a minimum identification pixel;
acquiring scene information of a scene to be identified, and shooting an actual scene image of the scene to be identified;
comparing the scene information of the scene to be identified with the sample scene information to obtain a comparison result, and adjusting the identification range of the pixels of the sample cv model according to the comparison result to obtain a corrected cv model;
and inputting the actual scene image into the corrected cv model, and outputting an identification result of the actual scene image.
2. The image processing method of claim 1, wherein the sample scene information comprises light intensity of a sample scene, an angle and a height of a camera when the sample scene image is captured.
3. The image processing method according to claim 1, further comprising, after outputting the recognition result of the actual scene image:
and analyzing the recognition result, and sending out an alarm prompt when determining that the corresponding target area in the actual scene image is abnormal.
4. The image processing method according to claim 1, wherein the comparing scene information of the scene to be recognized with sample scene information to obtain a comparison result, and the adjusting the recognition range of the pixels of the sample cv model according to the comparison result comprises:
according to the comparison result, when it is determined that the light intensity of the scene to be identified is greater than that of the sample scene, and the angle and the height of the camera shooting the scene to be identified are respectively smaller than those of the camera shooting the sample scene, determining that the scene to be identified is a close scene with good light relative to the sample scene, and enlarging the maximum identification pixel and the minimum identification pixel of the sample cv model;
according to the comparison result, when it is determined that the light intensity of the scene to be identified is smaller than that of the sample scene, and the angle and the height of the camera shooting the scene to be identified are respectively larger than those of the camera shooting the sample scene, determining that the scene to be identified is a distant scene with poor light relative to the sample scene, and reducing the maximum identification pixel and the minimum identification pixel of the sample cv model.
5. The image processing method according to claim 1, further comprising, after outputting the recognition result of the actual scene image:
the recognition result comprises sub-recognition results of a plurality of regions to be recognized which are included in the actual scene image; the method comprises the steps of obtaining the areas of a plurality of regions to be identified, comparing the areas with a preset area, determining the regions to be identified which are smaller than or equal to the preset area as a first target region, and determining the regions to be identified which are larger than the preset area as a second target region;
determining a first number of recognition errors according to the sub-recognition result of the first target area, and performing enlargement processing on the minimum recognition range of the corrected cv model when the first number is determined to be larger than a preset number;
and determining a second number of recognition errors according to the sub-recognition result of the second target area, and performing reduction processing on the maximum recognition range of the corrected cv model when the second number is determined to be larger than a preset number.
6. The image processing method according to claim 1, wherein the configuration parameters of the sample cv model further include at least one of a start time, an end time, a detection period, an alarm period, an algorithm threshold, a detection region setting.
7. The image processing method according to claim 1, further comprising, before inputting the actual scene image into the corrected cv model:
acquiring the size of the actual scene image, judging whether the size is the same as a preset size or not, and carrying out normalization processing on the size of the actual scene image when the size is determined to be different from the preset size;
acquiring the absolute value of the pixel difference between each pixel point and its adjacent pixel points in the normalized actual scene image to obtain a plurality of absolute pixel difference values, screening out the minimum absolute pixel difference value, multiplying the minimum absolute pixel difference value by a preset smoothing coefficient to obtain a smoothing pixel value, and adding the smoothing pixel value to the pixel value of each pixel point in the actual scene image to obtain the smoothed actual scene image; an adjacent pixel point is any pixel point within a preset distance range;
carrying out image graying on the smoothed actual scene image to obtain a grayscale image;
acquiring a first gradient value of each pixel point in the grayscale image in the horizontal direction;
acquiring a second gradient value of each pixel point in the grayscale image in the vertical direction;
calculating the gradient magnitude of each pixel point according to its first gradient value and second gradient value, calculating an average magnitude according to the gradient magnitudes of the pixel points, screening out the pixel points whose gradient magnitude is larger than the average magnitude, and generating a first pixel point set; the pixel points in the first pixel point set are edge pixel points in the grayscale image;
acquiring the gray value of each pixel point in the grayscale image, screening out the pixel points whose gray value is larger than a preset gray value, and generating a second pixel point set;
acquiring the intersection of the first pixel point set and the second pixel point set;
acquiring a union set of the first pixel point set and the second pixel point set;
calculating the sharpness of the grayscale image according to the intersection and the union, and judging whether the sharpness is smaller than a preset sharpness; when it is determined that the sharpness is smaller than the preset sharpness, inputting the grayscale image into a pre-trained depth information acquisition model, outputting the depth value of each pixel point in the grayscale image, calculating the circle-of-confusion diameter of each pixel point according to its depth value, and deblurring the corresponding pixel point in the grayscale image according to its circle-of-confusion diameter.
8. The image processing method according to claim 1, further comprising, before inputting the actual scene image into the corrected cv model:
calculating the signal-to-noise ratio of the actual scene image, judging whether the signal-to-noise ratio is smaller than a preset signal-to-noise ratio or not, and carrying out filtering processing on the actual scene image when the signal-to-noise ratio is smaller than the preset signal-to-noise ratio;
the signal-to-noise ratio C of the actual scene image is calculated as shown in formula (1):
[Formula (1) appears only as an image in the original publication and is not reproduced here.]
where G is the maximum gray value of a pixel point in the actual scene image; M is the length of the actual scene image; N is the width of the actual scene image; and H(i, j) is the gray value of the pixel point (i, j) in the actual scene image.
9. The image processing method according to claim 8, further comprising, after the filtering process is performed on the actual scene image:
calculating a filter coefficient K for the actual scene image, as shown in formula (2):
[Formula (2) appears only as an image in the original publication and is not reproduced here.]
where W is the size of the filtering window; λ is the Laplacian operator; L(i, j) is the weight of the pixel point (i, j) in the actual scene image; the weight is calculated according to the gradient information of the pixel point;
calculating the gray value f (i, j) of the pixel point (i, j) in the actual scene image after filtering processing according to the filtering coefficient K of the actual scene image, as shown in formula (3):
[Formula (3) appears only as an image in the original publication and is not reproduced here.]
calculating the gray value of each pixel point in the actual scene image after filtering;
screening out the pixel points with the gray value larger than a preset gray value to generate a third pixel point set, screening out the pixel points with the gray value smaller than the preset gray value to generate a fourth pixel point set;
carrying out reduction processing on the gray value of each pixel point in the third pixel point set;
and carrying out increasing processing on the gray value of each pixel point in the fourth pixel point set.
10. An image processing system, comprising:
the training module is used for acquiring sample scene information and a sample scene image shot in a sample scene, and performing model training according to the sample scene information and the sample scene image to obtain a sample cv model; the configuration parameters of the sample cv model include an identification range of pixels; the identification range comprises a maximum identification pixel and a minimum identification pixel;
the system comprises an acquisition module, a recognition module and a recognition module, wherein the acquisition module is used for acquiring scene information of a scene to be recognized and shooting an actual scene image of the scene to be recognized;
the correction module is used for comparing the scene information of the scene to be identified with the sample scene information to obtain a comparison result, and adjusting the identification range of the pixels of the sample cv model according to the comparison result to obtain a corrected cv model;
and the identification module is used for inputting the actual scene image into the corrected cv model and outputting an identification result of the actual scene image.
CN202110641215.1A 2021-06-09 2021-06-09 Image processing method and system Active CN113344878B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110641215.1A CN113344878B (en) 2021-06-09 2021-06-09 Image processing method and system

Publications (2)

Publication Number Publication Date
CN113344878A true CN113344878A (en) 2021-09-03
CN113344878B CN113344878B (en) 2022-03-18

Family

ID=77475650


Country Status (1)

Country Link
CN (1) CN113344878B (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070153308A1 (en) * 2005-11-14 2007-07-05 Reuven Zemach Apparatus and Method for Reducing Ink/ Toner Consumption of Color Printers
CN102289812A (en) * 2011-08-26 2011-12-21 上海交通大学 Object segmentation method based on priori shape and CV (Computer Vision) model
CN103425986A (en) * 2013-08-31 2013-12-04 西安电子科技大学 Breast lump image feature extraction method based on edge neighborhood weighing
CN104599238A (en) * 2013-10-30 2015-05-06 腾讯科技(北京)有限公司 Image processing method and device
US20160375830A1 (en) * 2002-08-21 2016-12-29 Magna Electronics Inc. Multi-camera vision system for a vehicle
CN107016391A (en) * 2017-04-14 2017-08-04 中国科学院合肥物质科学研究院 A kind of complex scene workpiece identification method
CN107103312A (en) * 2017-06-07 2017-08-29 深圳天珑无线科技有限公司 A kind of image processing method and device
CN107403431A (en) * 2016-05-19 2017-11-28 深圳市华因康高通量生物技术研究院 Automatic identification sample shot region method and system
CN107578039A (en) * 2017-10-08 2018-01-12 王奕博 Writing profile comparison method based on digital image processing techniques
US20190156571A1 (en) * 2013-03-14 2019-05-23 Imagination Technologies Limited Rendering in computer graphics systems
CN109934766A (en) * 2019-03-06 2019-06-25 北京市商汤科技开发有限公司 A kind of image processing method and device
CN110222629A (en) * 2019-06-03 2019-09-10 中冶赛迪重庆信息技术有限公司 Bale No. recognition methods and Bale No. identifying system under a kind of steel scene
CN110390033A (en) * 2019-07-25 2019-10-29 腾讯科技(深圳)有限公司 Training method, device, electronic equipment and the storage medium of image classification model
CN111256596A (en) * 2020-02-21 2020-06-09 北京容联易通信息技术有限公司 Size measuring method and device based on CV technology, computer equipment and medium
CN111935509A (en) * 2020-10-09 2020-11-13 腾讯科技(深圳)有限公司 Multimedia data playing method, related device, equipment and storage medium
CN112434642A (en) * 2020-12-07 2021-03-02 北京航空航天大学 Sea-land segmentation method suitable for processing large-scene optical remote sensing data


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIU Jianlei et al.: "Magnetic resonance image segmentation combining probability density functions and active contour models", Optics and Precision Engineering *
LI Xuanping et al.: "Medical image segmentation with fuzzy-clustering cooperative regional active contour models", Chinese Journal of Scientific Instrument *
XIE Zhinan et al.: "Improved Chan-Vese model for tumor segmentation in CT images of liver cancer ablation", Laser & Optoelectronics Progress *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant