CN113362244B - Image processing method based on priority and data use plan - Google Patents

Image processing method based on priority and data use plan Download PDF

Info

Publication number
CN113362244B
CN113362244B CN202110621790.5A CN202110621790A CN113362244B CN 113362244 B CN113362244 B CN 113362244B CN 202110621790 A CN202110621790 A CN 202110621790A CN 113362244 B CN113362244 B CN 113362244B
Authority
CN
China
Prior art keywords
image
priority
channel
image processing
processing method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110621790.5A
Other languages
Chinese (zh)
Other versions
CN113362244A (en
Inventor
孙桂萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zibo Vocational Institute
Original Assignee
Zibo Vocational Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zibo Vocational Institute filed Critical Zibo Vocational Institute
Priority to CN202110621790.5A priority Critical patent/CN113362244B/en
Publication of CN113362244A publication Critical patent/CN113362244A/en
Application granted granted Critical
Publication of CN113362244B publication Critical patent/CN113362244B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G06T5/90
    • G06T5/70
    • G06T5/80
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20032Median filtering

Abstract

The invention provides an image processing method based on priority and a data use plan. According to the technical scheme, firstly, a G, B channel of an image is corrected based on a gray compensation algorithm, and then an R channel is corrected by adopting an extinction coefficient inversion combined gray compensation method, so that the image is subjected to homogenization treatment under the same condition, and the influence of ambient light on the image is relieved. On the basis, the contour information of the object is calibrated based on Opencv, drawing and forming are carried out, then a plurality of adjacent frames of images are extracted, and the contour line is corrected by adopting an averaging method, so that the contour information of the object is obtained. In the preferred technical scheme, the color correction or offset correction can be further carried out on the original image; and for the pixel points in the object range, noise reduction can be performed by utilizing a median filtering method so as to facilitate subsequent feature extraction. By applying the method, the image is fully stable on the pixel level, the calibration of the object is more accurate and effective, and a foundation is laid for the subsequent accurate extraction of the characteristic information.

Description

Image processing method based on priority and data use plan
Technical Field
The invention relates to the technical field of image processing, in particular to an image processing method based on priority and a data use plan.
Background
Image processing techniques are techniques that utilize computers, cameras, and other digital processing techniques to apply certain operations and processes to an image to extract various information in the image to achieve a particular purpose. The image processing technology has the characteristics of good reproducibility, high precision, wide application range and the like. The method is widely applied to the fields of industrial automation, reading of characters and drawings, medical treatment, traffic, remote sensing image processing and the like. With the continuous development of computer technology and digital technology and the continuous reduction of the cost of image processing equipment, the application of image processing technology in the engineering field will be more and more popular.
In recent years, with the rapid development of agricultural automation, image processing technology is gradually applied to sorting or grading of agricultural products such as fruits, and characteristics such as color, shape, size and the like of the agricultural products are evaluated through image acquisition and processing means; eggs are graded according to external characteristics such as color, weight, shape, size and the like; the tobacco leaves are comprehensively classified according to color, shape, texture, area and the like. The image processing method aiming at product grading at present mainly comprises operations, pixel transformation, geometric transformation, filtering, global optimization and the like. In particular, the geometric transformation method is most widely used, and uses a mathematical modeling method to describe the changes of image position, size, shape and the like. If an image shot in an actual scene is too large or too small, the image needs to be reduced or enlarged. Some geometric distortion may occur if the scene is not in a parallel relationship with the camera during shooting, for example, a square may be shot as a trapezoid, etc. This requires some distortion correction. When matching an object, it is necessary to perform processing such as rotation and translation on an image. When displaying a three-dimensional scene, projection modeling from three dimensions to a two-dimensional plane is required. Although a plurality of application examples exist, the image analysis result obtained by simply relying on the method still has certain deviation at present, and especially, the image acquisition condition still has obvious influence on the evaluation result.
Disclosure of Invention
The invention aims to provide an image processing method based on priority and a data use plan to solve the technical problem that the conventional image processing method has large deviation in object identification and feature extraction.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
an image processing method based on priority and data use plan, comprising the steps of:
1) Correcting a G, B channel of the image by using the formula 1;
Figure BDA0003098745010000021
in formula 1, J (x, λ) is the corrected G, B channel luminance; i (x, λ) is the G, B channel luminance before correction; t is t BG (x) Homogenizing the gray scale of G, B channel; b (λ) is an ambient light reflection component;
2) Correcting the R channel of the image by using the formula 2;
Figure BDA0003098745010000022
in formula 2, J (x, λ) is the corrected R channel luminance; i (x, lambda) is the R channel brightness before correction; t is t BG (X) is the gray level homogeneity of G, B channel; b (λ) is an ambient light reflection component; alpha is the scattering coefficient of ambient light; r is an aerosol extinction coefficient;
3) Reading an image to Mat, converting the Mat into a gray image, and binarizing; finding the outline of the detected object by using a cv2.FindContours () function in an Opencv-Python interface; drawing contours on the image through cv2. Drawcontigous in OpenCV;
4) And (3) extracting a plurality of adjacent frame images, respectively executing the step 3), superposing the contour lines of the frame images, repeatedly extracting the images when the deviation value is greater than a preset value, and taking the mean value to obtain the contour range of the object when the deviation value is not greater than the preset value.
Preferably, the method further comprises the following step 5): and constructing an energy functional for the part in the object contour range, and simultaneously constructing the energy functional of the active contour model based on the mixed region by using the local gray fitting term of the original image and the global gray fitting term of the enhanced image.
Preferably, before step 1) is performed, an image acquisition inclination angle and an object motion direction are acquired, a rotation matrix and a shift matrix of the image are respectively calculated, and coordinates are corrected according to the size of the center shift.
Preferably, before step 1) is executed, relative position coordinates of the image capturing device and the photographic subject are acquired, and the pixel points are corrected according to the relative position coordinates and preset perspective parameters of the angle of view.
Preferably, the function prototype of said cv2.FindContours () is cv2.FindContours (image, mode, method [, constants [, offset ] ]).
Preferably, the cv2. Drawcocontrours function prototype is cv2. Drawcocontrours (image, curves, color [, clickness [, lineType [, hierarchy [, maxLevel [, offset ] ] ]).
Preferably, two image acquisition devices are adopted, images obtained by the two image acquisition devices are correlated by time parameters, a plurality of mapping relations of the correlated images are established, monitoring points are marked in the mapped images, the motion tracks of the monitoring points on the two images are respectively determined, and the offset of the two motion tracks is calculated.
Preferably, a filtering window containing odd number of pixel points and coordinates thereof are determined for the pixel points in the object contour range, the pixel values in the filtering window are sorted according to the gray scale, and the number of bits is taken to replace the pixel value in the center of the original window.
The invention provides an image processing method based on priority and a data use plan. According to the technical scheme, firstly, a G, B channel of an image is corrected based on a gray compensation algorithm, and then an R channel is corrected by adopting an extinction coefficient inversion combined gray compensation method, so that the image is subjected to homogenization treatment under the same condition, and the influence of ambient light on the image is relieved. On the basis, the contour information of the object is calibrated based on Opencv, drawing and forming are carried out, then a plurality of adjacent frames of images are extracted, and the contour line is corrected by adopting an averaging method, so that the contour information of the object is obtained. In the preferred technical scheme, the color correction or offset correction can be further carried out on the original image; and for the pixel points in the object range, noise reduction can be performed by utilizing a median filtering method so as to facilitate subsequent feature extraction. By applying the method, the image is fully stable on the pixel level, the calibration of the object is more accurate and effective, and a foundation is laid for the subsequent accurate extraction of the characteristic information.
Drawings
Fig. 1 is a flow chart of the method of the present invention.
Detailed Description
Hereinafter, specific embodiments of the present invention will be described in detail. Well-known structures or functions may not be described in detail in the following embodiments in order to avoid unnecessarily obscuring the details. Approximating language, as used herein in the following examples, may be applied to identify quantitative representations that could permissibly vary in number without resulting in a change in the basic function. Unless defined otherwise, technical and scientific terms used in the following examples have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
Example 1
The image processing method based on the priority and the data use plan, as shown in fig. 1, comprises the following steps:
1) Correcting a G, B channel of the image by using formula 1;
Figure BDA0003098745010000041
in formula 1, J (x, λ) is the corrected G, B channel luminance; i (x, λ) is the G, B channel luminance before correction; t is t BG (x) Homogenizing the gray scale of G, B channel; b (λ) is an ambient light reflection component;
2) Correcting an R channel of the image by using the formula 2;
Figure BDA0003098745010000042
in formula 2, J (x, λ) is the corrected R channel luminance; i (x, lambda) is the R channel brightness before correction; t is t BG (X) is the gray level homogeneity of G, B channel; b (λ) is an ambient light reflection component; alpha is the scattering coefficient of ambient light; r is an aerosol extinction coefficient;
3) Reading an image to Mat, converting the Mat into a gray image, and binarizing; finding the outline of the detected object by using a cv2.FindContours () function in an Opencv-Python interface; drawing contours on the image through cv2. Drawcontigous in OpenCV;
4) And (3) extracting a plurality of adjacent frame images, respectively executing the step 3), superposing the contour lines of the frame images, repeatedly extracting the images when the deviation value is greater than a preset value, and taking the mean value to obtain the contour range of the object when the deviation value is not greater than the preset value.
In the technical scheme, step 1) firstly corrects the brightness of the G, B channel from which the ambient light reflection component is removed by using the gray level homogeneity of the G, B channel as a basis; step 2) independently correcting the brightness of the R channel, wherein a scattering coefficient of ambient light and an aerosol extinction coefficient are introduced, so that the influence of environmental factors on the brightness is relieved; step 3) calibrating the contour information of the object by utilizing Opencv; and 4) correcting the contour line by adopting an averaging method according to the adjacent frame images, so that the contour line is positioned more accurately and continuously, and the true contour of the target object is reflected objectively.
Example 2
The image processing method based on the priority and the data use plan, as shown in fig. 1, comprises the following steps:
1) Correcting a G, B channel of the image by using formula 1;
Figure BDA0003098745010000051
in formula 1, J (x, λ) is the corrected G, B channel luminance; i (x, λ) is the G, B channel luminance before correction; t is t BG (x) Homogenizing the gray scale of G, B channel; b (λ) is an ambient light reflection component;
2) Correcting an R channel of the image by using the formula 2;
Figure BDA0003098745010000052
in formula 2, J (x, λ) is the corrected R channel luminance; i (x, lambda) is the R channel brightness before correction; t is t BG (X) is the gray level homogeneity of G, B channel; b (λ) is an ambient light reflection component; alpha is the scattering coefficient of ambient light; r is an aerosol extinction coefficient;
3) Reading the image to Mat, converting the Mat into a gray image, and binarizing; finding the outline of the detected object by using a cv2.FindContours () function in an Opencv-Python interface; drawing contours on the image by cv2.Drawcontours in OpenCV;
4) Extracting a plurality of adjacent frame images, respectively executing the step 3), superposing the contour lines of the frame images, repeatedly extracting the images when the deviation value is greater than a preset value, and taking the mean value to obtain the contour range of the object when the deviation value is not greater than the preset value;
5) And constructing an energy functional for the part in the object contour range, and simultaneously constructing the energy functional of the active contour model based on the mixed region by using the local gray fitting term of the original image and the global gray fitting term of the enhanced image.
Before step 1) is executed, acquiring an image acquisition inclination angle and an object motion direction, respectively calculating a rotation matrix and an offset matrix of an image, and correcting coordinates according to the size of central offset.
Before the step 1) is executed, acquiring the relative position coordinates of the image acquisition equipment and the shooting object, and correcting the pixel points according to the relative position coordinates and the preset perspective parameters of the visual angle.
The function prototype of cv2.FindContours () is cv2.FindContours (image, mode, method [, constants [, offset ] ]).
The cv2.DrawContours function prototype is cv2.DrawContours (image, curves, color [, clickness [, lineType [, hierarchy [, maxLevel [, offset ] ] ]).
The method comprises the steps of adopting two image acquisition devices, correlating images obtained by the two image acquisition devices by using time parameters, establishing a plurality of mapping relations of the correlated images, marking monitoring points in the mapped images, respectively determining motion tracks of the monitoring points on the two images, and calculating the offset of the two motion tracks.
And determining a filtering window containing odd number of pixel points and coordinates thereof for the pixel points in the object contour range, sequencing the pixel values in the filtering window according to the gray scale, and taking the number of the bits to replace the pixel value in the center of the original window.
The embodiments of the present invention have been described in detail, but the description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention. Any modification, equivalent replacement, and improvement made within the scope of the application of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. An image processing method based on priority and data usage plan, characterized by comprising the steps of:
1) Correcting a G, B channel of the image by using the formula 1;
Figure FDA0003098745000000011
in formula 1, J (x, λ) is the corrected G, B channel luminance; i (x, λ) is the G, B channel luminance before correction; t is t BG (x) Is the gray average of G, B channels; b (λ) is an ambient light reflection component;
2) Correcting the R channel of the image by using the formula 2;
Figure FDA0003098745000000012
in formula 2, J (x, λ) is the corrected R channel luminance; i (x, lambda) is the R channel brightness before correction; t is t BG (x) Is the gray average of G, B channels; b (λ) is an ambient light reflection component; alpha is the scattering coefficient of ambient light; r is an aerosol extinction coefficient;
3) Reading the image to Mat, converting the Mat into a gray image, and binarizing; finding the outline of the detected object by using a cv2.FindContours () function in an Opencv-Python interface; drawing contours on the image through cv2. Drawcontigous in OpenCV;
4) And (3) extracting a plurality of adjacent frame images, respectively executing the step 3), superposing the contour lines of the frame images, repeatedly extracting the images when the deviation value is greater than a preset value, and taking the mean value to obtain the contour range of the object when the deviation value is not greater than the preset value.
2. The method for image processing based on priority and data usage plan according to claim 1, further comprising the step of 5): and constructing an energy functional for the part in the object contour range, and simultaneously constructing an energy functional of the active contour model based on the mixed region by using the local gray fitting item of the original image and the global gray fitting item of the enhanced image.
3. The priority and data usage plan based image processing method of claim 1, wherein before performing step 1), an image capturing inclination angle and an object moving direction are acquired, a rotation matrix and a shift matrix of the image are calculated, respectively, and coordinates are corrected according to a magnitude of the center shift.
4. The priority and data usage plan based image processing method according to claim 1, wherein before performing step 1), relative position coordinates of the image capturing device and the photographic subject are acquired, and the pixel points are corrected according to the relative position coordinates and preset perspective parameters of the viewing angle.
5. The priority-and-data-usage-plan-based image processing method according to claim 1, wherein the cv2.Findcontours () function prototype is cv2.Findcontours (image, mode, method [, contexts [, offsets ] ]).
6. The priority-and-data-usage-plan-based image processing method according to claim 1, wherein the cv2.DrawContours function prototype is cv2.DrawContours (image, relationships, relationship Idx, color [, clickness [, lineType [, hierarchy [, maxLevel [, offset ] ] ]).
7. The image processing method based on priority and data use plan according to claim 1, characterized in that two image acquisition devices are adopted, images obtained by the two image acquisition devices are correlated by time parameters, a plurality of mapping relations to the correlated images are established, monitoring points are marked in the mapped images, the motion tracks of the monitoring points on the two images are respectively determined, and the offset of the two motion tracks is calculated.
8. The method of claim 1, wherein a filtering window containing an odd number of pixels and their coordinates are determined for pixels within the contour of the object, and the pixel values in the filtering window are sorted by gray scale, and the number of bits is taken to replace the pixel value in the center of the original window.
CN202110621790.5A 2021-06-03 2021-06-03 Image processing method based on priority and data use plan Active CN113362244B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110621790.5A CN113362244B (en) 2021-06-03 2021-06-03 Image processing method based on priority and data use plan

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110621790.5A CN113362244B (en) 2021-06-03 2021-06-03 Image processing method based on priority and data use plan

Publications (2)

Publication Number Publication Date
CN113362244A CN113362244A (en) 2021-09-07
CN113362244B true CN113362244B (en) 2023-02-24

Family

ID=77531992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110621790.5A Active CN113362244B (en) 2021-06-03 2021-06-03 Image processing method based on priority and data use plan

Country Status (1)

Country Link
CN (1) CN113362244B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102968772A (en) * 2012-12-04 2013-03-13 电子科技大学 Image defogging method based on dark channel information
CN103578084A (en) * 2013-12-09 2014-02-12 西安电子科技大学 Color image enhancement method based on bright channel filtering
CN104081339A (en) * 2012-01-27 2014-10-01 微软公司 Managing data transfers over network connections based on priority and data usage plan
CN106934815A (en) * 2017-02-27 2017-07-07 南京理工大学 Movable contour model image partition method based on Mixed Zone
CN107274414A (en) * 2017-05-27 2017-10-20 西安电子科技大学 Image partition method based on the CV models for improving local message
CN112734656A (en) * 2020-12-24 2021-04-30 中电海康集团有限公司 Microscope image depth of field synthesis method and system based on local contrast weighted average

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104081339A (en) * 2012-01-27 2014-10-01 微软公司 Managing data transfers over network connections based on priority and data usage plan
CN102968772A (en) * 2012-12-04 2013-03-13 电子科技大学 Image defogging method based on dark channel information
CN103578084A (en) * 2013-12-09 2014-02-12 西安电子科技大学 Color image enhancement method based on bright channel filtering
CN106934815A (en) * 2017-02-27 2017-07-07 南京理工大学 Movable contour model image partition method based on Mixed Zone
CN107274414A (en) * 2017-05-27 2017-10-20 西安电子科技大学 Image partition method based on the CV models for improving local message
CN112734656A (en) * 2020-12-24 2021-04-30 中电海康集团有限公司 Microscope image depth of field synthesis method and system based on local contrast weighted average

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"A Spatially Variant White-Patch and Gray-World Method for Color Image Enhancement Driven by Local Contrast";Edoardo Provenzi.etc;《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》;20081031;全文 *

Also Published As

Publication number Publication date
CN113362244A (en) 2021-09-07

Similar Documents

Publication Publication Date Title
Gaiani et al. An advanced pre-processing pipeline to improve automated photogrammetric reconstructions of architectural scenes
CN107507235B (en) Registration method of color image and depth image acquired based on RGB-D equipment
US8842906B2 (en) Body measurement
CN109523551B (en) Method and system for acquiring walking posture of robot
CN110400278B (en) Full-automatic correction method, device and equipment for image color and geometric distortion
CN111739031B (en) Crop canopy segmentation method based on depth information
CN110147162B (en) Fingertip characteristic-based enhanced assembly teaching system and control method thereof
CN113012234B (en) High-precision camera calibration method based on plane transformation
US10945657B2 (en) Automated surface area assessment for dermatologic lesions
CN110110131B (en) Airplane cable support identification and parameter acquisition method based on deep learning and binocular stereo vision
CN110910456B (en) Three-dimensional camera dynamic calibration method based on Harris angular point mutual information matching
CN108470178B (en) Depth map significance detection method combined with depth credibility evaluation factor
CN109242787A (en) It paints in a kind of assessment of middle and primary schools' art input method
CN112016478B (en) Complex scene recognition method and system based on multispectral image fusion
Alemán-Flores et al. Line detection in images showing significant lens distortion and application to distortion correction
CN110533686A (en) Line-scan digital camera line frequency and the whether matched judgment method of speed of moving body and system
US6636627B1 (en) Light source direction estimating method and apparatus
CN114066857A (en) Infrared image quality evaluation method and device, electronic equipment and readable storage medium
CN110763306A (en) Monocular vision-based liquid level measurement system and method
CN112200848A (en) Depth camera vision enhancement method and system under low-illumination weak-contrast complex environment
CN105139432B (en) Infrared DIM-small Target Image emulation mode based on Gauss model
CN115108466A (en) Intelligent positioning method for container spreader
CN113362244B (en) Image processing method based on priority and data use plan
CN112002016A (en) Continuous curved surface reconstruction method, system and device based on binocular vision
Hajjdiab et al. A vision-based approach for nondestructive leaf area estimation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant