CN113362244B - Image processing method based on priority and data use plan - Google Patents
Image processing method based on priority and data use plan
- Publication number
- CN113362244B CN113362244B CN202110621790.5A CN202110621790A CN113362244B CN 113362244 B CN113362244 B CN 113362244B CN 202110621790 A CN202110621790 A CN 202110621790A CN 113362244 B CN113362244 B CN 113362244B
- Authority
- CN
- China
- Prior art keywords
- image
- priority
- channel
- image processing
- processing method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20032—Median filtering
Abstract
The invention provides an image processing method based on priority and a data use plan. In this scheme, the G and B channels of an image are first corrected with a gray-compensation algorithm, and the R channel is then corrected by extinction-coefficient inversion combined with gray compensation, so that the image is homogenized under uniform conditions and the influence of ambient light on the image is mitigated. On this basis, the contour information of the object is calibrated and drawn with OpenCV, several adjacent frames are extracted, and the contour lines are corrected by averaging to obtain the contour information of the object. In preferred schemes, color correction or offset correction can additionally be applied to the original image, and the pixel points within the object range can be denoised by median filtering to facilitate subsequent feature extraction. With this method the image is stabilized at the pixel level, the calibration of the object is more accurate and effective, and a foundation is laid for subsequent accurate extraction of feature information.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to an image processing method based on priority and a data use plan.
Background
Image processing technology uses computers, cameras, and other digital devices to apply operations to an image and extract information from it for a particular purpose. It offers good reproducibility, high precision, and a wide range of applications, and is widely used in industrial automation, reading of characters and drawings, medicine, traffic, remote-sensing image processing, and other fields. As computer and digital technology continue to develop and the cost of image processing equipment continues to fall, the application of image processing technology in the engineering field will become ever more widespread.
In recent years, with the rapid development of agricultural automation, image processing technology has gradually been applied to sorting and grading agricultural products. Fruits are evaluated for color, shape, and size through image acquisition and processing; eggs are graded by external characteristics such as color, weight, shape, and size; tobacco leaves are comprehensively classified by color, shape, texture, and area. Current image processing methods for product grading mainly involve basic operations, pixel transformation, geometric transformation, filtering, and global optimization. Geometric transformation is the most widely used: it describes changes in image position, size, and shape by mathematical modeling. An image shot in a real scene may need to be reduced or enlarged if it is too large or too small. If the scene is not parallel to the camera during shooting, geometric distortion can occur (for example, a square may be imaged as a trapezoid), which requires distortion correction. Matching an object may require rotating and translating the image, and displaying a three-dimensional scene requires projection from three dimensions onto a two-dimensional plane. Despite these many applications, image analysis that relies on these methods alone still shows certain deviations, and image acquisition conditions in particular still have an obvious influence on the evaluation result.
Disclosure of Invention
The invention aims to provide an image processing method based on priority and a data use plan, to solve the technical problem that conventional image processing methods show large deviations in object identification and feature extraction.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
an image processing method based on priority and data use plan, comprising the steps of:
1) Correct the G and B channels of the image using formula 1;
in formula 1, J(x, λ) is the corrected G/B channel luminance; I(x, λ) is the G/B channel luminance before correction; t_BG(x) is the gray-scale homogenization of the G and B channels; B(λ) is the ambient-light reflection component;
2) Correct the R channel of the image using formula 2;
in formula 2, J(x, λ) is the corrected R channel luminance; I(x, λ) is the R channel luminance before correction; t_BG(x) is the gray-scale homogenization of the G and B channels; B(λ) is the ambient-light reflection component; α is the scattering coefficient of ambient light; r is the aerosol extinction coefficient;
3) Read the image into a Mat, convert it to a gray image, and binarize it; find the contour of the detected object with the cv2.findContours() function of the OpenCV-Python interface; draw the contours on the image with cv2.drawContours();
4) Extract several adjacent frame images, perform step 3) on each, and superimpose their contour lines; when the deviation exceeds a preset value, extract images again; when it does not, take the mean to obtain the contour range of the object.
Preferably, the method further comprises step 5): constructing an energy functional for the part within the object contour range, building the energy functional of a mixed-region active contour model from the local gray-level fitting term of the original image and the global gray-level fitting term of the enhanced image.
Preferably, before step 1) is performed, the image acquisition tilt angle and the object motion direction are acquired, the rotation matrix and the offset matrix of the image are calculated respectively, and the coordinates are corrected according to the magnitude of the center offset.
Preferably, before step 1) is performed, the relative position coordinates of the image acquisition device and the photographed object are acquired, and the pixel points are corrected according to the relative position coordinates and the preset perspective parameters of the viewing angle.
Preferably, the function prototype of cv2.findContours() is cv2.findContours(image, mode, method[, contours[, offset]]).
Preferably, the function prototype of cv2.drawContours() is cv2.drawContours(image, contours, contourIdx, color[, thickness[, lineType[, hierarchy[, maxLevel[, offset]]]]]).
Preferably, two image acquisition devices are used; the images they obtain are correlated by time parameter, multiple mapping relations are established for the correlated images, monitoring points are marked in the mapped images, the motion track of the monitoring points on each of the two images is determined, and the offset between the two motion tracks is calculated.
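The track-offset computation at the end of this preferred step can be sketched in a few lines of NumPy. Representing each monitoring-point track as an (N, 2) array of time-aligned positions is an assumption of this sketch, as are the example track values:

```python
import numpy as np

def track_offset(track_a, track_b):
    """Mean offset between two time-aligned monitoring-point tracks.

    track_a, track_b: (N, 2) arrays of (x, y) positions sampled at the
    same time stamps on the two time-correlated image streams.
    """
    a = np.asarray(track_a, dtype=float)
    b = np.asarray(track_b, dtype=float)
    per_frame = b - a              # offset at each time stamp
    return per_frame.mean(axis=0)  # average offset over the track

# Hypothetical tracks: camera B sees the point shifted by (3, -1)
t = np.arange(5)
track_a = np.stack([10 + 2 * t, 20 + t], axis=1)
track_b = track_a + np.array([3, -1])
offset = track_offset(track_a, track_b)
```

A stable offset across the track indicates a fixed misalignment between the two devices; a drifting per-frame offset would instead point at a timing or calibration problem.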
Preferably, for the pixel points within the object contour range, a filtering window containing an odd number of pixel points and their coordinates is determined, the pixel values in the window are sorted by gray level, and the median is taken to replace the pixel value at the center of the original window.
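This is a masked median filter. A pure-NumPy sketch follows; restricting the filter to a contour mask is the detail the preferred step adds over a whole-image call such as cv2.medianBlur, and the noisy test patch is an illustrative assumption:

```python
import numpy as np

def median_filter_region(img, mask, ksize=3):
    """Median-filter only the pixels inside the object contour mask.

    For each masked pixel, the ksize*ksize (odd-count) window is sorted
    by gray level and the center pixel is replaced by the median.
    """
    assert ksize % 2 == 1
    r = ksize // 2
    out = img.copy()
    h, w = img.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            if mask[y, x]:
                window = img[y - r:y + r + 1, x - r:x + r + 1]
                out[y, x] = np.median(window)
    return out

# Hypothetical noisy patch: flat gray with one salt-noise impulse
img = np.full((9, 9), 100, dtype=np.uint8)
img[4, 4] = 255                       # impulse noise
mask = np.ones_like(img, dtype=bool)  # treat whole patch as "inside object"
denoised = median_filter_region(img, mask)
```

The impulse is removed because the median of a window dominated by the flat background ignores the single outlier, which is exactly why median filtering suits the salt-and-pepper noise that would disturb subsequent feature extraction.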
The invention provides an image processing method based on priority and a data use plan. In this scheme, the G and B channels of an image are first corrected with a gray-compensation algorithm, and the R channel is then corrected by extinction-coefficient inversion combined with gray compensation, so that the image is homogenized under uniform conditions and the influence of ambient light on the image is mitigated. On this basis, the contour information of the object is calibrated and drawn with OpenCV, several adjacent frames are extracted, and the contour lines are corrected by averaging to obtain the contour information of the object. In preferred schemes, color correction or offset correction can additionally be applied to the original image, and the pixel points within the object range can be denoised by median filtering to facilitate subsequent feature extraction. With this method the image is stabilized at the pixel level, the calibration of the object is more accurate and effective, and a foundation is laid for subsequent accurate extraction of feature information.
Drawings
Fig. 1 is a flow chart of the method of the present invention.
Detailed Description
Hereinafter, specific embodiments of the present invention will be described in detail. Well-known structures or functions may not be described in detail in the following embodiments in order to avoid unnecessarily obscuring the details. Approximating language, as used herein in the following examples, may be applied to identify quantitative representations that could permissibly vary in number without resulting in a change in the basic function. Unless defined otherwise, technical and scientific terms used in the following examples have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
Example 1
The image processing method based on the priority and the data use plan, as shown in fig. 1, comprises the following steps:
1) Correct the G and B channels of the image using formula 1;
in formula 1, J(x, λ) is the corrected G/B channel luminance; I(x, λ) is the G/B channel luminance before correction; t_BG(x) is the gray-scale homogenization of the G and B channels; B(λ) is the ambient-light reflection component;
2) Correct the R channel of the image using formula 2;
in formula 2, J(x, λ) is the corrected R channel luminance; I(x, λ) is the R channel luminance before correction; t_BG(x) is the gray-scale homogenization of the G and B channels; B(λ) is the ambient-light reflection component; α is the scattering coefficient of ambient light; r is the aerosol extinction coefficient;
3) Read the image into a Mat, convert it to a gray image, and binarize it; find the contour of the detected object with the cv2.findContours() function of the OpenCV-Python interface; draw the contours on the image with cv2.drawContours();
4) Extract several adjacent frame images, perform step 3) on each, and superimpose their contour lines; when the deviation exceeds a preset value, extract images again; when it does not, take the mean to obtain the contour range of the object.
In this scheme, step 1) first corrects the luminance of the G and B channels, using their gray-scale homogeneity as the basis and removing the ambient-light reflection component; step 2) corrects the R channel luminance independently, introducing the ambient-light scattering coefficient and the aerosol extinction coefficient to mitigate the influence of environmental factors on luminance; step 3) calibrates the contour information of the object with OpenCV; step 4) corrects the contour lines by averaging over adjacent frames, so that the contour is located more accurately and continuously and objectively reflects the true contour of the target object.
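The frame-averaging logic of step 4) can be sketched as follows. Representing each frame's contour as an (N, 2) point array with points in corresponding order is an assumption of this sketch, as are the jittered example contours and the deviation threshold:

```python
import numpy as np

def fuse_contours(contours, max_dev=2.0):
    """Fuse contour lines extracted from adjacent frames (step 4).

    contours: list of (N, 2) arrays, one per frame. If any frame's
    contour deviates from the mean by more than max_dev on average,
    return None so the caller re-extracts frames; otherwise the mean
    contour is taken as the object's contour range.
    """
    stack = np.stack([np.asarray(c, dtype=float) for c in contours])
    mean = stack.mean(axis=0)
    # Average point-to-mean distance for each frame's contour
    dev = np.linalg.norm(stack - mean, axis=2).mean(axis=1)
    if np.any(dev > max_dev):
        return None  # deviation exceeds the preset value: repeat extraction
    return mean

# Hypothetical contours: three frames of a square, jittered by 0.5 px
base = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=float)
frames = [base, base + 0.5, base - 0.5]
fused = fuse_contours(frames)
```

Symmetric jitter cancels in the mean, so the fused contour recovers the underlying square; a frame with large deviation would instead trigger re-extraction.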
Example 2
The image processing method based on the priority and the data use plan, as shown in fig. 1, comprises the following steps:
1) Correct the G and B channels of the image using formula 1;
in formula 1, J(x, λ) is the corrected G/B channel luminance; I(x, λ) is the G/B channel luminance before correction; t_BG(x) is the gray-scale homogenization of the G and B channels; B(λ) is the ambient-light reflection component;
2) Correct the R channel of the image using formula 2;
in formula 2, J(x, λ) is the corrected R channel luminance; I(x, λ) is the R channel luminance before correction; t_BG(x) is the gray-scale homogenization of the G and B channels; B(λ) is the ambient-light reflection component; α is the scattering coefficient of ambient light; r is the aerosol extinction coefficient;
3) Read the image into a Mat, convert it to a gray image, and binarize it; find the contour of the detected object with the cv2.findContours() function of the OpenCV-Python interface; draw the contours on the image with cv2.drawContours();
4) Extract several adjacent frame images, perform step 3) on each, and superimpose their contour lines; when the deviation exceeds a preset value, extract images again; when it does not, take the mean to obtain the contour range of the object;
5) Construct an energy functional for the part within the object contour range, building the energy functional of a mixed-region active contour model from the local gray-level fitting term of the original image and the global gray-level fitting term of the enhanced image.
Before step 1) is performed, the image acquisition tilt angle and the object motion direction are acquired, the rotation matrix and the offset matrix of the image are calculated respectively, and the coordinates are corrected according to the magnitude of the center offset.
Before step 1) is performed, the relative position coordinates of the image acquisition device and the photographed object are acquired, and the pixel points are corrected according to the relative position coordinates and the preset perspective parameters of the viewing angle.
The function prototype of cv2.findContours() is cv2.findContours(image, mode, method[, contours[, offset]]).
The function prototype of cv2.drawContours() is cv2.drawContours(image, contours, contourIdx, color[, thickness[, lineType[, hierarchy[, maxLevel[, offset]]]]]).
Two image acquisition devices are used; the images they obtain are correlated by time parameter, multiple mapping relations are established for the correlated images, monitoring points are marked in the mapped images, the motion track of the monitoring points on each of the two images is determined, and the offset between the two motion tracks is calculated.
For the pixel points within the object contour range, a filtering window containing an odd number of pixel points and their coordinates is determined, the pixel values in the window are sorted by gray level, and the median is taken to replace the pixel value at the center of the original window.
The embodiments of the present invention have been described in detail above, but the description merely illustrates preferred embodiments and is not to be construed as limiting the invention. Any modification, equivalent replacement, or improvement made within the scope of application of the present invention shall fall within its protection scope.
Claims (8)
1. An image processing method based on priority and data usage plan, characterized by comprising the steps of:
1) Correct the G and B channels of the image using formula 1;
in formula 1, J(x, λ) is the corrected G/B channel luminance; I(x, λ) is the G/B channel luminance before correction; t_BG(x) is the gray-scale average of the G and B channels; B(λ) is the ambient-light reflection component;
2) Correct the R channel of the image using formula 2;
in formula 2, J(x, λ) is the corrected R channel luminance; I(x, λ) is the R channel luminance before correction; t_BG(x) is the gray-scale average of the G and B channels; B(λ) is the ambient-light reflection component; α is the scattering coefficient of ambient light; r is the aerosol extinction coefficient;
3) Read the image into a Mat, convert it to a gray image, and binarize it; find the contour of the detected object with the cv2.findContours() function of the OpenCV-Python interface; draw the contours on the image with cv2.drawContours();
4) Extract several adjacent frame images, perform step 3) on each, and superimpose their contour lines; when the deviation exceeds a preset value, extract images again; when it does not, take the mean to obtain the contour range of the object.
2. The image processing method based on priority and data usage plan according to claim 1, further comprising step 5): constructing an energy functional for the part within the object contour range, building the energy functional of a mixed-region active contour model from the local gray-level fitting term of the original image and the global gray-level fitting term of the enhanced image.
3. The image processing method based on priority and data usage plan according to claim 1, wherein, before step 1) is performed, the image acquisition tilt angle and the object motion direction are acquired, the rotation matrix and the offset matrix of the image are calculated respectively, and the coordinates are corrected according to the magnitude of the center offset.
4. The image processing method based on priority and data usage plan according to claim 1, wherein, before step 1) is performed, the relative position coordinates of the image acquisition device and the photographed object are acquired, and the pixel points are corrected according to the relative position coordinates and the preset perspective parameters of the viewing angle.
5. The image processing method based on priority and data usage plan according to claim 1, wherein the function prototype of cv2.findContours() is cv2.findContours(image, mode, method[, contours[, offset]]).
6. The image processing method based on priority and data usage plan according to claim 1, wherein the function prototype of cv2.drawContours() is cv2.drawContours(image, contours, contourIdx, color[, thickness[, lineType[, hierarchy[, maxLevel[, offset]]]]]).
7. The image processing method based on priority and data usage plan according to claim 1, wherein two image acquisition devices are used, the images they obtain are correlated by time parameter, multiple mapping relations are established for the correlated images, monitoring points are marked in the mapped images, the motion track of the monitoring points on each of the two images is determined, and the offset between the two motion tracks is calculated.
8. The image processing method based on priority and data usage plan according to claim 1, wherein, for the pixel points within the object contour range, a filtering window containing an odd number of pixel points and their coordinates is determined, the pixel values in the window are sorted by gray level, and the median is taken to replace the pixel value at the center of the original window.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110621790.5A CN113362244B (en) | 2021-06-03 | 2021-06-03 | Image processing method based on priority and data use plan |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110621790.5A CN113362244B (en) | 2021-06-03 | 2021-06-03 | Image processing method based on priority and data use plan |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113362244A CN113362244A (en) | 2021-09-07 |
CN113362244B true CN113362244B (en) | 2023-02-24 |
Family
ID=77531992
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110621790.5A Active CN113362244B (en) | 2021-06-03 | 2021-06-03 | Image processing method based on priority and data use plan |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113362244B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102968772A (en) * | 2012-12-04 | 2013-03-13 | 电子科技大学 | Image defogging method based on dark channel information |
CN103578084A (en) * | 2013-12-09 | 2014-02-12 | 西安电子科技大学 | Color image enhancement method based on bright channel filtering |
CN104081339A (en) * | 2012-01-27 | 2014-10-01 | 微软公司 | Managing data transfers over network connections based on priority and data usage plan |
CN106934815A (en) * | 2017-02-27 | 2017-07-07 | 南京理工大学 | Movable contour model image partition method based on Mixed Zone |
CN107274414A (en) * | 2017-05-27 | 2017-10-20 | 西安电子科技大学 | Image partition method based on the CV models for improving local message |
CN112734656A (en) * | 2020-12-24 | 2021-04-30 | 中电海康集团有限公司 | Microscope image depth of field synthesis method and system based on local contrast weighted average |
- 2021-06-03: application CN202110621790.5A granted as patent CN113362244B (active)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104081339A (en) * | 2012-01-27 | 2014-10-01 | 微软公司 | Managing data transfers over network connections based on priority and data usage plan |
CN102968772A (en) * | 2012-12-04 | 2013-03-13 | 电子科技大学 | Image defogging method based on dark channel information |
CN103578084A (en) * | 2013-12-09 | 2014-02-12 | 西安电子科技大学 | Color image enhancement method based on bright channel filtering |
CN106934815A (en) * | 2017-02-27 | 2017-07-07 | 南京理工大学 | Movable contour model image partition method based on Mixed Zone |
CN107274414A (en) * | 2017-05-27 | 2017-10-20 | 西安电子科技大学 | Image partition method based on the CV models for improving local message |
CN112734656A (en) * | 2020-12-24 | 2021-04-30 | 中电海康集团有限公司 | Microscope image depth of field synthesis method and system based on local contrast weighted average |
Non-Patent Citations (1)
Title |
---|
"A Spatially Variant White-Patch and Gray-World Method for Color Image Enhancement Driven by Local Contrast"; Edoardo Provenzi et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; Oct. 31, 2008; full text * |
Also Published As
Publication number | Publication date |
---|---|
CN113362244A (en) | 2021-09-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106874949B (en) | Movement imaging platform moving target detecting method and system based on infrared image | |
CN107507235B (en) | Registration method of color image and depth image acquired based on RGB-D equipment | |
RU2680765C1 (en) | Automated determination and cutting of non-singular contour of a picture on an image | |
US8842906B2 (en) | Body measurement | |
CN109523551B (en) | Method and system for acquiring walking posture of robot | |
CN110400278B (en) | Full-automatic correction method, device and equipment for image color and geometric distortion | |
CN107169475A (en) | A kind of face three-dimensional point cloud optimized treatment method based on kinect cameras | |
CN110889829A (en) | Monocular distance measurement method based on fisheye lens | |
CN101639947A (en) | Image-based plant three-dimensional shape measurement and reconstruction method and system | |
CN111739031B (en) | Crop canopy segmentation method based on depth information | |
CN108765433A (en) | One kind is for carrying high-precision leafy area measurement method | |
CN113012234B (en) | High-precision camera calibration method based on plane transformation | |
CN110110131B (en) | Airplane cable support identification and parameter acquisition method based on deep learning and binocular stereo vision | |
CN108470178B (en) | Depth map significance detection method combined with depth credibility evaluation factor | |
CN110910456B (en) | Three-dimensional camera dynamic calibration method based on Harris angular point mutual information matching | |
CN109242787A (en) | It paints in a kind of assessment of middle and primary schools' art input method | |
Alemán-Flores et al. | Line detection in images showing significant lens distortion and application to distortion correction | |
US20190053750A1 (en) | Automated surface area assessment for dermatologic lesions | |
CN114066857A (en) | Infrared image quality evaluation method and device, electronic equipment and readable storage medium | |
CN110533686A (en) | Line-scan digital camera line frequency and the whether matched judgment method of speed of moving body and system | |
CN114998571B (en) | Image processing and color detection method based on fixed-size markers | |
CN112200848A (en) | Depth camera vision enhancement method and system under low-illumination weak-contrast complex environment | |
CN105139432B (en) | Infrared DIM-small Target Image emulation mode based on Gauss model | |
CN113362244B (en) | Image processing method based on priority and data use plan | |
CN112002016A (en) | Continuous curved surface reconstruction method, system and device based on binocular vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |