CN113132693A - Color correction method - Google Patents
- Publication number
- CN113132693A (application CN201911424392.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- color
- color block
- target
- standard color
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/002—Diagnosis, testing or measuring for television systems or their details for television cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/02—Diagnosis, testing or measuring for television systems or their details for colour television signals
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention discloses a color correction method comprising the following steps: identifying the bar code image with a bar code identification algorithm to obtain bar code data; obtaining a configuration scheme from the bar code data and, from it, the position coordinates and sizes of each standard color block, each white check color block and the target, and then locating the position and outline of each standard color block, each white check color block and the target object in the original complete detection area image; on this basis, preprocessing each original color image to obtain the image color values of each original standard color block image and each white check color block as the correction color block color values; and obtaining the image color value of the target object. The method solves the color deviation problem that arises when different cameras with unknown parameters acquire images of different target objects; it is widely applicable, simple to operate, computationally light, and can be implemented at low cost on general-purpose hardware and software.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a color correction method.
Background
A camera is an electronic device that acquires images by converting optical image signals into electrical signals, thereby realizing image acquisition and storage. The color of an object seen by the human eye is perceived as the object's true color regardless of the ambient light; this property is called color constancy. Different cameras have different parameters, so images of the same object acquired by different cameras show different color deviations. To achieve accurate color acquisition and identification across cameras with different parameters, a practical color correction method is designed to reduce the color deviation between different cameras when acquiring target object images.
At present, mainstream color correction methods either require the camera's parameter values to be known in advance, or require different cameras to simultaneously acquire near-identical images of the target object, identify them algorithmically, and select enough pixel points at the same positions for analysis. They therefore suffer from limited applicability, difficult operation, long processing time, high resource consumption and similar problems.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, the present invention is directed to a color correction method to solve the above-mentioned problems.
In order to achieve the purpose, the invention provides the following technical scheme:
an existing standard color card for color correction comprises a substrate, a card holding area and a detection area, wherein the card holding area and the detection area are distributed on the substrate; the detection area comprises a product parameter area, a standard color area, a positioning area, a verification area and a sample area; the parameter area is pasted or printed with detection parameters which can be bar codes or codes in other forms; the standard color area comprises a plurality of standard color blocks with different colors; the checking area comprises a plurality of white checking blocks; moreover, each check block is distributed among the standard color blocks in the detection area; the positioning area is used for assisting the image acquisition equipment in carrying out image acquisition and color block positioning on the standard color card; the sample area is provided with a sample positioning quadrilateral or polygonal frame line for placing a target object to be color-collected.
A color correction method comprising the steps of:
step 8, obtaining a complete bar code image according to the outline of the positioned bar code area; then, identifying the bar code image by using a bar code identification algorithm to obtain bar code data;
step 9, obtaining a configuration scheme according to the bar code data in the step 8; according to the configuration scheme, position coordinates and sizes of each standard color block, each white check color block and the target are respectively obtained, and then the positions and the outlines of each standard color block, each white check color block and the target object are positioned in the original complete detection area image;
step 10, obtaining color images of each standard color block, each white check color block and the target object from the original complete detection area image based on the positions and the contours of each standard color block, each white check color block and the target object; then, preprocessing each original color image to obtain image color values of each original standard color block image and each white check color block as correction color block color values; obtaining an image color value of a target object;
step 11, taking the standard color chart shot by the camera A as a reference image, taking the image shot by the camera B and comprising the standard color chart and the target object as a target image, and then correcting the color of the image of the target object collected by the camera B to be consistent with the color of the image of the target object collected by the camera A;
the parameters affecting the color of the image taken by the camera are mainly three, namely gain (C_gain), offset (C_offset) and gamma (C_Gamma); each parameter relates the pixel values of the reference and target images as follows:

gain: P_ref = P_tar × C_gain

offset: P_ref = P_tar + C_offset

gamma: P_ref = 2^bitdepth × (P_tar / 2^bitdepth)^C_Gamma

wherein P_ref and P_tar represent the pixel values of corresponding points of the reference and target images respectively, C_gain, C_offset and C_Gamma represent the gain, offset and gamma parameters of the target camera respectively, and 2^bitdepth represents the total number of gray levels of the image color space; the general image adopts an 8-bit gray scale (256 levels); combining the three parameters gives the mathematical representation of the pixel values of the image acquired by the camera:

P_ref = 2^bitdepth × ((P_tar × C_gain + C_offset) / 2^bitdepth)^C_Gamma
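As an illustrative sketch of the combined gain/offset/gamma model above (the function and variable names are assumptions for illustration, not from the patent):

```python
# Sketch of the combined gain/offset/gamma pixel model described above.
# All names (apply_camera_model, c_gain, ...) are illustrative, not from the patent.

def apply_camera_model(p_tar, c_gain, c_offset, c_gamma, bitdepth=8):
    """Map a target-camera pixel value to the reference camera's scale."""
    levels = 2 ** bitdepth                     # total number of gray levels
    linear = p_tar * c_gain + c_offset         # gain and offset stages
    normalized = max(linear / levels, 0.0)     # normalize before applying gamma
    return levels * normalized ** c_gamma      # gamma stage

# With identity parameters the pixel value is unchanged:
print(apply_camera_model(128, 1.0, 0.0, 1.0))  # -> 128.0
```

With identity parameters {1, 0, 1} the model returns the input pixel unchanged, which is why that triple is a natural initial guess for the fitting step below.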
step 12, using the reference correction color block and the pixel points at the corresponding positions of the target correction color block image, fitting with the Levenberg-Marquardt optimization algorithm to calculate the three parameters gain (C_gain), offset (C_offset) and gamma (C_Gamma) of the target camera relative to the reference camera; the specific steps are as follows:
step (1), firstly defining the error function as follows:

e_i(β) = y_i − f(x_i, β), i = 1, …, m

wherein y_i and x_i are respectively the RGB values of the pixel points at the i-th corresponding position of the reference correction color block image and the target correction color block image; β is the vector composed of the three target camera parameters C_gain, C_offset and C_Gamma, denoted {β_0, β_1, β_2}; the expression of the function f is:

f(x_i, β) = 2^bitdepth × ((x_i × β_0 + β_1) / 2^bitdepth)^β_2
step (2), selecting an initial vector β according to experience, corresponding to {1, 0, 1} or other empirical values; calculating e_i(β) from the RGB values of the pixel points at all m corresponding positions of the reference correction color block image and the target correction color block image, and calculating the sum of squares using the following formula:

S(β) = Σ_{i=1…m} e_i(β)²
step (3), continuously and cyclically calculating the correction step

Δ = (JᵀJ + λI)⁻¹ Jᵀ e

wherein e is the vector formed by the functions e_i, and the matrix J is an m × 3 Jacobian whose i-th row contains the partial derivatives of f(x_i, β) with respect to β_0, β_1 and β_2; the damping coefficient λ initially takes an empirical value λ_0, and from the second iteration onward λ is replaced by λ/v, where v is a number greater than 1; if S(β) does not become smaller after adjusting λ, v is doubled repeatedly and λ re-adjusted until the calculated S(β) becomes smaller; the new vector is then calculated as β_new = β + Δ, and new β values are calculated iteratively until S(β) approaches its minimum, at which point the loop ends and step (4) is entered;
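The Levenberg-Marquardt loop of step (3) can be sketched as follows. This is a minimal NumPy version with a numeric Jacobian and a simplified damping schedule, not the patent's exact procedure; all names are illustrative:

```python
# Hedged sketch of the step-(3) Levenberg-Marquardt fit of the
# gain/offset/gamma model to paired pixel values. Names are illustrative.
import numpy as np

LEVELS = 256  # 2**bitdepth for 8-bit images

def model(x, beta):
    """Reconstructed gain/offset/gamma pixel model."""
    gain, offset, gamma = beta
    return LEVELS * np.clip((x * gain + offset) / LEVELS, 1e-9, None) ** gamma

def fit_lm(x, y, beta0=(1.0, 0.0, 1.0), iters=100, lam=1e-2):
    """Fit beta = (gain, offset, gamma) with a simplified LM loop."""
    beta = np.asarray(beta0, float)
    for _ in range(iters):
        e = y - model(x, beta)                        # residual vector
        J = np.empty((x.size, 3))                     # numeric Jacobian of the model
        for k in range(3):
            d = np.zeros(3); d[k] = 1e-6
            J[:, k] = (model(x, beta + d) - model(x, beta - d)) / 2e-6
        # damped normal equations: (J^T J + lam*I) delta = J^T e
        delta = np.linalg.solve(J.T @ J + lam * np.eye(3), J.T @ e)
        if np.sum((y - model(x, beta + delta)) ** 2) < np.sum(e ** 2):
            beta, lam = beta + delta, lam / 2         # accept step, relax damping
        else:
            lam *= 10                                  # reject step, increase damping
    return beta

# Synthetic check: recover known parameters from noiseless data.
x = np.linspace(10, 250, 50)
true = np.array([1.1, 5.0, 0.9])
beta = fit_lm(x, model(x, true))
print(np.round(beta, 3))
```

A production implementation would more likely use an analytic Jacobian or a library solver; the hand-rolled loop is kept only to mirror the damping logic described in the text.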
step (4), removing noise points: from the RGB values x_i of the pixel points at all corresponding positions of the target correction color block image used in the final iteration of step (3), calculating the mean deviation h as

h = (a / m) × Σ_{i=1…m} |x_i − x_e|

wherein the parameter a is a constant, e.g. 1.5, x_i is the pixel RGB value at each corresponding position of the target correction color block image, and x_e is the mean of those values; then calculating the difference between the RGB values of the pixel points at all corresponding positions of the reference original correction color block image and the target correction color block image, and removing as noise points the pixel points whose difference is larger than 2h;
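The noise-point rejection of step (4) might be sketched as follows, assuming h is the scaled mean absolute deviation of the target values (the exact formula is not fully recoverable from the text, and all names are illustrative):

```python
# Sketch of the step-(4) noise rejection: compute the mean deviation h
# and drop pixel pairs whose reference/target difference exceeds 2h.
# The formula for h is a reconstruction; names are illustrative.
import numpy as np

def remove_noise_points(ref, tar, a=1.5):
    ref, tar = np.asarray(ref, float), np.asarray(tar, float)
    h = a * np.mean(np.abs(tar - tar.mean()))   # scaled mean absolute deviation
    keep = np.abs(ref - tar) <= 2 * h           # pairs within 2h survive
    return ref[keep], tar[keep]

ref = np.array([100., 102., 98., 250.])   # last pair is an outlier
tar = np.array([101., 100., 99., 90.])
r, t = remove_noise_points(ref, tar)
print(len(r))  # -> 3
```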
step (5), re-executing steps (2)–(4) with the pixel RGB values at all corresponding positions of the reference original correction color block image and the target correction color block image after the noise points are removed, until the number of noise points is 0, and then entering step (6);
step (6), using the vector β = {β_0, β_1, β_2} obtained in step (5) as parameters, correcting the color deviation of all pixel RGB values of the target object image with the following formula to obtain the corrected pixel RGB values of the target object image:

P_corrected = 2^bitdepth × ((P_tar × β_0 + β_1) / 2^bitdepth)^β_2
compared with the prior art, the method solves the color deviation problem that arises when different cameras with unknown parameters acquire images of different target objects; it is widely applicable, simple to operate, computationally light, and can be implemented at low cost on general-purpose hardware and software.
Drawings
Fig. 1 is a schematic structural diagram of a conventional standard color chart.
Detailed Description
The technical solution of the present patent will be described in further detail with reference to the following embodiments.
As shown in fig. 1, a conventional standard color card for color calibration includes a substrate, and a card holding area 1 and a detection area 2 distributed on the substrate; the detection area 2 comprises a product parameter area 3, a standard color area 4, a positioning area 5, a verification area 6 and a sample area 7; the parameter area 3 carries pasted or printed detection parameters, which can be bar codes or codes in other forms; the standard color area 4 comprises a plurality of standard color blocks with different colors; the verification area 6 comprises a plurality of white check blocks, and each check block is distributed among the standard color blocks in the detection area; the positioning area 5 is used for assisting the image acquisition equipment in image acquisition and color block positioning of the standard color card; the sample area 7 is provided with a quadrilateral or polygonal sample-positioning frame line for placing the target object whose color is to be collected.
A color correction method comprising the steps of:
step 8, obtaining a complete bar code image according to the outline of the positioned bar code area; then, identifying the bar code image by using a bar code identification algorithm to obtain bar code data;
step 9, obtaining a configuration scheme according to the bar code data in the step 8; according to the configuration scheme, position coordinates and sizes of each standard color block, each white check color block and the target are respectively obtained, and then the positions and the outlines of each standard color block, each white check color block and the target object are positioned in the original complete detection area image;
step 10, obtaining color images of each standard color block, each white check color block and the target object from the original complete detection area image based on the positions and the contours of each standard color block, each white check color block and the target object; then, preprocessing each original color image to obtain image color values of each original standard color block image and each white check color block as correction color block color values; obtaining an image color value of a target object;
step 11, taking the standard color chart shot by the camera A as a reference image, taking the image shot by the camera B and comprising the standard color chart and the target object as a target image, and then correcting the color of the image of the target object collected by the camera B to be consistent with the color of the image of the target object collected by the camera A;
the parameters affecting the color of the image taken by the camera are mainly three, namely gain (C_gain), offset (C_offset) and gamma (C_Gamma); each parameter relates the pixel values of the reference and target images as follows:

gain: P_ref = P_tar × C_gain

offset: P_ref = P_tar + C_offset

gamma: P_ref = 2^bitdepth × (P_tar / 2^bitdepth)^C_Gamma

wherein P_ref and P_tar represent the pixel values of corresponding points of the reference and target images respectively, C_gain, C_offset and C_Gamma represent the gain, offset and gamma parameters of the target camera respectively, and 2^bitdepth represents the total number of gray levels of the image color space; the general image adopts an 8-bit gray scale (256 levels); combining the three parameters gives the mathematical representation of the pixel values of the image acquired by the camera:

P_ref = 2^bitdepth × ((P_tar × C_gain + C_offset) / 2^bitdepth)^C_Gamma
step 12, using the reference correction color block and the pixel points at the corresponding positions of the target correction color block image, fitting with the Levenberg-Marquardt optimization algorithm to calculate the three parameters gain (C_gain), offset (C_offset) and gamma (C_Gamma) of the target camera relative to the reference camera; the specific steps are as follows:
step (1), firstly defining the error function as follows:

e_i(β) = y_i − f(x_i, β), i = 1, …, m

wherein y_i and x_i are respectively the RGB values of the pixel points at the i-th corresponding position of the reference correction color block image and the target correction color block image; β is the vector composed of the three target camera parameters C_gain, C_offset and C_Gamma, denoted {β_0, β_1, β_2}; the expression of the function f is:

f(x_i, β) = 2^bitdepth × ((x_i × β_0 + β_1) / 2^bitdepth)^β_2
step (2), selecting an initial vector β according to experience, corresponding to {1, 0, 1} or other empirical values; calculating e_i(β) from the RGB values of the pixel points at all m corresponding positions of the reference correction color block image and the target correction color block image, and calculating the sum of squares using the following formula:

S(β) = Σ_{i=1…m} e_i(β)²
step (3), continuously and cyclically calculating the correction step

Δ = (JᵀJ + λI)⁻¹ Jᵀ e

wherein e is the vector formed by the functions e_i, and the matrix J is an m × 3 Jacobian whose i-th row contains the partial derivatives of f(x_i, β) with respect to β_0, β_1 and β_2; the damping coefficient λ initially takes an empirical value λ_0, and from the second iteration onward λ is replaced by λ/v, where v is a number greater than 1; if S(β) does not become smaller after adjusting λ, v is doubled repeatedly and λ re-adjusted until the calculated S(β) becomes smaller; the new vector is then calculated as β_new = β + Δ, and new β values are calculated iteratively until S(β) approaches its minimum, at which point the loop ends and step (4) is entered;
step (4), removing noise points: from the RGB values x_i of the pixel points at all corresponding positions of the target correction color block image used in the final iteration of step (3), calculating the mean deviation h as

h = (a / m) × Σ_{i=1…m} |x_i − x_e|

wherein the parameter a is a constant, e.g. 1.5, x_i is the pixel RGB value at each corresponding position of the target correction color block image, and x_e is the mean of those values; then calculating the difference between the RGB values of the pixel points at all corresponding positions of the reference original correction color block image and the target correction color block image, and removing as noise points the pixel points whose difference is larger than 2h;
step (5), re-executing steps (2)–(4) with the pixel RGB values at all corresponding positions of the reference original correction color block image and the target correction color block image after the noise points are removed, until the number of noise points is 0, and then entering step (6);
step (6), using the vector β = {β_0, β_1, β_2} obtained in step (5) as parameters, correcting the color deviation of all pixel RGB values of the target object image with the following formula to obtain the corrected pixel RGB values of the target object image:

P_corrected = 2^bitdepth × ((P_tar × β_0 + β_1) / 2^bitdepth)^β_2
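Step (6) applied to a whole image can be sketched as follows, under the reconstructed model; the function name, parameter values and clipping behavior are assumptions for illustration:

```python
# Illustrative sketch of step (6): apply fitted gain/offset/gamma
# parameters to every pixel of the target object image.
import numpy as np

def correct_image(img, gain, offset, gamma, bitdepth=8):
    levels = 2 ** bitdepth
    x = img.astype(float)
    y = levels * np.clip((x * gain + offset) / levels, 0, None) ** gamma
    return np.clip(y, 0, levels - 1).astype(np.uint8)  # keep valid pixel range

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
out = correct_image(img, gain=1.0, offset=0.0, gamma=1.0)
print(out.tolist())  # identity parameters -> [[0, 64], [128, 255]]
```

In practice the same function would be applied per RGB channel, possibly with a separate parameter triple for each channel.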
although the preferred embodiments of the present patent have been described in detail, the present patent is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present patent within the knowledge of those skilled in the art.
Claims (2)
1. A color correction method, comprising the steps of:
step 1, placing a target object in a sample area on a standard color card, and respectively acquiring the standard color card and a complete color image comprising the standard color card and the target object by using two different cameras;
step 2, the original complete image is a color image; converting the colored original complete image into a binary image, wherein the binary image is a black-and-white image;
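Step 2 does not specify a binarization algorithm; one common choice is Otsu's threshold, sketched here with NumPy only (all names are illustrative, not from the patent):

```python
# A minimal binarization sketch for step 2 using Otsu's method: pick the
# threshold that maximizes between-class variance of the gray histogram.
import numpy as np

def otsu_threshold(gray):
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum = np.cumsum(hist)                       # cumulative pixel counts
    cum_mean = np.cumsum(hist * np.arange(256))  # cumulative intensity sums
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum[t - 1], total - cum[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t - 1] / w0
        m1 = (cum_mean[255] - cum_mean[t - 1]) / w1
        var = w0 * w1 * (m0 - m1) ** 2          # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

gray = np.array([[10, 12, 200], [11, 201, 199]], dtype=np.uint8)
t = otsu_threshold(gray)
binary = (gray >= t).astype(np.uint8) * 255   # white foreground, black background
print(t, binary.tolist())
```

A library implementation such as OpenCV's thresholding would normally be used instead; the hand-rolled version just makes the criterion explicit.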
step 3, judging whether all four positioning marks can be identified in the complete black-and-white image based on their binarization mathematical characteristics; if not, abandoning the original complete color image, returning to step 1, and re-collecting a complete color image comprising the standard color card and the target object; if so, executing step 4;
step 4, acquiring the identified mark points of each positioning identifier by using a detection analysis module of the detection equipment, and sequentially connecting the mark points of each positioning identifier to form a quadrilateral profile, wherein the internal area of the quadrilateral profile is the candidate standard color card; thereby obtaining the shape and the size of the candidate standard color card detection area;
step 5, reading the real shape and the real size of a real detection area of a standard color card with the same pre-stored specification; then judging whether the deviation of the candidate standard color card detection area and the real standard color card detection area in shape and size is within a design threshold value; if not, returning to the step 1, and re-acquiring a complete color image comprising the standard color card and the target object; if so, indicating that the shapes and the sizes of the candidate standard color card detection area and the real standard color card detection area are very approximate, and then executing the step 6;
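The deviation test of step 5 could be sketched as a relative-tolerance comparison; the patent does not specify the criterion, so the threshold value and all names below are assumptions:

```python
# Sketch of the step-5 tolerance check: compare the candidate detection
# area's size against the stored true size within a design threshold.
# The relative-deviation criterion and names are assumptions, not from the patent.

def within_tolerance(candidate_wh, true_wh, rel_threshold=0.05):
    """True if width and height each deviate by less than rel_threshold."""
    (cw, ch), (tw, th) = candidate_wh, true_wh
    return abs(cw - tw) / tw < rel_threshold and abs(ch - th) / th < rel_threshold

print(within_tolerance((198, 99), (200, 100)))   # 1% deviation -> True
print(within_tolerance((150, 100), (200, 100)))  # 25% deviation -> False
```

A shape check could use the same pattern on, e.g., the quadrilateral's corner angles or aspect ratio.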
step 6, according to the candidate standard color card detection area, an original complete detection area image is segmented from the original complete color image; wherein, the original complete detection area image is a color image;
step 7, reading the position coordinates and the size of the bar code area of the pre-stored standard color card with the same specification; then, accurately positioning the outline of the bar code area in the original complete detection area image based on the position coordinate of the bar code area and the size of the bar code area;
step 8, obtaining a complete bar code image according to the outline of the positioned bar code area; then, identifying the bar code image by using a bar code identification algorithm to obtain bar code data;
step 9, obtaining a configuration scheme according to the bar code data in the step 8; according to the configuration scheme, position coordinates and sizes of each standard color block, each white check color block and the target are respectively obtained, and then the positions and the outlines of each standard color block, each white check color block and the target object are positioned in the original complete detection area image;
step 10, obtaining color images of each standard color block, each white check color block and the target object from the original complete detection area image based on the positions and the contours of each standard color block, each white check color block and the target object; then, preprocessing each original color image to obtain image color values of each original standard color block image and each white check color block as correction color block color values; obtaining an image color value of a target object;
step 11, taking the standard color chart shot by the camera A as a reference image, taking the image shot by the camera B and comprising the standard color chart and the target object as a target image, and then correcting the color of the image of the target object collected by the camera B to be consistent with the color of the image of the target object collected by the camera A;
the parameters affecting the color of the image taken by the camera are mainly three, namely gain (C_gain), offset (C_offset) and gamma (C_Gamma); each parameter relates the pixel values of the reference and target images as follows:

gain: P_ref = P_tar × C_gain

offset: P_ref = P_tar + C_offset

gamma: P_ref = 2^bitdepth × (P_tar / 2^bitdepth)^C_Gamma

wherein P_ref and P_tar represent the pixel values of corresponding points of the reference and target images respectively, C_gain, C_offset and C_Gamma represent the gain, offset and gamma parameters of the target camera respectively, and 2^bitdepth represents the total number of gray levels of the image color space; the general image adopts an 8-bit gray scale;

combining the three parameters, the mathematical representation of the pixel values of the image acquired by the camera is:

P_ref = 2^bitdepth × ((P_tar × C_gain + C_offset) / 2^bitdepth)^C_Gamma
and step 12, performing fitting using the reference correction color block and the pixel points at the corresponding positions of the target correction color block image to calculate the parameters of the target camera relative to the reference camera.
2. The method of claim 1, wherein in step 12 the three parameters gain (C_gain), offset (C_offset) and gamma (C_Gamma) of the target camera relative to the reference camera are calculated by fitting with the Levenberg-Marquardt optimization algorithm, comprising the following specific steps:
step (1), firstly defining the error function as follows:

e_i(β) = y_i − f(x_i, β), i = 1, …, m

wherein y_i and x_i are respectively the RGB values of the pixel points at the i-th corresponding position of the reference correction color block image and the target correction color block image; β is the vector composed of the three target camera parameters C_gain, C_offset and C_Gamma, denoted {β_0, β_1, β_2}; the expression of the function f is:

f(x_i, β) = 2^bitdepth × ((x_i × β_0 + β_1) / 2^bitdepth)^β_2
step (2), selecting an initial vector β according to experience, corresponding to {1, 0, 1} or other empirical values; calculating e_i(β) from the RGB values of the pixel points at all m corresponding positions of the reference correction color block image and the target correction color block image, and calculating the sum of squares using the following formula:

S(β) = Σ_{i=1…m} e_i(β)²
step (3), continuously and cyclically calculating the correction step

Δ = (JᵀJ + λI)⁻¹ Jᵀ e

wherein e is the vector formed by the functions e_i, and the matrix J is an m × 3 Jacobian whose i-th row contains the partial derivatives of f(x_i, β) with respect to β_0, β_1 and β_2; the damping coefficient λ initially takes an empirical value λ_0, and from the second iteration onward λ is replaced by λ/v, where v is a number greater than 1; if S(β) does not become smaller after adjusting λ, v is doubled repeatedly and λ re-adjusted until the calculated S(β) becomes smaller; the new vector is then calculated as β_new = β + Δ, and new β values are calculated iteratively until S(β) approaches its minimum, at which point the loop ends and step (4) is entered;
step (4), removing noise points: from the RGB values x_i of the pixel points at all corresponding positions of the target correction color block image used in the final iteration of step (3), calculating the mean deviation h as

h = (a / m) × Σ_{i=1…m} |x_i − x_e|

wherein the parameter a is a constant, e.g. 1.5, x_i is the pixel RGB value at each corresponding position of the target correction color block image, and x_e is the mean of those values; then calculating the difference between the RGB values of the pixel points at all corresponding positions of the reference original correction color block image and the target correction color block image, and removing as noise points the pixel points whose difference is larger than 2h;
step (5), re-executing steps (2)–(4) with the pixel RGB values at all corresponding positions of the reference original correction color block image and the target correction color block image after the noise points are removed, until the number of noise points is 0, and then entering step (6);
step (6), using the vector β = {β_0, β_1, β_2} obtained in step (5) as parameters, correcting the color deviation of all pixel RGB values of the target object image with the following formula to obtain the corrected pixel RGB values of the target object image:

P_corrected = 2^bitdepth × ((P_tar × β_0 + β_1) / 2^bitdepth)^β_2
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911424392.3A CN113132693B (en) | 2019-12-31 | 2019-12-31 | Color correction method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911424392.3A CN113132693B (en) | 2019-12-31 | 2019-12-31 | Color correction method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113132693A true CN113132693A (en) | 2021-07-16 |
CN113132693B CN113132693B (en) | 2024-10-01 |
Family
ID=76769843
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911424392.3A Active CN113132693B (en) | 2019-12-31 | 2019-12-31 | Color correction method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113132693B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113392670A (en) * | 2021-07-30 | 2021-09-14 | 新疆金牛能源物联网科技股份有限公司 | Cassette, cassette reading device, device configuration apparatus, and configuration method |
CN115115609A (en) * | 2022-07-18 | 2022-09-27 | 中国农业科学院蔬菜花卉研究所 | Image analysis method and system for plant leaf positive phenotypic characters |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006303783A (en) * | 2005-04-19 | 2006-11-02 | Fuji Photo Film Co Ltd | Image processing method, image processing system, and image processing program |
JP2011259047A (en) * | 2010-06-07 | 2011-12-22 | For-A Co Ltd | Color correction device, color correction method, and video camera system |
WO2017046829A1 (en) * | 2015-09-17 | 2017-03-23 | 株式会社Elan | Color measuring device and color measuring method |
CN109805891A (en) * | 2019-01-08 | 2019-05-28 | 中南大学湘雅医院 | Post-operative recovery state monitoring method, device, system, readable medium and colour atla |
CN110400278A (en) * | 2019-07-30 | 2019-11-01 | 广东工业大学 | A kind of full-automatic bearing calibration, device and the equipment of color of image and geometric distortion |
Also Published As
Publication number | Publication date |
---|---|
CN113132693B (en) | 2024-10-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200364849A1 (en) | Method and device for automatically drawing structural cracks and precisely measuring widths thereof | |
CN111932504B (en) | Edge contour information-based sub-pixel positioning method and device | |
CN108921057B (en) | Convolutional neural network-based prawn form measuring method, medium, terminal equipment and device | |
CN103048331B (en) | Printing defect detection method based on flexible template registration | |
CN110400278B (en) | Full-automatic correction method, device and equipment for image color and geometric distortion | |
CN111223133A (en) | Registration method of heterogeneous images | |
CN111784778A (en) | Binocular camera external parameter calibration method and system based on linear solving and nonlinear optimization | |
CN111145205B (en) | Pig body temperature detection method based on infrared image under multiple pig scenes | |
CN113132693A (en) | Color correction method | |
CN114692991B (en) | Deep learning-based wolfberry yield prediction method and system | |
CN111724354A (en) | Image processing-based method for measuring spike length and small spike number of multiple wheat | |
CN111323125A (en) | Temperature measurement method and device, computer storage medium and electronic equipment | |
CN108510477A (en) | The localization method and device of test paper color lump | |
CN112561986A (en) | Secondary alignment method, device, equipment and storage medium for inspection robot holder | |
CN117893457B (en) | PCB intelligent detection method, device and computer equipment | |
CN111369455B (en) | Highlight object measuring method based on polarization image and machine learning | |
CN112700488A (en) | Living body long blade area analysis method, system and device based on image splicing | |
Tu et al. | 2D in situ method for measuring plant leaf area with camera correction and background color calibration | |
CN117475373A (en) | Tea garden pest and disease damage identification and positioning method and system based on binocular vision | |
CN112215304A (en) | Gray level image matching method and device for geographic image splicing | |
CN111986266A (en) | Photometric stereo light source parameter calibration method | |
CN113628182B (en) | Automatic fish weight estimation method and device, electronic equipment and storage medium | |
CN113449638B (en) | Pig image ideal frame screening method based on machine vision technology | |
CN114170319A (en) | Method and device for adjusting test target | |
CN103279953A (en) | Machine vision calibration system based on LabVIEW platform |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: Room 4121, Floor 4, Main Building, No. 15 Lutian Road, High-tech Development Zone, Changsha, Hunan Province, 410221; Applicant after: CHANGSHA YUNZHIJIAN INFORMATION TECHNOLOGY CO.,LTD. Address before: Room 604, scientific research building, 229 tongzipo West Road, Changsha hi tech Development Zone, Hunan 410205; Applicant before: CHANGSHA YUNZHIJIAN INFORMATION TECHNOLOGY CO.,LTD. |
| GR01 | Patent grant | |