CN117649661B - Carbon nanotube preparation state image processing method


Info

Publication number: CN117649661B
Authority: CN (China)
Prior art keywords: image, characteristic, state, feature, representing
Prior art date: 2024-01-30
Legal status: Active
Application number: CN202410128175.4A
Other languages: Chinese (zh)
Other versions: CN117649661A
Inventors: 赵屹坤, 邓炜, 孙宝强, 吴涛
Current Assignee: Qingdao Chaorui Nano New Material Technology Co., Ltd.
Original Assignee: Qingdao Chaorui Nano New Material Technology Co., Ltd.
Priority date: 2024-01-30
Filing date: 2024-01-30
Application filed by Qingdao Chaorui Nano New Material Technology Co., Ltd.
Publication of CN117649661A: 2024-03-05
Publication of CN117649661B (grant): 2024-04-12


Classifications

    • G06V 20/695 — Microscopic objects (e.g. biological cells or cellular parts): preprocessing, e.g. image segmentation
    • G06V 20/693 — Microscopic objects: acquisition
    • G06V 10/30 — Image preprocessing: noise filtering
    • G06V 10/34 — Image preprocessing: smoothing or thinning of the pattern; morphological operations; skeletonisation
    • G06V 10/42 — Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V 10/56 — Extraction of image or video features relating to colour
    • G06V 10/82 — Image or video recognition or understanding using neural networks


Abstract

The invention discloses a carbon nanotube preparation state image processing method, belonging to the technical field of image processing and comprising the following steps: S1, scanning the surface of a carbon nanotube sample with a scanning electron microscope to obtain a surface image, and denoising the surface image to obtain a feature image; S2, determining the state feature value of each pixel in the feature image to obtain a state feature matrix of the feature image; S3, correcting the feature image according to the state feature matrix to obtain a standard surface image, completing the image processing. The method denoises the scanned surface image of the carbon nanotube sample and, through convolution and feature calculation, extracts a state feature matrix characterizing the pixels of the whole image; it then color-corrects the image according to the state feature matrix, ensuring the clarity of the carbon nanotube sample image and thereby improving the accuracy of observation.

Description

Carbon nanotube preparation state image processing method
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a carbon nanotube preparation state image processing method.
Background
A carbon nanotube is a nanoscale tubular structure formed by carbon atoms arranged in a particular way. It has excellent physical properties such as high strength, high electrical conductivity, and high thermal conductivity, and combining carbon nanotubes with other materials can fully exploit the advantages of both, improving the performance of the resulting material. The dispersion state of the carbon nanotubes is a key factor that directly affects how well they bond with other materials. The dispersion state is usually observed with an optical microscope, a scanning electron microscope, or a transmission electron microscope, so the quality of the collected carbon nanotube images directly determines the observation result; improving image clarity is therefore an urgent problem to be solved.
Disclosure of Invention
To solve the above problems, the invention provides a carbon nanotube preparation state image processing method.
The technical scheme of the invention is as follows: the carbon nanotube preparation state image processing method comprises the following steps:
S1, scanning the surface of a carbon nanotube sample with a scanning electron microscope to obtain a surface image, and denoising the surface image to obtain a feature image;
S2, determining the state feature value of each pixel in the feature image to obtain a state feature matrix of the feature image;
S3, correcting the feature image according to its state feature matrix to obtain a standard surface image, completing the image processing.
Scanning electron microscopy is a common characterization technique that obtains surface topography and composition information by scanning the surface of a sample and detecting the secondary or backscattered electron signals it emits.
Further, S2 comprises the following sub-steps:
S21, determining the color weight of each pixel according to the RGB components of the pixels in the feature image;
S22, constructing a state feature model, inputting the color weight of each pixel into the model, and determining the state feature value of each pixel;
S23, generating the state feature matrix of the feature image from the state feature values of the pixels.
Further, in S21, the color weight q_ij of the pixel in row i, column j of the feature image is calculated by a formula [not reproduced here]; wherein R_ij represents the red component of the pixel in row i, column j of the feature image, G_ij represents its green component, B_ij represents its blue component, R_0 represents the mean red component over all pixels of the feature image, G_0 represents the mean green component, and B_0 represents the mean blue component.
Further, the state feature model comprises a depth convolution module, a global feature module, and an output module connected in sequence.
The beneficial effects of the above further scheme are: in the invention, the depth convolution module contains as many convolution kernels as the feature image has pixels, each convolution kernel extracting the edge features of one pixel of the feature image, and the output module synthesizes and outputs the state feature value of each pixel.
The method compares the red, green, and blue components of each pixel of the feature image with the corresponding component means over all pixels to determine each pixel's color weight. Each pixel and its color weight are then fed into the state feature model: each convolution kernel of the depth convolution module extracts the features of one pixel; the global feature module combines the outputs of all convolution kernels with the color weights of the pixels they process to obtain the global feature; and each output layer of the output module combines the global feature with the RGB values of its corresponding pixel to determine that pixel's state feature value.
Further, the expression of the global feature module is a formula [not reproduced here]; wherein Q represents the output of the global feature module, C_m represents the output of the m-th convolution kernel in the depth convolution module, K_m represents the size of the m-th convolution kernel, q_m represents the color weight of the pixel processed by the m-th convolution kernel, B_m represents the stride of the m-th convolution kernel, M represents the number of convolution kernels in the depth convolution module, σ represents the standard deviation of all convolution kernel outputs, exp(·) represents the exponential function, and q represents the hyper-parameter of the global feature module.
Further, the expression of the output module is a formula [not reproduced here]; wherein Z_n represents the output of the n-th output layer in the output module, Q represents the output of the global feature module, R_n, G_n and B_n represent the red, green and blue components of the pixel corresponding to the n-th output layer, and ln(·) represents the logarithmic function.
Further, in S23, the state feature matrix is generated as follows: the number of pixel rows of the feature image gives the number of rows of the state feature matrix, the number of pixel columns gives its number of columns, and the state feature values of the pixels in each row form the elements of the corresponding matrix row.
Further, S3 comprises the following sub-steps:
S31, determining the red, green, and blue component correction coefficients of each pixel in the feature image according to the state feature matrix of the feature image;
S32, correcting the feature image according to the red, green, and blue component correction coefficients to obtain a standard surface image, completing the image processing.
The mean of the red component correction coefficient and the original red component value of each pixel of the feature image is taken as the new red component value of the corresponding pixel in the standard surface image; the green and blue components are updated in the same way.
Further, in S31, the red component correction coefficient of the pixel in row i, column j of the feature image is calculated by a formula [not reproduced here]; wherein λ represents the eigenvalue of the state feature matrix, R_ij represents the red component of the pixel in row i, column j of the feature image, ⌈·⌉ denotes rounding up, P_0 represents the position of the pixel with the maximum red component in the feature image, P_ij represents the position of the pixel in row i, column j, and dis(·) represents the Euclidean distance function;
in S31, the green component correction coefficient of the pixel in row i, column j of the feature image is calculated by an analogous formula [not reproduced here]; wherein G_ij represents the green component of the pixel in row i, column j, and P_1 represents the position of the pixel with the maximum green component;
in S31, the blue component correction coefficient of the pixel in row i, column j of the feature image is calculated by an analogous formula [not reproduced here]; wherein B_ij represents the blue component of the pixel in row i, column j, and P_2 represents the position of the pixel with the maximum blue component.
The beneficial effects of the invention are as follows: the method denoises the scanned surface image of the carbon nanotube sample and, through convolution and feature calculation, extracts a state feature matrix characterizing the pixels of the whole image; it then color-corrects the image according to the state feature matrix, ensuring the clarity of the carbon nanotube sample image and thereby improving the accuracy of observation.
Drawings
Fig. 1 is a flowchart of the carbon nanotube preparation state image processing method.
Detailed Description
Embodiments of the present invention are further described below with reference to the accompanying drawings.
As shown in Fig. 1, the present invention provides a carbon nanotube preparation state image processing method, comprising the following steps:
S1, scanning the surface of a carbon nanotube sample with a scanning electron microscope to obtain a surface image, and denoising the surface image to obtain a feature image;
S2, determining the state feature value of each pixel in the feature image to obtain a state feature matrix of the feature image;
S3, correcting the feature image according to its state feature matrix to obtain a standard surface image, completing the image processing.
Scanning electron microscopy is a common characterization technique that obtains surface topography and composition information by scanning the surface of a sample and detecting the secondary or backscattered electron signals it emits.
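For concreteness, the following is a minimal Python sketch of the overall S1-S3 flow. The choice of a median filter for the denoising step is an assumption (the patent does not name a specific filter), and the helper names compute_state_feature_matrix and correct_feature_image are hypothetical; both are sketched further below.

    import numpy as np
    import cv2  # OpenCV; input assumed to be an 8-bit BGR SEM image

    def process_surface_image(surface_bgr: np.ndarray) -> np.ndarray:
        # S1: denoise the scanned SEM surface image to obtain the feature image
        feature = cv2.medianBlur(surface_bgr, 3)
        # S2: per-pixel state feature values assembled into the state feature matrix
        state_matrix = compute_state_feature_matrix(feature)  # hypothetical helper, sketched below
        # S3: colour-correct the feature image into the standard surface image
        return correct_feature_image(feature, state_matrix)   # hypothetical helper, sketched below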
In an embodiment of the present invention, S2 comprises the following sub-steps:
S21, determining the color weight of each pixel according to the RGB components of the pixels in the feature image;
S22, constructing a state feature model, inputting the color weight of each pixel into the model, and determining the state feature value of each pixel;
S23, generating the state feature matrix of the feature image from the state feature values of the pixels.
In the embodiment of the invention, in S21, the color weight q_ij of the pixel in row i, column j of the feature image is calculated by a formula [not reproduced here]; wherein R_ij represents the red component of the pixel in row i, column j of the feature image, G_ij represents its green component, B_ij represents its blue component, R_0 represents the mean red component over all pixels of the feature image, G_0 represents the mean green component, and B_0 represents the mean blue component.
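Since the weight formula itself is not reproduced, the sketch below assumes one plausible reading of the description — each pixel's components compared, as ratios, to the global component means and averaged into a single weight; the exact patented expression may differ.

    import numpy as np

    def color_weights(feature_rgb: np.ndarray) -> np.ndarray:
        """Per-pixel colour weight q_ij (illustrative form only)."""
        rgb = feature_rgb.astype(np.float64)
        means = rgb.reshape(-1, 3).mean(axis=0)       # R0, G0, B0
        ratios = rgb / np.maximum(means, 1e-9)        # compare each component to its mean
        return ratios.mean(axis=2)                    # one weight per pixel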
In the embodiment of the invention, the state feature model comprises a depth convolution module, a global feature module, and an output module connected in sequence.
In the invention, the depth convolution module contains as many convolution kernels as the feature image has pixels, each convolution kernel extracting the edge features of one pixel of the feature image. The output module synthesizes and outputs the state feature value of each pixel.
The method compares the red, green, and blue components of each pixel of the feature image with the corresponding component means over all pixels to determine each pixel's color weight. Each pixel and its color weight are then fed into the state feature model: each convolution kernel of the depth convolution module extracts the features of one pixel; the global feature module combines the outputs of all convolution kernels with the color weights of the pixels they process to obtain the global feature; and each output layer of the output module combines the global feature with the RGB values of its corresponding pixel to determine that pixel's state feature value.
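The module formulas are not reproduced in this text, so the skeleton below only mirrors their named ingredients (per-pixel edge responses C_m, their standard deviation σ, the colour weights q_m, and a logarithm of each pixel's RGB values); the exp/log combination is an assumption, not the patented expression, and a fixed 3×3 Laplacian stands in for the per-pixel convolution kernels.

    import numpy as np

    def state_feature_values(feature_rgb: np.ndarray, q: np.ndarray) -> np.ndarray:
        """Depth convolution -> global feature -> output, as a rough skeleton."""
        gray = feature_rgb.astype(np.float64).mean(axis=2)
        # depth convolution module: one edge response per pixel
        # (3x3 Laplacian as a stand-in for the per-pixel kernels C_m)
        pad = np.pad(gray, 1, mode="edge")
        c = pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:] - 4.0 * gray
        sigma = c.std() or 1.0                        # std of all kernel outputs
        # global feature module: combine kernel outputs with their colour weights q_m
        Q = np.mean(np.exp(-np.abs(c) * q / sigma))
        # output module: one output layer per pixel, combining Q with the pixel's RGB
        rgb_sum = feature_rgb.astype(np.float64).sum(axis=2)
        return Q * np.log1p(rgb_sum)                  # state feature value per pixel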
In the embodiment of the invention, the expression of the global feature module is a formula [not reproduced here]; wherein Q represents the output of the global feature module, C_m represents the output of the m-th convolution kernel in the depth convolution module, K_m represents the size of the m-th convolution kernel, q_m represents the color weight of the pixel processed by the m-th convolution kernel, B_m represents the stride of the m-th convolution kernel, M represents the number of convolution kernels in the depth convolution module, σ represents the standard deviation of all convolution kernel outputs, exp(·) represents the exponential function, and q represents the hyper-parameter of the global feature module.
In the embodiment of the invention, the expression of the output module is a formula [not reproduced here]; wherein Z_n represents the output of the n-th output layer in the output module, Q represents the output of the global feature module, R_n, G_n and B_n represent the red, green and blue components of the pixel corresponding to the n-th output layer, and ln(·) represents the logarithmic function.
In the embodiment of the present invention, in S23, the state feature matrix is generated as follows: the number of pixel rows of the feature image gives the number of rows of the state feature matrix, the number of pixel columns gives its number of columns, and the state feature values of the pixels in each row form the elements of the corresponding matrix row.
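Putting S2 together, the matrix assembly is then just a matter of arranging the per-pixel values by pixel row and column; a sketch using the helpers above, with OpenCV's BGR channel order assumed for the input:

    import numpy as np

    def compute_state_feature_matrix(feature_bgr: np.ndarray) -> np.ndarray:
        """S2 end-to-end: one matrix row per pixel row, one column per pixel column."""
        rgb = feature_bgr[..., ::-1].copy()           # BGR -> RGB
        q = color_weights(rgb)                        # S21, sketched above
        return state_feature_values(rgb, q)           # S22/S23: already (rows, cols)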
In an embodiment of the present invention, S3 comprises the following sub-steps:
S31, determining the red, green, and blue component correction coefficients of each pixel in the feature image according to the state feature matrix of the feature image;
S32, correcting the feature image according to the red, green, and blue component correction coefficients to obtain a standard surface image, completing the image processing.
The mean of the red component correction coefficient and the original red component value of each pixel of the feature image is taken as the new red component value of the corresponding pixel in the standard surface image; the green and blue components are updated in the same way.
In the embodiment of the present invention, in S31, the red component correction coefficient of the pixel in row i, column j of the feature image is calculated by a formula [not reproduced here]; wherein λ represents the eigenvalue of the state feature matrix, R_ij represents the red component of the pixel in row i, column j of the feature image, ⌈·⌉ denotes rounding up, P_0 represents the position of the pixel with the maximum red component in the feature image, P_ij represents the position of the pixel in row i, column j, and dis(·) represents the Euclidean distance function;
in S31, the green component correction coefficient of the pixel in row i, column j of the feature image is calculated by an analogous formula [not reproduced here]; wherein G_ij represents the green component of the pixel in row i, column j, and P_1 represents the position of the pixel with the maximum green component;
in S31, the blue component correction coefficient of the pixel in row i, column j of the feature image is calculated by an analogous formula [not reproduced here]; wherein B_ij represents the blue component of the pixel in row i, column j, and P_2 represents the position of the pixel with the maximum blue component.
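A sketch of S3 follows. Two points are assumptions: λ is taken as the largest singular value of the state feature matrix (the text says "eigenvalue", but the matrix need not be square), and since the coefficient formulas are not reproduced, the coefficient below is only built from their named ingredients — λ, the component value, the ceiling function, and the Euclidean distance to the pixel holding that component's maximum.

    import numpy as np

    def correct_feature_image(feature_bgr: np.ndarray, S: np.ndarray) -> np.ndarray:
        """S3 sketch: per-component correction, then S32's average with the original."""
        img = feature_bgr.astype(np.float64)
        lam = np.linalg.svd(S, compute_uv=False)[0]   # stand-in for the eigenvalue lambda
        rows, cols = np.indices(S.shape)
        out = img.copy()
        for ch in range(3):                           # B, G, R planes
            comp = img[..., ch]
            pr, pc = np.unravel_index(np.argmax(comp), comp.shape)
            dist = np.hypot(rows - pr, cols - pc)     # dis(P0/P1/P2, P_ij)
            coeff = np.ceil(lam) * comp / (1.0 + dist)  # illustrative coefficient only
            out[..., ch] = (coeff + comp) / 2.0       # S32: mean of coefficient and original
        return np.clip(out, 0.0, 255.0).astype(np.uint8)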
Those of ordinary skill in the art will recognize that the embodiments described here are intended to help the reader understand the principles of the invention, and that the scope of the invention is not limited to these specific statements and embodiments. Those of ordinary skill in the art can make various other specific modifications and combinations based on the teachings of the present disclosure without departing from its spirit, and such modifications and combinations remain within the scope of the present disclosure.

Claims (4)

1. A carbon nanotube preparation state image processing method, comprising the following steps:
S1, scanning the surface of a carbon nanotube sample with a scanning electron microscope to obtain a surface image, and denoising the surface image to obtain a feature image;
S2, determining the state feature value of each pixel in the feature image to obtain a state feature matrix of the feature image;
S3, correcting the feature image according to its state feature matrix to obtain a standard surface image, completing the image processing;
the step S2 comprises the following substeps:
s21, determining color weights of all pixel points according to RGB components of all pixel points in the feature image;
s22, constructing a state characteristic model, inputting the color weight of each pixel point into the state characteristic model, and determining the state characteristic value of each pixel point;
s23, generating a state feature matrix of the feature image according to the state feature values of the pixel points;
the state feature model comprises a depth convolution module, a global feature module and an output module which are sequentially connected;
the expression of the global feature module is as follows:wherein Q represents the output of the global feature module, C m Representing the output, K, of the mth convolution kernel in the depth convolution module m Represents the size, q, of the mth convolution kernel in the depth convolution module m Representing color weight corresponding to pixel point processed by mth convolution kernel in depth convolution module, B m The step length of the mth convolution kernel in the depth convolution module is represented, M represents the number of convolution kernels of the depth convolution module, sigma represents the standard deviation of all convolution kernel outputs, exp (·) represents an exponential function, and q represents the super-parameter of the global feature module;
the expression of the output module is as follows:wherein Z represents the output of the nth output layer in the output module, Q represents the output of the global feature module, R represents the red component of the pixel point corresponding to the nth output layer, G represents the green component of the pixel point corresponding to the nth output layer, B represents the blue component of the pixel point corresponding to the nth output layer, and ln (·) represents a logarithmic function;
in S23, the specific method for generating the state feature matrix is as follows: the method comprises the steps of taking the number of rows of pixel points of a feature image as the number of rows of a state feature matrix, taking the number of columns of pixel points of the feature image as the number of columns of the state feature matrix, taking the state feature value of each row of pixel points as the element of each row of the state feature matrix, and generating the state feature matrix of the feature image.
2. The carbon nanotube preparation state image processing method according to claim 1, wherein in S21 the color weight q_ij of the pixel in row i, column j of the feature image is calculated by a formula [not reproduced here]; wherein R_ij represents the red component of the pixel in row i, column j of the feature image, G_ij represents its green component, B_ij represents its blue component, R_0 represents the mean red component over all pixels of the feature image, G_0 represents the mean green component, and B_0 represents the mean blue component.
3. The carbon nanotube preparation state image processing method according to claim 1, wherein S3 comprises the following sub-steps:
S31, determining the red, green, and blue component correction coefficients of each pixel in the feature image according to the state feature matrix of the feature image;
S32, correcting the feature image according to the red, green, and blue component correction coefficients to obtain a standard surface image, completing the image processing.
4. The carbon nanotube preparation state image processing method according to claim 3, wherein in S31 the red component correction coefficient of the pixel in row i, column j of the feature image is calculated by a formula [not reproduced here]; wherein λ represents the eigenvalue of the state feature matrix, R_ij represents the red component of the pixel in row i, column j of the feature image, ⌈·⌉ denotes rounding up, P_0 represents the position of the pixel with the maximum red component in the feature image, P_ij represents the position of the pixel in row i, column j, and dis(·) represents the Euclidean distance function;
in S31, the green component correction coefficient of the pixel in row i, column j of the feature image is calculated by an analogous formula [not reproduced here]; wherein G_ij represents the green component of the pixel in row i, column j, and P_1 represents the position of the pixel with the maximum green component;
in S31, the blue component correction coefficient of the pixel in row i, column j of the feature image is calculated by an analogous formula [not reproduced here]; wherein B_ij represents the blue component of the pixel in row i, column j, and P_2 represents the position of the pixel with the maximum blue component.
Application CN202410128175.4A, priority date 2024-01-30, filed 2024-01-30 — Carbon nanotube preparation state image processing method (Active; granted as CN117649661B).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410128175.4A 2024-01-30 2024-01-30 Carbon nanotube preparation state image processing method

Publications (2)

Publication Number Publication Date
CN117649661A 2024-03-05
CN117649661B 2024-04-12

Family

ID=90043772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410128175.4A Carbon nanotube preparation state image processing method (Active, granted as CN117649661B) 2024-01-30 2024-01-30

Country Status (1)

CN — CN117649661B

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118209565B * 2024-05-21 2024-08-13 Qingdao Chaorui Nano New Material Technology Co., Ltd. Intelligent monitoring method for the carbon nanotube processing process


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108282647B (en) * 2018-01-31 2019-11-05 上海小蚁科技有限公司 Color correcting method and device, computer readable storage medium, terminal

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103079077A (en) * 2011-10-26 2013-05-01 比亚迪股份有限公司 Image processing method
WO2015109693A1 (en) * 2014-01-22 2015-07-30 中兴通讯股份有限公司 Method and system for image color calibration
CN106127817A (en) * 2016-06-28 2016-11-16 广东工业大学 A kind of image binaryzation method based on passage
CN107507250A (en) * 2017-06-02 2017-12-22 北京工业大学 A kind of complexion tongue color image color correction method based on convolutional neural networks
CN107578390A (en) * 2017-09-14 2018-01-12 长沙全度影像科技有限公司 A kind of method and device that image white balance correction is carried out using neutral net
CN108495101A (en) * 2018-04-08 2018-09-04 北京大恒图像视觉有限公司 A kind of method for correcting image, device, image capture device and readable storage medium storing program for executing
CN108377373A (en) * 2018-05-10 2018-08-07 杭州雄迈集成电路技术有限公司 A kind of color rendition device and method pixel-based
CN108600723A (en) * 2018-07-20 2018-09-28 长沙全度影像科技有限公司 A kind of color calibration method and evaluation method of panorama camera
CN111292246A (en) * 2018-12-07 2020-06-16 上海安翰医疗技术有限公司 Image color correction method, storage medium, and endoscope
CN109919994A (en) * 2019-01-08 2019-06-21 浙江大学 A kind of coal mining machine roller automatic height-adjusting system based on deep learning image procossing
CN110400275A (en) * 2019-07-22 2019-11-01 中电健康云科技有限公司 One kind being based on full convolutional neural networks and the pyramidal color calibration method of feature
CN111222445A (en) * 2019-12-31 2020-06-02 江苏南高智能装备创新中心有限公司 Straw detection system and method thereof
CN111310666A (en) * 2020-02-18 2020-06-19 浙江工业大学 High-resolution image ground feature identification and segmentation method based on texture features
CN112508812A (en) * 2020-12-01 2021-03-16 厦门美图之家科技有限公司 Image color cast correction method, model training method, device and equipment
CN114331873A (en) * 2021-12-07 2022-04-12 南京邮电大学 Non-uniform illumination color image correction method based on region division
CN114240782A (en) * 2021-12-16 2022-03-25 北京爱芯科技有限公司 Image correction method and system and electronic equipment
CN114266803A (en) * 2021-12-21 2022-04-01 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN114463196A (en) * 2021-12-28 2022-05-10 浙江大学嘉兴研究院 Image correction method based on deep learning
CN116958534A (en) * 2022-12-29 2023-10-27 腾讯科技(深圳)有限公司 Image processing method, training method of image processing model and related device
CN116996786A (en) * 2023-09-21 2023-11-03 清华大学 RGB-IR image color recovery and correction method and device
CN117372431A (en) * 2023-12-07 2024-01-09 青岛天仁微纳科技有限责任公司 Image detection method of nano-imprint mold

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Kenta Takahashi et al., "Effective Color Correction Pipeline for a Noisy Image," ICIP 2016, 2016, pp. 4002-4006. *
Zhu Sijie et al., "Automatic color correction of remote sensing images based on dense convolutional neural networks," Journal of University of Chinese Academy of Sciences, Vol. 36, No. 1, January 2019, pp. 93-100. *
Wang Xiaojie et al., "A mixed-noise suppression algorithm for color images in the wavelet domain," Journal of Anqing Normal University (Natural Science Edition), No. 4, 2016, pp. 46-49. *

Also Published As

Publication number Publication date
CN117649661A 2024-03-05


Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant