CN113933294B - Concentration detection method and device

Info

Publication number: CN113933294B
Authority: CN (China)
Prior art keywords: feature, image, color, features, configuration
Legal status: Active (granted)
Application number: CN202111315620.0A
Other languages: Chinese (zh)
Other versions: CN113933294A
Inventors: 田新雪, 肖征荣, 李朝霞, 马书惠, 杨子文
Current assignee: China United Network Communications Group Co Ltd
Original assignee: China United Network Communications Group Co Ltd
Application filed by China United Network Communications Group Co Ltd
Priority to CN202111315620.0A; publication of application CN113933294A; application granted; publication of grant CN113933294B

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/75 - Systems in which material is subjected to a chemical reaction, the progress or the result of the reaction being investigated
    • G01N 21/77 - Systems in which material is subjected to a chemical reaction, the progress or the result of the reaction being investigated by observing the effect on a chemical indicator
    • G01N 21/78 - Systems in which material is subjected to a chemical reaction, the progress or the result of the reaction being investigated by observing the effect on a chemical indicator producing a change of colour
    • G01N 21/783 - Systems in which material is subjected to a chemical reaction, the progress or the result of the reaction being investigated by observing the effect on a chemical indicator producing a change of colour for analysing gases
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures

Abstract

The application discloses a concentration detection method and device, belonging to the technical field of image processing. The method comprises the following steps: acquiring a first image of a reagent to be detected and a second image of a control reagent; extracting features from the first image and the second image respectively to obtain first features of the first image and second features of the second image; determining configuration influence features according to the second features and third features of a pre-acquired third image; obtaining target features according to the configuration influence features and the first features; and determining concentration information of the reagent to be detected according to the target features and fourth features of a pre-acquired fourth image. The method can remove, from the image of the reagent to be detected, the influence of differences in configuration conditions, and determines the target color block based on image features, which effectively reduces subjective influence and improves the objectivity and accuracy of the determined target color block, so that an objective and accurate result is obtained when the concentration of the object to be detected is determined according to the target color block.

Description

Concentration detection method and device
Technical Field
The application relates to the technical field of image processing, in particular to a concentration detection method and device.
Background
When the concentration of certain components in the air falls outside a certain range, the health of living beings in that environment is affected. Taking formaldehyde as an example, when the formaldehyde content in the air exceeds a certain concentration, formaldehyde can cause great harm to the human body, and severe cases can even be fatal. Therefore, the detection of substances of specific components is particularly important. In the related art, a common formaldehyde detection method is detection using a formaldehyde detection box. However, when the formaldehyde detection box is used, environmental factors and the subjectivity of the tester make it impossible to select, accurately and objectively, the color in the colorimetric card that best matches the color of the reagent, so the detected formaldehyde concentration lacks accuracy and objectivity. Therefore, how to detect the concentration of a specific component accurately and objectively is a problem to be solved in the art.
Disclosure of Invention
Therefore, the application provides a concentration detection method and device to solve the problem that the detection result cannot be determined accurately and objectively when the concentration of an object to be detected is measured.
In order to achieve the above object, a first aspect of the present application provides a concentration detection method, including:
acquiring a first image of a reagent to be detected and a second image of a control reagent, wherein the first image and the second image are images acquired under a first configuration condition;
extracting features of the first image and the second image respectively to obtain first features of the first image and second features of the second image;
determining a configuration influence feature according to the second feature and a third feature of a pre-acquired third image, wherein the third image is an image of the control reagent acquired under a second configuration condition, the second configuration condition is different from the first configuration condition, and the configuration influence feature is used for representing the influence of the difference of the configuration condition on the image feature;
obtaining target features according to the configuration influence features and the first features, wherein the target features are features after configuration influence in the first features is removed;
and determining concentration information of the reagent to be detected according to the target feature and a fourth feature of a fourth image acquired in advance, wherein the fourth image is an image of a colorimetric card acquired under the second configuration condition, the colorimetric card comprises at least one color block, and the color blocks have a corresponding relation with the concentration of the object to be detected.
Further, the feature extraction is performed on the first image and the second image respectively, to obtain a first feature of the first image and a second feature of the second image, including:
respectively extracting the features of the first image and the second image according to preset feature dimensions to obtain the first features of the first image and the second features of the second image;
wherein the feature dimensions include at least one of a color feature dimension, a grayscale feature dimension, and a texture feature dimension.
Further, the feature extraction is performed on the first image and the second image according to a preset feature dimension, so as to obtain a first feature of the first image and a second feature of the second image, including:
performing edge detection on the first image and the second image respectively, and determining a region to be extracted of the first image and a region to be extracted of the second image, wherein the region to be extracted represents an effective feature extraction region;
and respectively carrying out feature extraction on the region to be extracted of the first image and the region to be extracted of the second image according to the feature dimension to obtain a first feature of the first image and a second feature of the second image.
Further, the feature dimension comprises a color feature dimension, and the first feature and the second feature comprise color feature components;
and respectively performing feature extraction on the region to be extracted of the first image and the region to be extracted of the second image according to the feature dimension to obtain a first feature of the first image and a second feature of the second image, wherein the feature extraction comprises the following steps:
determining the value of a region to be extracted of the first image in a designated color channel, and obtaining the color characteristic component of the first image in the designated color channel;
and determining the value of the region to be extracted of the second image in the designated color channel to obtain the color feature component of the second image in the designated color channel.
Further, the determining a configuration influencing feature according to the second feature and a third feature of a pre-acquired third image includes:
acquiring first characteristic difference information according to the second characteristic and the third characteristic;
and determining the configuration influence characteristic according to the first characteristic difference information.
Further, the second feature and the third feature include color feature components;
the first feature difference information includes color feature component differences of the second feature and the third feature in each of the specified color channels.
Further, the first feature comprises a color feature component, and the configuration influencing feature comprises a color feature component difference value;
the obtaining the target feature according to the configuration influence feature and the first feature comprises:
and subtracting the color characteristic component difference value of the configuration influence characteristic in each specified color channel from the color characteristic component of the first characteristic in each specified color channel to obtain the target characteristic.
Further, the fourth feature comprises a plurality of color block features;
the determining the concentration information of the reagent to be detected according to the target feature and the fourth feature of the fourth image acquired in advance comprises:
acquiring second characteristic difference information according to the target characteristic and the color block characteristic;
selecting a target color block from the color blocks according to the second characteristic difference information;
and determining the concentration of the object to be detected of the reagent to be detected according to the corresponding relation between the target color block and the concentration of the object to be detected.
In order to achieve the above object, a second aspect of the present application provides a concentration detection apparatus comprising:
an image acquisition module configured to acquire a first image of a reagent to be tested and a second image of a control reagent, wherein the first image and the second image are images acquired under a first configuration condition;
a feature extraction module configured to perform feature extraction on the first image and the second image, respectively, to obtain a first feature of the first image and a second feature of the second image;
an influence feature determination module configured to determine a configuration influence feature from the second feature and a third feature of a pre-acquired third image, wherein the third image is an image of the control reagent acquired under a second configuration condition, the second configuration condition being different from the first configuration condition, the configuration influence feature being used to characterize an influence of a difference in configuration condition on an image feature;
a target feature acquisition module configured to obtain a target feature according to the configuration influence feature and the first feature, wherein the target feature is a feature after the configuration influence in the first feature is removed;
a concentration detection module configured to determine concentration information of the reagent to be detected according to the target feature and a fourth feature of a fourth image acquired in advance, wherein the fourth image is an image of a colorimetric card acquired under the second configuration condition, the colorimetric card comprises at least one color block, and the color blocks have a corresponding relation with the concentration of the object to be detected.
In order to achieve the above object, a third aspect of the present application provides a terminal, including: at least one concentration detection means;
wherein the concentration detection apparatus adopts the concentration detection apparatus of any one of the embodiments of the present application.
The application has the following advantages:
the concentration detection method and device provided by the application acquire a first image of a reagent to be detected and a second image of a contrast reagent, wherein the first image and the second image are images acquired under a first configuration condition; respectively extracting features of the first image and the second image to obtain first features of the first image and second features of the second image; determining a configuration influence feature according to the second feature and a third feature of a pre-acquired third image, wherein the third image is an image of a contrast agent acquired under a second configuration condition, the second configuration condition is different from the first configuration condition, and the configuration influence feature is used for representing the influence of the difference of the configuration condition on the image feature; obtaining target features according to the configuration influence features and the first features, wherein the target features are features after the configuration influence in the first features is removed; and determining concentration information of the reagent to be detected according to the target characteristic and a fourth characteristic of a fourth image acquired in advance, wherein the fourth image is an image of a color chart acquired under the second configuration condition, the color chart comprises at least one color lump, and the color lump has a corresponding relation with the concentration of the object to be detected. The method can determine the influence of the configuration condition difference on the image of the reagent to be detected, remove the influence from the image of the reagent to be detected, thereby obtaining more accurate image characteristics of the reagent to be detected, and determine the target color block based on the image characteristics without depending on human eye recognition, thereby effectively reducing subjective influence, improving the objectivity and accuracy of the determined target color block, and further obtaining objective and accurate results when determining the concentration of the substance to be detected of the reagent according to the target color block.
Drawings
The accompanying drawings are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification; they illustrate the application and, together with the description, serve to explain the application without limiting it.
Fig. 1 is a flowchart of a concentration detection method according to an embodiment of the present application;
fig. 2 is a flowchart of an image feature extraction method provided in an embodiment of the present application;
FIG. 3 is a flow chart of a concentration detection method according to an exemplary embodiment of the present application;
FIG. 4 is a block diagram showing the composition of a concentration detection apparatus according to an embodiment of the present application;
fig. 5 is a block diagram of a terminal according to an embodiment of the present application;
fig. 6 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following detailed description of specific embodiments of the present application refers to the accompanying drawings. It should be understood that the detailed description is presented herein for purposes of illustration and explanation only and is not intended to limit the present application.
Formaldehyde is a colorless, irritating, soluble gas. When the formaldehyde in the air exceeds a certain concentration, it can cause symptoms such as headache, weakness, fatigue and insomnia, and severe cases can even be fatal. Therefore, it is necessary to detect the formaldehyde concentration.
The common formaldehyde detection method uses a formaldehyde detection box, and the detection process comprises the following steps: first, close the doors and windows of the room to be detected for a period of time (for example, 3 hours) as required by the product, turn off air conditioning equipment such as indoor air conditioners and air purifiers during this period, and open furniture doors, drawers, and the like; second, place the absorption box of the formaldehyde detection box in the closed room so that the reagent in the absorption box fully absorbs the formaldehyde in the room; finally, pour the color developing agent into the absorption box, cover it, and let it stand for a period of time; after the color of the reagent stabilizes, determine the color block in the colorimetric card that matches the color of the reagent, and determine the indoor formaldehyde concentration from the formaldehyde concentration corresponding to that color block. However, due to ambient light and human subjectivity, different personnel may select different color blocks from the colorimetric card for the same reagent, so the determined formaldehyde concentrations also differ, and the accuracy and objectivity of the detection result cannot be ensured.
In view of this, the present application provides a concentration detection method and apparatus, which can determine the influence of the difference of the configuration conditions on the image of the reagent to be detected, and remove the influence from the image of the reagent to be detected, so as to obtain more accurate image features of the reagent to be detected, and when determining the target color patch, the method and apparatus do not rely on human eye recognition, but perform determination based on the image features, so as to effectively reduce subjective influence, improve objectivity and accuracy of the determined target color patch, and further obtain objective and accurate results when determining the concentration of the substance to be detected of the reagent according to the target color patch.
A first aspect of the present application provides a concentration detection method. Fig. 1 is a flowchart of a concentration detection method according to an embodiment of the present application. As shown in fig. 1, the concentration detection method includes the steps of:
step S101, acquiring a first image of a reagent to be tested and a second image of a control reagent.
The reagent to be tested is the reagent on which detection is performed; it can be used to detect formaldehyde concentration, the acidity or alkalinity of a substance, the presence of specific biological enzymes, and the like, and it may take the form of a liquid reagent, a paper reagent, etc.; the application and the form of the reagent are not limited here. The control reagent should have the same reagent form as the reagent to be tested, and it is provided so as to exclude the influence of variables other than the concentration of the analyte on the detection result. In some embodiments, the reagent to be tested is a liquid reagent for detecting formaldehyde and the control reagent is a colorless liquid (e.g., purified water is used as the control reagent).
The first image and the second image are images acquired under the first configuration condition. In some embodiments, the first configuration conditions include, but are not limited to, environmental configuration conditions and shooting configuration conditions. In some implementations, the environmental configuration conditions include at least one of a luminous flux configuration, a luminous intensity configuration, a brightness configuration, and an illuminance configuration, and the shooting configuration conditions include at least one of an exposure configuration, a shooting angle configuration, and a shooting distance configuration. In other words, the images of the reagent to be tested and the control reagent are acquired under the same configuration condition; on this basis, interference with the first image and the second image caused by environmental factors or shooting operations can be effectively avoided, improving the accuracy of the concentration detection result.
Note that the same environmental configuration condition and the same shooting configuration condition may refer to: the above-described respective configurations are in the same range when the first image and the second image are acquired. For example, the shooting distance arrangements are all in the distance range of 0.2-0.5m (meters). The present disclosure is not limited to a specific setting of the scope.
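As a toy illustration of this note (not part of the patent: the function name, the single shooting-distance parameter and the 0.2-0.5 m bounds merely follow the example above), two acquisitions can be checked for a shared shooting-distance configuration as follows, in Python:

    def same_shooting_distance(dist1_m: float, dist2_m: float,
                               lo_m: float = 0.2, hi_m: float = 0.5) -> bool:
        """Two acquisitions share the shooting-distance configuration when
        both distances fall within the same agreed range, e.g. 0.2-0.5 m."""
        return lo_m <= dist1_m <= hi_m and lo_m <= dist2_m <= hi_m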
It should also be noted that, in some embodiments, the concentration of the analyte is determined by a color reaction of the reagent, where the color reaction refers to a reaction that converts the component to be detected into a colored compound. Taking formaldehyde detection as an example, the reagent is placed in the space to be measured; after a period of time, the reagent has absorbed formaldehyde from the space, and the formaldehyde reacts with certain specific components in the reagent to form a colored compound, so the color of the reagent changes. Normally, different formaldehyde concentrations in the space to be measured lead to correspondingly different final reagent colors, so the formaldehyde concentration in the space to be measured can be determined from the color of the reagent after the color development reaction.
Step S102, feature extraction is carried out on the first image and the second image respectively, and the first feature of the first image and the second feature of the second image are obtained.
Features of an image can reflect information about its color, texture, shape, and so on; they include but are not limited to color features, gray features, and texture features. In practical applications, the features extracted from the images differ across detection scenarios, so as to obtain a more accurate detection result.
For example, in an application scenario in which formaldehyde concentration is detected using a formaldehyde detection box, the detection reagent changes color through a color development reaction, and different formaldehyde concentrations produce correspondingly different reagent colors; for this scenario, the color features of an image may be extracted so that the formaldehyde concentration can be confirmed from the color features.
For another example, in the case of detection using a helicobacter pylori test strip, a chromogenic reaction region (e.g., a bar region or a cross-shaped region) is usually preset in the test strip. After the concentration of helicobacter pylori reaches a certain threshold, the color of the chromogenic reaction region changes while the color of the other regions of the test strip does not, so that a specific pattern is displayed on the test strip (the shape of the pattern corresponds to the shape of the chromogenic reaction region), and the pattern has a specific color. Based on this, the texture features and the color features (or the texture features and the gray features) of the image can be extracted to jointly confirm the helicobacter pylori concentration.
In some embodiments, the step of extracting features of the first image and the second image, respectively, to obtain the first features of the first image and the second features of the second image, includes: and respectively extracting the characteristics of the first image and the second image according to the preset characteristic dimension to obtain the first characteristics of the first image and the second characteristics of the second image. Wherein the feature dimension includes at least one of a color feature dimension, a grayscale feature dimension, and a texture feature dimension.
In some implementations, the feature dimension includes a color feature dimension, and the first feature and the second feature include color feature components. Extracting features from the first image and the second image respectively according to a preset feature dimension to obtain the first feature of the first image and the second feature of the second image includes: first, determining the value of the first image in a designated color channel to obtain the color feature component of the first image in the designated color channel; and second, determining the value of the second image in the designated color channel to obtain the color feature component of the second image in the designated color channel. The designated color channel may be a color channel of the RGB (red, green, blue) color model, which includes a red channel, a green channel and a blue channel; the color feature components of an image then include the values of the image in the red channel, the green channel and the blue channel, respectively.
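As an illustrative sketch of this implementation (the mean pooling over the region, the NumPy usage and the function name are assumptions of this description, not prescribed by the patent), the color feature components of an image region in the designated RGB channels could be computed as:

    import numpy as np

    def extract_color_feature(region: np.ndarray) -> dict:
        """Color feature components of an image region in the designated
        color channels, taken here as the mean value per RGB channel.

        region: H x W x 3 array of pixel values in RGB channel order.
        """
        r, g, b = region.reshape(-1, 3).mean(axis=0)
        return {"R": float(r), "G": float(g), "B": float(b)}

    # first_feature  = extract_color_feature(first_region)   # reagent image
    # second_feature = extract_color_feature(second_region)  # control image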
Step S103, determining a configuration influencing feature according to the second feature and the third feature of the pre-acquired third image.
The third image is an image of the control reagent acquired under a second configuration condition, the second configuration condition is different from the first configuration condition, and the configuration influencing feature is used to characterize the influence of the difference in configuration conditions on the image features. Since the second feature and the third feature are both features of images of the control reagent, the difference between the second feature and the third feature is caused by the difference in configuration conditions. For example, a difference between the luminance configurations of the first configuration condition and the second configuration condition may result in a difference between the second image and the third image, and thus in a corresponding difference between the second feature and the third feature.
In some embodiments, the step of determining the configuration influencing feature from the second feature and the third feature of the pre-acquired third image comprises: acquiring first characteristic difference information according to the second characteristic and the third characteristic; and determining configuration influence characteristics according to the first characteristic difference information.
In some implementations, the second feature and the third feature include color feature components; the first feature difference information includes color feature component differences of the second feature and the third feature in each of the specified color channels.
In other implementations, the second feature and the third feature include texture features; the first feature difference information includes a texture feature difference value between the second feature and the third feature.
Step S104, obtaining target features according to the configuration influence features and the first features.
Wherein the target feature is a feature after the configuration influence in the first feature is removed.
In some embodiments, the first feature comprises a color feature component and the configuration influencing feature comprises a color feature component difference value. The step of obtaining the target feature according to the configuration influence feature and the first feature comprises the following steps: and subtracting the color feature component differences of the configuration influence features in the designated color channels from the color feature components of the first features in the designated color channels to obtain target features. Wherein the target feature is a feature relating to color.
In some other embodiments, the first feature comprises a texture feature component and the configuration influencing feature comprises a texture feature difference value. The step of obtaining the target feature according to the configuration influence feature and the first feature comprises the following steps: and subtracting the texture characteristic difference value from the texture characteristic of the first characteristic to obtain the target characteristic. Wherein the target feature is a feature related to texture.
When the first feature corresponds to the plurality of feature dimensions, the features of each feature dimension may be processed in the above manner.
Step S105, determining concentration information of the reagent to be measured according to the target feature and the fourth feature of the fourth image acquired in advance.
The fourth image is an image of a colorimetric card acquired under the second configuration condition; the colorimetric card comprises at least one color block, and the color blocks have a corresponding relation with the concentration of the object to be detected.
In some embodiments, the fourth feature comprises a plurality of color block features. Determining the concentration information of the reagent to be detected according to the target feature and the fourth feature of the fourth image acquired in advance comprises: acquiring second feature difference information according to the target feature and the color block features; selecting a target color block from the color blocks according to the second feature difference information; and determining the concentration of the object to be detected of the reagent to be detected according to the corresponding relation between the target color block and the concentration of the object to be detected.
In some implementations, the target feature and the fourth feature are color features, and the second feature difference information includes color feature component differences. When the target color block is selected according to the second feature difference information, the color block with the smallest color feature component difference is taken as the target color block. The reason is that, among all the color blocks in the colorimetric card, the features of this block differ least from the target feature, so the analyte concentration corresponding to this block is closest to that of the reagent to be detected and is taken as the final detected concentration.
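A minimal sketch of this selection rule (assuming, as in the exemplary embodiment below, that the color features are mean RGB triples and that the difference measure is the Euclidean distance in RGB space; the names and the block/concentration pairing are illustrative only):

    import math

    def select_concentration(target: dict, blocks: list) -> float:
        """Pick the color block whose color feature differs least from the
        target feature and return its associated analyte concentration.

        target: {"R", "G", "B"} target feature of the reagent to be detected
        blocks: list of (block_feature, concentration) pairs
        """
        def diff(a: dict, b: dict) -> float:
            return math.sqrt(sum((a[c] - b[c]) ** 2 for c in ("R", "G", "B")))

        _, concentration = min(blocks, key=lambda blk: diff(target, blk[0]))
        return concentration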
In this embodiment, a first image of a reagent to be tested and a second image of a control reagent are acquired, wherein the first image and the second image are images acquired under a first configuration condition; features are extracted from the first image and the second image respectively to obtain first features of the first image and second features of the second image; a configuration influence feature is determined according to the second feature and a third feature of a pre-acquired third image, wherein the third image is an image of the control reagent acquired under a second configuration condition, the second configuration condition is different from the first configuration condition, and the configuration influence feature is used for representing the influence of the difference in configuration conditions on the image features; target features are obtained according to the configuration influence features and the first features, wherein the target features are the first features with the configuration influence removed; and concentration information of the reagent to be tested is determined according to the target feature and a fourth feature of a fourth image acquired in advance, wherein the fourth image is an image of a colorimetric card acquired under the second configuration condition, the colorimetric card comprises at least one color block, and the color blocks have a corresponding relation with the concentration of the object to be detected. The method can determine the influence of differences in configuration conditions on the image of the reagent to be tested and remove that influence from the image, thereby obtaining more accurate image features of the reagent; it determines the target color block based on the image features without depending on human eye recognition, which effectively reduces subjective influence and improves the objectivity and accuracy of the determined target color block, so that an objective and accurate result is obtained when the concentration of the object to be detected of the reagent is determined according to the target color block.
It should be noted that, in some embodiments, before step S101, the method further includes: acquiring a third image of the control reagent and a fourth image of the colorimetric card under the second configuration condition, and respectively extracting features of the third image and the fourth image to obtain the third feature and the fourth feature. The method for extracting the third feature and the fourth feature is the same as that for extracting the first feature and the second feature, and is not repeated here.
Fig. 2 is a flowchart of an image feature extraction method according to an embodiment of the present application. As shown in fig. 2, the image feature extraction method includes the steps of:
step S201, performing edge detection on the first image and the second image, respectively, to determine a region to be extracted of the first image and a region to be extracted of the second image.
Wherein the region to be extracted represents an effective feature extraction region.
When the reagent to be detected and the control reagent are photographed to acquire the first image and the second image, limited by factors such as the shape of the reagent, the shooting angle and the shooting distance, part of the background area (i.e., the area other than the reagent in the image) is usually present in the acquired first image and second image. When image features are extracted, the features of the background area interfere with the features of the reagent area and thereby affect the accuracy of the detection result. Therefore, before extracting the features of the first image and the second image, the effective detection areas in the first image and the second image are determined through an edge detection operation, so that the regions to be extracted are determined, and the feature extraction operation is performed on the regions to be extracted.
For example, suppose the reagent is held in a circular formaldehyde detection box; when taking an image of the reagent, a first image containing only the formaldehyde detection box cannot be obtained directly, and the acquired first image also contains a background area (for example, the table on which the formaldehyde detection box is placed). Therefore, edge detection can be performed on the first image to determine the background area and the reagent area in the first image, the reagent area is taken as the region to be extracted of the first image, and in the subsequent feature extraction step only the feature information of the region to be extracted is extracted.
Similarly, when extracting the fourth feature of the colorimetric card, a fourth image containing only the colorimetric card cannot be obtained directly due to restrictions on the shooting angle, shooting distance, and the like. Therefore, when performing feature extraction on the fourth image, edge detection is performed on the fourth image first to determine the background area and the colorimetric card area in the fourth image; the colorimetric card area is taken as the region to be extracted of the fourth image, and only the feature information of the region to be extracted is extracted in the subsequent feature extraction step, so that a more accurate fourth feature is obtained.
It should be further noted that, in some embodiments, when the detected region to be extracted of the first image or the second image is tilted, distorted, or the like, it can be corrected based on image processing techniques to obtain a normal image of the region to be extracted.
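The patent does not prescribe a particular edge-detection algorithm. As one plausible realization (an assumption of this description: OpenCV, Canny thresholds of 50/150, and taking the largest external contour as the reagent region are all illustrative choices), the region to be extracted could be isolated as follows:

    import cv2
    import numpy as np

    def region_to_be_extracted(image: np.ndarray) -> np.ndarray:
        """Crop an image to its effective feature extraction region: detect
        edges, take the largest external contour (assumed to be the reagent
        or colorimetric card), and crop to its bounding box. Assumes BGR
        channel order as loaded by cv2.imread and at least one contour."""
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        largest = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(largest)
        return image[y:y + h, x:x + w]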
Step S202, respectively carrying out feature extraction on the region to be extracted of the first image and the region to be extracted of the second image according to feature dimensions to obtain first features of the first image and second features of the second image.
Wherein the feature dimensions include, but are not limited to, a color feature dimension, a gray feature dimension, and a texture feature dimension.
For example, in an application scenario in which formaldehyde concentration detection is performed based on a formaldehyde detection box, the color features of the region to be extracted of the first image and the color features of the region to be extracted of the second image are extracted. The color features of the region to be extracted of the first image comprise the values of all pixel points in the target region in the RGB color channels; the color features of the region to be extracted of the second image comprise the values of all pixel points in each color block in the RGB color channels. After the color feature information is obtained, the formaldehyde concentration corresponding to the reagent can be obtained according to the concentration detection method disclosed in the embodiments of the application.
For another example, in the application scenario of helicobacter pylori concentration detection based on helicobacter pylori test paper, it is assumed that the color reaction area in the helicobacter pylori test paper is "stripe-shaped", and the color chart is a picture provided in a package box or a specification for determining the detection result. In order to improve the accuracy of the detection result, the texture features and the gray features of the region to be extracted of the first image are extracted, the first features are obtained, the texture features and the gray features of the region to be extracted of the second image are extracted, and the second features are obtained.
After the first feature and the second feature are extracted, the process of detecting the helicobacter pylori concentration includes: first, obtaining a first configuration influence feature according to the texture feature in the second feature and the texture feature in the third feature, wherein the third feature is a feature of an image of the control reagent acquired under the second configuration condition, and the first configuration influence feature reflects the influence of the difference in configuration conditions on texture features; second, obtaining a second configuration influence feature according to the gray feature in the second feature and the gray feature in the third feature, wherein the second configuration influence feature reflects the influence of the difference in configuration conditions on gray features; third, obtaining a first target feature according to the first configuration influence feature and the texture feature in the first feature, wherein the first target feature is a feature from which the influence of the configuration difference on texture has been removed; then, obtaining a second target feature according to the second configuration influence feature and the gray feature in the first feature, wherein the second target feature is a feature from which the influence of the configuration difference on gray has been removed; further, obtaining a first matching result according to the first target feature and the texture feature in the fourth feature, wherein the first matching result represents the degree of matching between the reagent to be tested and the colorimetric card with respect to texture features (for example, if the texture feature in the fourth feature is strip-shaped, whether the corresponding shape is strip-shaped is determined according to the first target feature); and finally, obtaining a second matching result according to the second target feature and the gray feature in the fourth feature, wherein the second matching result represents the degree of matching between the reagent to be tested and the colorimetric card with respect to gray features. The target color block is determined according to the first matching result and the second matching result, and the helicobacter pylori concentration is then determined, as sketched below.
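The per-dimension flow just described reduces to the same subtraction rule applied once per feature dimension. A compact sketch (an assumption of this description: features are stored per dimension, e.g. keys "texture" and "gray", with values that support subtraction, such as NumPy vectors):

    def remove_configuration_influence(first: dict, second: dict,
                                       third: dict) -> dict:
        """For each feature dimension (e.g. "texture", "gray"), the
        configuration influence is second[dim] - third[dim]; the target
        feature is the first feature with that influence subtracted."""
        return {dim: first[dim] - (second[dim] - third[dim]) for dim in first}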
In this embodiment, considering that the background area in the image of the reagent to be detected, the control reagent or the colorimetric card may affect the detection result, the effective detection area in each image is determined through an edge detection operation before feature extraction, and only the features of the effective detection area are extracted; this effectively reduces the interference of the background area with the detection result and improves the accuracy of the detection result.
Fig. 3 is a flowchart of a concentration detection method according to an exemplary embodiment of the present application. As shown in fig. 3, the concentration detection method is applied to a formaldehyde detection process, and specifically comprises the following steps:
step S301, under the second configuration condition, shooting the control reagent and the color comparison card respectively, and obtaining a third image corresponding to the control reagent and a fourth image corresponding to the color comparison card.
In some embodiments, the control reagent uses purified water.
It should be noted that, the size and color of the containers for holding the control reagent and the reagent to be tested should be consistent, so as to avoid the influence on the detection result due to the difference of the containers.
Step S302, edge detection is carried out on the third image, a region to be extracted of the third image is determined, color features of the region to be extracted of the third image are extracted, and color feature components of RGB channels corresponding to all pixels in the region to be extracted of the third image are obtained.
In some implementations, the color feature components of the region to be extracted of the third image are (R3_ij, G3_ij, B3_ij), where i represents the row number and j the column number of a pixel point in the region to be extracted of the third image, i and j are integers greater than or equal to 1, R3_ij represents the red channel feature component of the pixel point in row i, column j of the region to be extracted of the third image, G3_ij represents the green channel feature component of that pixel point, and B3_ij represents its blue channel feature component.
Step S303, edge detection is performed on the fourth image, a region to be extracted of the fourth image is determined, color features of the region to be extracted of the fourth image are extracted, and color feature components of RGB channels corresponding to each pixel in the region to be extracted of the fourth image are obtained.
In some implementations, the color feature components of the region to be extracted of the fourth image are (R4_mn, G4_mn, B4_mn), where m represents the row number and n the column number of a pixel point in the region to be extracted of the fourth image, m and n are integers greater than or equal to 1, R4_mn represents the red channel feature component of the pixel point in row m, column n of the region to be extracted of the fourth image, G4_mn represents the green channel feature component of that pixel point, and B4_mn represents its blue channel feature component.
Step S304, placing a reagent to be tested and a control reagent in a space to be tested, dripping a color developing agent into the reagent to be tested after the reagent to be tested fully absorbs formaldehyde, waiting for the color of the reagent to be tested to be stable, and respectively shooting the reagent to be tested and the control reagent under a first configuration condition to obtain a first image corresponding to the reagent to be tested and a second image corresponding to the control reagent.
Step S305, edge detection is performed on the first image, a region to be extracted of the first image is determined, color features of the region to be extracted of the first image are extracted, and color feature components of RGB channels corresponding to each pixel in the region to be extracted of the first image are obtained.
In some implementations, the color feature components of the region to be extracted of the first image are (R1_pq, G1_pq, B1_pq), where p represents the row number and q the column number of a pixel point in the region to be extracted of the first image, p and q are integers greater than or equal to 1, R1_pq represents the red channel feature component of the pixel point in row p, column q of the region to be extracted of the first image, G1_pq represents the green channel feature component of that pixel point, and B1_pq represents its blue channel feature component.
Step S306, edge detection is carried out on the second image, a region to be extracted of the second image is determined, color features of the region to be extracted of the second image are extracted, and color feature components of RGB channels corresponding to all pixels in the region to be extracted of the second image are obtained.
In some implementations, the color feature components of the region to be extracted of the second image are (R2_kr, G2_kr, B2_kr), where k represents the row number and r the column number of a pixel point in the region to be extracted of the second image, k and r are integers greater than or equal to 1, R2_kr represents the red channel feature component of the pixel point in row k, column r of the region to be extracted of the second image, G2_kr represents the green channel feature component of that pixel point, and B2_kr represents its blue channel feature component.
Step S307, determining a configuration influencing feature according to the second feature and the third feature.
In some implementations, differences in the characteristic components of the respective color channels are calculated separately to obtain configuration influencing characteristics.
First, a red channel feature component difference value is calculated according to formula (1).
ΔR = R2 - R3 (1)
Wherein ΔR represents the red channel feature component difference, R2 represents the mean value of the second feature in the red channel, obtained by accumulating R2_kr over all pixel points and dividing by the total number of pixels, and R3 represents the mean value of the third feature in the red channel, obtained likewise by accumulating R3_ij over all pixel points and dividing by the total number of pixels.
ΔG and ΔB can be calculated in a similar manner, where ΔG represents the green channel feature component difference and ΔB represents the blue channel feature component difference.
In summary, the configuration influencing feature is P = {ΔR, ΔG, ΔB}.
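A sketch of step S307 under the assumptions above (per-pixel RGB feature components, means taken over all pixel points of each region; NumPy and the function name are illustrative):

    import numpy as np

    def configuration_influencing_feature(second_region: np.ndarray,
                                          third_region: np.ndarray) -> np.ndarray:
        """P = {dR, dG, dB}: per formula (1), the per-channel mean of the
        second feature minus the per-channel mean of the third feature.

        Each region is an N x 3 (or H x W x 3) array of RGB components.
        """
        mean2 = second_region.reshape(-1, 3).mean(axis=0)  # (R2, G2, B2)
        mean3 = third_region.reshape(-1, 3).mean(axis=0)   # (R3, G3, B3)
        return mean2 - mean3                               # (dR, dG, dB)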
Step S308, obtaining target features according to the configuration influence features and the first features.
In some implementations, the overall features (per-channel mean values) of the first feature in each color channel are first calculated, and the target feature is determined based on these overall features and the configuration influencing feature. The method for calculating the overall features of the first feature in each color channel is similar to step S307 and is not repeated here.
Assuming that the overall features of the first feature in the color channels are {R1, G1, B1}, the target feature can be obtained by formula (2).
T = {R0, G0, B0} = {R1 - ΔR, G1 - ΔG, B1 - ΔB} (2)
Wherein T represents the target feature, R0 represents the red channel feature component in the target feature, G0 represents the green channel feature component in the target feature, and B0 represents the blue channel feature component in the target feature.
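Continuing the sketch under the same assumptions, step S308 subtracts the configuration influencing feature from the per-channel means of the first feature, as in formula (2):

    import numpy as np

    def target_feature(first_region: np.ndarray, p: np.ndarray) -> np.ndarray:
        """T = {R0, G0, B0} = {R1 - dR, G1 - dG, B1 - dB}, per formula (2);
        p is the configuration influencing feature from formula (1)."""
        mean1 = first_region.reshape(-1, 3).mean(axis=0)  # (R1, G1, B1)
        return mean1 - p                                  # (R0, G0, B0)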
Step S309, determining the concentration of the reagent according to the target feature and the fourth feature.
In some implementations, it is assumed that the fourth feature includes y color block features. First, the feature difference value between the target feature and each color block is calculated by formula (3).
Dw = sqrt((R0 - Rw)² + (G0 - Gw)² + (B0 - Bw)²) (3)
Wherein Dw represents the feature difference value between the target feature and the w-th color block, w is a positive integer less than or equal to y, Rw represents the mean value of the red channel feature components of the w-th color block, Gw represents the mean value of the green channel feature components of the w-th color block, and Bw represents the mean value of the blue channel feature components of the w-th color block.
Second, the target color block is determined according to the feature difference values. Specifically, the target color block is determined by formula (4).
D0 = min{D1, …, Dy} (4)
Wherein D0 is the feature difference value corresponding to the target color block, and min{x_i} is the minimum-selection function, whose result equals the minimum value among the input parameters x_i. In other words, the color block corresponding to the minimum value among the feature difference values Dw is determined as the target color block.
Finally, the concentration of the object to be detected of the reagent to be tested is determined according to the corresponding relation between the target color block and the concentration of the object to be detected.
For example, if the formaldehyde concentration corresponding to the target color block is 8‰, the formaldehyde concentration corresponding to the reagent to be tested is also 8‰.
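Step S309 can be sketched the same way (the array layout of the color block features and the permille concentration values are illustrative assumptions of this description):

    import numpy as np

    def detect_concentration(target: np.ndarray, block_means: np.ndarray,
                             concentrations: list) -> float:
        """Formulas (3) and (4): Euclidean feature difference Dw between the
        target feature and each color block; the block with the minimum
        difference D0 yields the detected concentration.

        target:         (R0, G0, B0) from formula (2)
        block_means:    y x 3 array; row w holds (Rw, Gw, Bw) of block w
        concentrations: analyte concentration associated with each block
        """
        d = np.sqrt(((block_means - target) ** 2).sum(axis=1))  # Dw, (3)
        return concentrations[int(d.argmin())]                  # D0 = min Dw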
In this embodiment, the influence of differences in configuration conditions on the image of the reagent to be tested is first determined and removed from that image, so that more accurate image features of the reagent to be tested are obtained; the target color block is then determined based on these image features rather than by human eye recognition, which effectively reduces subjective influence and improves the objectivity and accuracy of the determined target color block, so that an objective and accurate result is obtained when the concentration of the object to be detected of the reagent is determined according to the target color block.
The above division of the method steps is for clarity of description; when implemented, the steps may be combined into one step or split into multiple steps, and as long as the same logical relationship is included, they fall within the protection scope of the present application; adding insignificant modifications to the algorithm or flow, or introducing insignificant designs, without changing the core design of the algorithm and flow also falls within the protection scope of the present application.
A second aspect of the present application provides a concentration detection apparatus. Fig. 4 is a block diagram of a concentration detection apparatus according to an embodiment of the present application. As shown in fig. 4, the concentration detection apparatus 400 includes the following modules:
The image acquisition module 401 is configured to acquire a first image of the reagent to be tested and a second image of the control reagent.
Wherein the first image and the second image are images acquired under a first configuration condition.
The feature extraction module 402 is configured to perform feature extraction on the first image and the second image, respectively, to obtain a first feature of the first image and a second feature of the second image.
An influencing feature determination module 403 configured to determine a configuration influencing feature from the second feature and a third feature of the pre-acquired third image.
The third image is an image of the control reagent acquired under a second configuration condition, the second configuration condition being different from the first configuration condition, and the configuration influencing feature being used to characterize the influence of the difference of the configuration conditions on the image feature.
The target feature acquisition module 404 is configured to obtain the target feature based on the configuration influencing feature and the first feature.
Wherein the target feature is a feature after the configuration influence in the first feature is removed.
The concentration detection module 405 is configured to determine concentration information of the reagent to be detected according to the target feature and a fourth feature of a fourth image acquired in advance.
Wherein the fourth image is an image of a color chart acquired under the second configuration condition, the color chart comprises at least one color block, and each color block has a correspondence with a concentration of the object to be detected.
It should be noted that, in some embodiments, the concentration detection apparatus 400 further includes a display module, where the display module is configured to display concentration information of the reagent to be detected.
In this embodiment, the image acquisition module acquires a first image of the reagent to be detected and a second image of the control reagent; the feature extraction module performs feature extraction on the first image and the second image to obtain a first feature of the first image and a second feature of the second image; the influence feature determination module determines a configuration influence feature according to the second feature and a third feature of a pre-acquired third image; the target feature acquisition module obtains a target feature according to the configuration influence feature and the first feature; and the concentration detection module determines concentration information of the reagent to be detected according to the target feature and a fourth feature of a pre-acquired fourth image. The apparatus thus determines the influence of the difference in configuration conditions on the image of the reagent to be detected and removes that influence, obtaining more accurate image features; the target color block is then determined from these features rather than by human eye recognition, which effectively reduces subjective influence, improves the objectivity and accuracy of the determined target color block, and yields an objective and accurate result when the concentration of the object to be detected is determined according to the target color block.
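To illustrate how these modules could cooperate, the following sketch uses per-channel mean RGB values as the image features. All function names, the synthetic images, and the choice of mean-RGB features are illustrative assumptions, not the apparatus's prescribed implementation.

```python
import numpy as np

def extract_color_feature(image: np.ndarray) -> np.ndarray:
    """Feature extraction module: mean value per color channel (R, G, B)."""
    return image.reshape(-1, 3).mean(axis=0)

def configuration_influence(second_feature: np.ndarray,
                            third_feature: np.ndarray) -> np.ndarray:
    """Influence feature determination module: per-channel difference of the
    control reagent imaged under the first and second configuration conditions."""
    return second_feature - third_feature

def remove_influence(first_feature: np.ndarray,
                     influence: np.ndarray) -> np.ndarray:
    """Target feature acquisition module: subtract the configuration
    influence from the test reagent's feature."""
    return first_feature - influence

# Synthetic stand-ins for the three acquired images.
rng = np.random.default_rng(0)
first_image = rng.uniform(0, 255, (64, 64, 3))   # test reagent, condition 1
second_image = rng.uniform(0, 255, (64, 64, 3))  # control reagent, condition 1
third_image = rng.uniform(0, 255, (64, 64, 3))   # control reagent, condition 2

influence = configuration_influence(extract_color_feature(second_image),
                                    extract_color_feature(third_image))
target = remove_influence(extract_color_feature(first_image), influence)
# `target` would then be compared against the color chart's block features
# (the fourth feature) to select the target color block.
```

The per-channel subtraction mirrors the correction the target feature acquisition module performs; any feature representation for which a meaningful difference can be computed could be substituted.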
Fig. 5 is a block diagram of a terminal according to an embodiment of the present application. As shown in fig. 5, a concentration detection apparatus 510 is disposed in the terminal 500, where the concentration detection apparatus 510 is any one of the concentration detection apparatuses in the embodiments of the present application.
The image acquisition module in the concentration detection apparatus 510 may reuse a module already provided in the terminal 500 itself, or a dedicated photographing module for concentration detection may be provided separately in the terminal 500; this is not limited in the present application.
Fig. 6 is a block diagram of an electronic device according to an embodiment of the present application.
Fig. 6 illustrates a schematic block diagram of an example electronic device 600 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 may also be stored. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Various components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, mouse, etc.; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the respective methods and processes described above, such as the concentration detection method. For example, in some embodiments, the concentration detection method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the concentration detection method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the concentration detection method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be noted that each module in this embodiment is a logic module; in practical applications, one logic unit may be one physical unit, may be a part of one physical unit, or may be implemented by a combination of multiple physical units. In addition, in order to highlight the innovative part of the present application, units less closely related to solving the technical problem presented in the present application are not introduced in this embodiment, but this does not mean that no other units are present in this embodiment.
It is to be understood that the above embodiments are merely exemplary embodiments employed to illustrate the principles of the present application; the present application is not, however, limited thereto. Various modifications and improvements may be made by those skilled in the art without departing from the spirit and substance of the application, and such modifications and improvements are also considered to be within the scope of the application.

Claims (6)

1. A concentration detection method, comprising:
acquiring a first image of a reagent to be detected and a second image of a control reagent, wherein the first image and the second image are images acquired under a first configuration condition;
extracting features of the first image and the second image respectively to obtain first features of the first image and second features of the second image;
determining a configuration influence feature according to the second feature and a third feature of a pre-acquired third image, wherein the third image is an image of the control reagent acquired under a second configuration condition, the second configuration condition is different from the first configuration condition, and the configuration influence feature is used for characterizing the influence of the difference in configuration conditions on the image features;
obtaining a target feature according to the configuration influence feature and the first feature, wherein the target feature is a feature after the configuration influence in the first feature is removed;
determining concentration information of the reagent to be detected according to the target feature and a fourth feature of a fourth image acquired in advance, wherein the fourth image is an image of a color chart acquired under the second configuration condition, the color chart comprises at least one color block, and the color block has a correspondence with a concentration of the object to be detected;
wherein the determining a configuration influence feature according to the second feature and a third feature of a pre-acquired third image comprises: acquiring first feature difference information according to the second feature and the third feature, wherein the second feature and the third feature comprise color feature components; and determining the configuration influence feature according to the first feature difference information, wherein the first feature difference information comprises color feature component differences between the second feature and the third feature in each designated color channel;
wherein the first feature comprises a color feature component and the configuration influence feature comprises a color feature component difference value; and the obtaining a target feature according to the configuration influence feature and the first feature comprises: subtracting the color feature component differences of the configuration influence feature in the designated color channels from the color feature components of the first feature in the designated color channels to obtain the target feature;
wherein the fourth feature comprises a plurality of color block features; and the determining concentration information of the reagent to be detected according to the target feature and a fourth feature of a fourth image acquired in advance comprises: acquiring second feature difference information according to the target feature and the color block features; selecting a target color block from the color blocks according to the second feature difference information; and determining the concentration of the object to be detected in the reagent to be detected according to the correspondence between the target color block and the concentration of the object to be detected.
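The claim does not fix a metric for the second feature difference information; one plausible instantiation, assumed here purely for illustration, is the Euclidean distance between the target feature and each color block feature in the designated color channels:

```python
import numpy as np

def second_feature_differences(target_feature: np.ndarray,
                               block_features: np.ndarray) -> np.ndarray:
    """One Dw per color block: Euclidean distance in the designated channels
    (an assumed metric; the claim only requires some difference information)."""
    return np.linalg.norm(block_features - target_feature, axis=1)

block_features = np.array([[210.0, 200.0, 190.0],   # assumed RGB-mean features
                           [180.0, 150.0, 120.0],   # of the chart's color blocks
                           [150.0, 100.0,  60.0]])
target_feature = np.array([178.0, 149.0, 123.0])

dw = second_feature_differences(target_feature, block_features)
print(dw.argmin())  # -> 1: the second block is selected as the target color block
```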
2. The concentration detection method according to claim 1, wherein the performing feature extraction on the first image and the second image, respectively, to obtain a first feature of the first image and a second feature of the second image comprises:
respectively extracting the characteristics of the first image and the second image according to preset characteristic dimensions to obtain the first characteristics of the first image and the second characteristics of the second image;
wherein the feature dimensions include at least one of a color feature dimension, a grayscale feature dimension, and a texture feature dimension.
3. The method according to claim 2, wherein the feature extraction is performed on the first image and the second image according to a preset feature dimension, respectively, to obtain a first feature of the first image and a second feature of the second image, including:
performing edge detection on the first image and the second image respectively, and determining a region to be extracted of the first image and a region to be extracted of the second image, wherein the region to be extracted represents an effective feature extraction region;
and respectively carrying out feature extraction on the region to be extracted of the first image and the region to be extracted of the second image according to the feature dimension to obtain a first feature of the first image and a second feature of the second image.
4. The concentration detection method of claim 3 wherein the feature dimension comprises a color feature dimension, and the first feature and the second feature comprise color feature components;
and respectively carrying out feature extraction on the region to be extracted of the first image and the region to be extracted of the second image according to the feature dimension to obtain a first feature of the first image and a second feature of the second image, wherein the feature extraction comprises the following steps:
determining the value of the region to be extracted of the first image in a designated color channel, and obtaining the color feature component of the first image in the designated color channel;
and determining the value of the region to be extracted of the second image in the designated color channel, and obtaining the color feature component of the second image in the designated color channel.
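As a sketch of the extraction described in claims 3 and 4, the snippet below first isolates a region to be extracted and then takes the value of that region in each designated color channel as the color feature component. A simple intensity threshold stands in for the edge detection step, and the threshold, function names, and example image are assumptions for illustration only.

```python
import numpy as np

def region_to_extract(image: np.ndarray, bg_thresh: float = 240.0) -> np.ndarray:
    """Bounding box of non-background pixels (a stand-in for edge detection)."""
    mask = image.mean(axis=2) < bg_thresh          # pixels darker than background
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return image[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

def color_feature_components(image: np.ndarray) -> np.ndarray:
    """Mean value of the region to be extracted in each color channel."""
    roi = region_to_extract(image)
    return roi.reshape(-1, 3).mean(axis=0)         # (R, G, B) components

# Example: a white frame around a uniformly colored reagent region.
img = np.full((100, 100, 3), 255.0)
img[30:70, 30:70] = (180.0, 120.0, 60.0)           # reagent strip region
print(color_feature_components(img))               # -> approximately [180. 120. 60.]
```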
5. A concentration detection apparatus, comprising:
an image acquisition module configured to acquire a first image of a reagent to be tested and a second image of a control reagent, wherein the first image and the second image are images acquired under a first configuration condition;
a feature extraction module configured to perform feature extraction on the first image and the second image, respectively, to obtain a first feature of the first image and a second feature of the second image;
an influence feature determination module configured to determine a configuration influence feature from the second feature and a third feature of a pre-acquired third image, wherein the third image is an image of the control reagent acquired under a second configuration condition, the second configuration condition being different from the first configuration condition, the configuration influence feature being used to characterize an influence of a difference in configuration condition on an image feature;
a target feature acquisition module configured to obtain a target feature according to the configuration influence feature and the first feature, wherein the target feature is a feature after the configuration influence in the first feature is removed;
the concentration detection module is configured to determine concentration information of the reagent to be detected according to the target feature and a fourth feature of a fourth image acquired in advance, wherein the fourth image is an image of a color chart acquired under the second configuration condition, the color chart comprises at least one color block, and the color block has a correspondence with a concentration of the object to be detected;
wherein the influence feature determination module determining a configuration influence feature according to the second feature and a third feature of a pre-acquired third image comprises: acquiring first feature difference information according to the second feature and the third feature, wherein the second feature and the third feature comprise color feature components; and determining the configuration influence feature according to the first feature difference information, wherein the first feature difference information comprises color feature component differences between the second feature and the third feature in each designated color channel;
the first feature comprises a color feature component and the configuration influence feature comprises a color feature component difference value; and the target feature acquisition module obtaining a target feature according to the configuration influence feature and the first feature comprises: subtracting the color feature component differences of the configuration influence feature in the designated color channels from the color feature components of the first feature in the designated color channels to obtain the target feature;
the fourth feature comprises a plurality of color block features; and the concentration detection module determining concentration information of the reagent to be detected according to the target feature and a fourth feature of a fourth image acquired in advance comprises: acquiring second feature difference information according to the target feature and the color block features; selecting a target color block from the color blocks according to the second feature difference information; and determining the concentration of the object to be detected in the reagent to be detected according to the correspondence between the target color block and the concentration of the object to be detected.
6. A terminal, comprising at least one concentration detection apparatus, wherein the concentration detection apparatus is the concentration detection apparatus according to claim 5.
CN202111315620.0A 2021-11-08 2021-11-08 Concentration detection method and device Active CN113933294B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111315620.0A CN113933294B (en) 2021-11-08 2021-11-08 Concentration detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111315620.0A CN113933294B (en) 2021-11-08 2021-11-08 Concentration detection method and device

Publications (2)

Publication Number Publication Date
CN113933294A CN113933294A (en) 2022-01-14
CN113933294B true CN113933294B (en) 2023-07-18

Family

ID=79285942

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111315620.0A Active CN113933294B (en) 2021-11-08 2021-11-08 Concentration detection method and device

Country Status (1)

Country Link
CN (1) CN113933294B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106404783A (en) * 2016-09-29 2017-02-15 网易(杭州)网络有限公司 Test paper detection method and device
CN106770224A (en) * 2016-11-25 2017-05-31 友好净控科技(浙江)有限公司 The method and system that a kind of formaldehyde in air content is quick and precisely detected
CN107144531A (en) * 2017-04-19 2017-09-08 黄建国 A kind of content of material detection method, system and device analyzed based on color data
CN108664840A (en) * 2017-03-27 2018-10-16 北京三星通信技术研究有限公司 Image-recognizing method and device
CN108717464A (en) * 2018-05-31 2018-10-30 中国联合网络通信集团有限公司 photo processing method, device and terminal device
KR20190044761A (en) * 2017-10-23 2019-05-02 연세대학교 산학협력단 Apparatus Processing Image and Method thereof
CN111260593A (en) * 2020-01-14 2020-06-09 腾讯科技(深圳)有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN112184701A (en) * 2020-10-22 2021-01-05 中国联合网络通信集团有限公司 Method, device and system for determining detection result

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004348563A (en) * 2003-05-23 2004-12-09 Dds:Kk Apparatus and method for collating face image, portable terminal unit, and face image collating program
WO2005096218A1 (en) * 2004-03-31 2005-10-13 Canon Kabushiki Kaisha Imaging system performance measurement
CN201069389Y (en) * 2007-06-26 2008-06-04 中国华北电力集团公司天津市电力公司 Photoelectric color recognition testing instrument
JP5055241B2 (en) * 2008-10-09 2012-10-24 日本電信電話株式会社 Gas concentration measuring system and measuring method by electronic image colorimetry
JP5066137B2 (en) * 2009-06-05 2012-11-07 日本電信電話株式会社 Gas concentration measuring apparatus and gas concentration measuring method
TWI460426B (en) * 2011-06-21 2014-11-11 Univ Nat Central Method and apparatus of the detection of acidic gases for drugs detection
US9686540B2 (en) * 2014-06-23 2017-06-20 Xerox Corporation Robust colorimetric processing method for paper based sensors
JP2017049974A (en) * 2015-09-04 2017-03-09 キヤノン株式会社 Discriminator generator, quality determine method, and program
CN107924653B (en) * 2015-09-11 2020-02-28 夏普株式会社 Image display device and method for manufacturing image display element
CN105388147A (en) * 2015-10-21 2016-03-09 深圳市宝凯仑生物科技有限公司 Detection method for body fluid based on special test paper
WO2018095412A1 (en) * 2016-11-25 2018-05-31 友好净控科技(浙江)有限公司 Color data analysis-based method and system for detecting substance content
KR102415509B1 (en) * 2017-11-10 2022-07-01 삼성전자주식회사 Face verifying method and apparatus
EP4206782A1 (en) * 2017-11-22 2023-07-05 FUJIFILM Corporation Observation device, method for operating observation device, and observation control program
CN110826372B (en) * 2018-08-10 2024-04-09 浙江宇视科技有限公司 Face feature point detection method and device
CA3119666C (en) * 2018-11-30 2023-10-03 F. Hoffmann-La Roche Ag Method of determining a concentration of an analyte in a bodily fluid
CN109887044B (en) * 2019-03-21 2022-02-11 北京大学第一医院 Reproductive data evaluation method and system
CN110287672A (en) * 2019-06-27 2019-09-27 深圳市商汤科技有限公司 Verification method and device, electronic equipment and storage medium
CN112149476B (en) * 2019-06-28 2024-06-21 京东科技信息技术有限公司 Target detection method, device, equipment and storage medium
JP7370759B2 (en) * 2019-08-08 2023-10-30 キヤノン株式会社 Image processing device, image processing method and program
CN110503725B (en) * 2019-08-27 2023-07-14 百度在线网络技术(北京)有限公司 Image processing method, device, electronic equipment and computer readable storage medium
WO2021102741A1 (en) * 2019-11-27 2021-06-03 深圳加美生物有限公司 Image analysis method and system for immunochromatographic detection
US11574420B2 (en) * 2019-12-31 2023-02-07 Axalta Coating Systems Ip Co., Llc Systems and methods for matching color and appearance of target coatings
JP7436270B2 (en) * 2020-04-09 2024-02-21 株式会社日立ハイテク Biospecimen analysis device
CN111833340B (en) * 2020-07-21 2024-03-26 阿波罗智能技术(北京)有限公司 Image detection method, device, electronic equipment and storage medium
CN111986178B (en) * 2020-08-21 2024-06-18 北京百度网讯科技有限公司 Product defect detection method, device, electronic equipment and storage medium
CN112465886A (en) * 2020-12-09 2021-03-09 苍穹数码技术股份有限公司 Model generation method, device, equipment and readable storage medium
CN113569707A (en) * 2021-07-23 2021-10-29 北京百度网讯科技有限公司 Living body detection method, living body detection device, electronic apparatus, and storage medium

Also Published As

Publication number Publication date
CN113933294A (en) 2022-01-14

Similar Documents

Publication Publication Date Title
US10733763B2 (en) Mura detection device and method of detecting mura using the same
CN112818737B (en) Video identification method, device, storage medium and terminal
CN113808153A (en) Tomato maturity detection method and device, computer equipment and storage medium
CN109389569A (en) Based on the real-time defogging method of monitor video for improving DehazeNet
CN117147561B (en) Surface quality detection method and system for metal zipper
CN114463637A (en) Winter wheat remote sensing identification analysis method and system based on deep learning
CN113933293A (en) Concentration detection method and device
CN113933294B (en) Concentration detection method and device
CN117197479A (en) Image analysis method, device, computer equipment and storage medium applying corn ear outer surface
CN116309364A (en) Transformer substation abnormal inspection method and device, storage medium and computer equipment
CN115205163A (en) Method, device and equipment for processing identification image and storage medium
CN115222653A (en) Test method and device
CN112465780B (en) Method and device for monitoring abnormal film thickness of insulating layer
CN115115596A (en) Electronic component detection method and device and automatic quality inspection equipment
CN111292300B (en) Method and apparatus for detecting bright spot defect of display panel, and readable storage medium
CN114820523A (en) Light sensitive hole glue overflow detection method, device, system, equipment and medium
CN114694128A (en) Pointer instrument detection method and system based on abstract metric learning
CN113284113B (en) Glue overflow flaw detection method, device, computer equipment and readable storage medium
CN113077422B (en) Foggy image detection method, model training method and device
CN117522855B (en) Image-based device fault diagnosis method and device, electronic device and storage medium
CN115632704B (en) Method, device, equipment and medium for testing energy distribution of line laser
CN115239732B (en) Method, device and equipment for judging display unevenness of lighting machine and storage medium
KR20190075283A (en) System and Method for detecting Metallic Particles
CN111445468A (en) Method, device and equipment for detecting oil stain on ground of power converter and storage medium
CN118314607A (en) Fingerprint background data acquisition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant