CN116843705B - Segmentation recognition method, device, equipment and medium for tank printing image - Google Patents

Segmentation recognition method, device, equipment and medium for tank printing image

Info

Publication number: CN116843705B (application CN202310920133.XA)
Authority: CN (China)
Prior art keywords: image; recognized; identified; tank printing; printing image
Legal status: Active (granted)
Other languages: Chinese (zh); other version: CN116843705A
Inventors: 袁娜, 朱立国, 杨克新, 魏戌, 冯敏山, 于杰, 常丽洁, 任书英, 曾柳
Current assignee: Wangjing Hospital Of China Academy Of Chinese Medical Sciences Institute Of Orthopedics And Traumatology China Academy Of Chinese Medical Sciences
Application filed by the assignee on 2023-07-25, with priority to CN202310920133.XA


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G06T 7/12: Edge-based segmentation
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30088: Skin; Dermal
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to the field of image processing technologies, and in particular to a method, an apparatus, a device, and a medium for segmentation and recognition of a tank printing (cupping mark) image. In the method, the pixel-wise difference between skin images taken before and after cupping is first used to segment out a to-be-recognized tank printing image with little noise. The to-be-recognized tank printing image is then input into a trained tank printing recognition model to determine whether it contains blisters. If it does, the recognition result is obtained based on the pixel values of the pixel points in the to-be-recognized tank printing image together with the target characteristic information of the blisters; otherwise, the recognition result is obtained based on the pixel values of the pixel points alone. The technical scheme can thereby improve the efficiency of tank printing image recognition.

Description

Segmentation recognition method, device, equipment and medium for tank printing image
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a medium for segmentation and recognition of a tank printing image.
Background
Cupping rapidly dilates the local blood vessels of the body and accelerates the flow of tissue fluid in local soft tissues, thereby improving blood circulation and lymph circulation and speeding up tissue metabolism. Cupping also stimulates the skin receptors and vascular receptors of the human body and helps regulate the nervous system. A tank print (cupping mark) refers to the purple-red spots or ecchymosis that appear on the skin surface at the suction site after cupping.
In the related art, a doctor generally relies on manual experience to make a preliminary judgment on the health state of the human body according to the color of a tank printing image. This manual approach, however, limits the efficiency of tank printing image recognition.
On this basis, the present invention provides a segmentation and recognition method, device, equipment and medium for a tank printing image to solve the above technical problem.
Disclosure of Invention
The invention describes a segmentation and recognition method, device, equipment and medium for a tank printing image, which can improve the efficiency of tank printing image recognition.
According to a first aspect, the present invention provides a segmentation recognition method for a tank printing image, including:
before the cupping starts, acquiring a first skin image of a region to be cupped;
after the cupping is finished, acquiring a second skin image of the region to be cupped; wherein the resolution of the first skin image and the second skin image are the same;
taking the difference of the pixel values corresponding to the first skin image and the second skin image, so as to segment out a to-be-recognized tank printing image;
inputting the to-be-recognized tank printing image into a trained tank printing recognition model, and determining whether the to-be-recognized tank printing image contains blisters;
if yes, then execute: determining target characteristic information of the blisters based on the pixel values of the pixel points in the to-be-recognized tank printing image and the contour information of the blisters; obtaining a recognition result of the to-be-recognized tank printing image based on the pixel values of the pixel points in the to-be-recognized tank printing image and the target characteristic information;
if not, then execute: obtaining the recognition result of the to-be-recognized tank printing image based on the pixel values of the pixel points in the to-be-recognized tank printing image.
According to a second aspect, the present invention provides a segmentation recognition apparatus for a tank printing image, comprising:
a first acquisition unit configured to acquire a first skin image of a region to be cupped before the cupping starts;
a second acquisition unit configured to acquire a second skin image of the region to be cupped after the cupping is finished; wherein the resolution of the first skin image and the second skin image are the same;
a differential segmentation unit configured to take the difference of the pixel values corresponding to the first skin image and the second skin image, so as to segment out a to-be-recognized tank printing image;
a tank printing recognition unit configured to perform the following operations:
inputting the to-be-recognized tank printing image into a trained tank printing recognition model, and determining whether the to-be-recognized tank printing image contains blisters;
if yes, then execute: determining target characteristic information of the blisters based on the pixel values of the pixel points in the to-be-recognized tank printing image and the contour information of the blisters; obtaining a recognition result of the to-be-recognized tank printing image based on the pixel values of the pixel points in the to-be-recognized tank printing image and the target characteristic information;
if not, then execute: obtaining the recognition result of the to-be-recognized tank printing image based on the pixel values of the pixel points in the to-be-recognized tank printing image.
According to a third aspect, the present invention provides an electronic device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the method of the first aspect when executing the computer program.
According to a fourth aspect, the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of the first aspect.
According to the segmentation and recognition method, device, equipment and medium for the tank printing image, the pixel-wise difference of the skin images taken before and after cupping is first used to segment out a to-be-recognized tank printing image with little noise. The to-be-recognized tank printing image is then input into a trained tank printing recognition model to determine whether it contains blisters. When it does, the recognition result of the to-be-recognized tank printing image is obtained based on the pixel values of the pixel points in the image and the target characteristic information of the blisters; otherwise, the recognition result is obtained based on the pixel values of the pixel points alone. The technical scheme can thereby improve the efficiency of tank printing image recognition.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show only some embodiments of the present invention; a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 illustrates a flow diagram of a segmentation recognition method of a tank printing image, according to one embodiment;
FIG. 2 shows a schematic block diagram of a segmentation recognition device of a tank printing image, according to one embodiment;
FIG. 3 illustrates tank printing images of three different skin colors and body locations, according to one embodiment;
FIG. 4 illustrates tank printing images representing two different health states, according to one embodiment;
FIG. 5 illustrates a schematic diagram of a feature fusion module according to one embodiment;
FIG. 6 illustrates a schematic diagram of a graph inference module, in accordance with one embodiment;
FIG. 7 illustrates a schematic diagram of a graph inference operation, according to one embodiment.
Detailed Description
The scheme provided by the invention is described below with reference to the accompanying drawings.
Fig. 1 shows a flow diagram of a segmentation recognition method of a tank printing image according to one embodiment. It is understood that the method may be performed by any apparatus, device, platform or device cluster having computing and processing capabilities. As shown in fig. 1, the method includes:
Step 101, before the cupping starts, acquiring a first skin image of a region to be cupped;
Step 102, after the cupping is finished, acquiring a second skin image of the region to be cupped; wherein the resolution of the first skin image and the second skin image are the same;
Step 103, taking the difference of the pixel values corresponding to the first skin image and the second skin image, so as to segment out a to-be-recognized tank printing image;
Step 104, inputting the to-be-recognized tank printing image into a trained tank printing recognition model, and determining whether the to-be-recognized tank printing image contains blisters;
Step 105, if yes, executing: determining target characteristic information of the blisters based on the pixel values of the pixel points in the to-be-recognized tank printing image and the contour information of the blisters; obtaining a recognition result of the to-be-recognized tank printing image based on the pixel values of the pixel points in the to-be-recognized tank printing image and the target characteristic information;
Step 106, if not, executing: obtaining the recognition result of the to-be-recognized tank printing image based on the pixel values of the pixel points in the to-be-recognized tank printing image.
In this embodiment, the pixel-wise difference of the skin images taken before and after cupping is first used to segment out a to-be-recognized tank printing image with little noise. The to-be-recognized tank printing image is then input into a trained tank printing recognition model to determine whether it contains blisters. When it does, the recognition result is obtained based on the pixel values of the pixel points in the image and the target characteristic information of the blisters; otherwise, the recognition result is obtained based on the pixel values of the pixel points alone. The technical scheme can thereby improve the efficiency of tank printing image recognition.
The steps are described below.
For steps 101 to 103:
Differences in skin color across ethnic groups (e.g., white, yellow and black skin) can seriously affect the recognition of the tank print when cupping is performed. To solve this technical problem, the inventors creatively found that foreground/background segmentation can be performed using the patient's skin color before and after cupping, so that a to-be-recognized tank printing image with little noise can be obtained. This avoids the problem that a single tank printing segmentation model cannot accurately and effectively segment tank printing images.
In some embodiments, the first skin image and the second skin image are obtained with a camera (please refer to fig. 3, which shows, from top to bottom, the back of a black-skinned person, the front chest of a white-skinned person, and the side arm of a yellow-skinned person). The camera may be a CCD (Charge Coupled Device) camera or a CMOS (Complementary Metal-Oxide Semiconductor) camera. The specific camera is not limited here; any camera capable of capturing skin images may be used.
In order to ensure that the segmentation yields a more accurate to-be-recognized tank printing image, the camera should use the same photographing parameters when acquiring the first skin image and the second skin image; for example, the resolution of the first skin image and the second skin image should be the same.
When the pixel values corresponding to the first skin image and the second skin image are differenced, the first skin image serves as the background image and the second skin image as the foreground image. The difference of the corresponding pixel values is then the pixel value of the foreground target, i.e., of the to-be-recognized tank printing image, so that a more accurate to-be-recognized tank printing image is obtained by segmentation.
In the related art, a contour-finding function (findContours) in OpenCV is generally used to obtain a tank printing image. However, the extracted tank print region is often not closed, so the non-closed image is usually processed into a closed image by morphological processing, which includes: dilation, erosion, opening/closing operations, black-hat/top-hat processing, convex hull computation, connected-region labeling, small-region deletion, and the like. Compared with such schemes, the tank printing image obtained by the scheme of the embodiment of the invention has less noise and the operation is simpler and more convenient.
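Purely as an illustration, a minimal OpenCV sketch of this related-art pipeline; the threshold values, kernel size and area cutoff here are hypothetical choices, not values from the patent:

```python
import cv2

# Related-art pipeline sketch: binarize, close non-closed regions
# morphologically, remove small regions, then extract contours.
img = cv2.imread("cupping_region.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Closing (dilation followed by erosion) turns non-closed marks into
# closed regions; the 5x5 elliptical kernel is an illustrative choice.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

# Delete small connected regions (the area threshold is hypothetical).
num, labels, stats, _ = cv2.connectedComponentsWithStats(closed)
for i in range(1, num):
    if stats[i, cv2.CC_STAT_AREA] < 100:
        closed[labels == i] = 0

contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
```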
As a preferred embodiment, step 103 may specifically include:
cutting the first skin image and the second skin image to obtain two images which have the same area size and which both contain the region to be cupped;
taking the difference of the pixel values corresponding to the two cut images.
In this embodiment, in order to ensure that a more accurate to-be-recognized tank printing image is obtained, the two skin images must cover an identical area before their pixel values are differenced; otherwise the difference may contain errors. For example, the colors of different parts of the human body can differ: a person who often wears a suspender vest may have darker skin on the sun-exposed part of the back and lighter skin on the parts not exposed to sunlight. Therefore, the first skin image and the second skin image are both cut to obtain two images which have the same area size and which both contain the region to be cupped.
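A minimal sketch of steps 101 to 103 under the assumption that the two photographs are already pixel-aligned; the file names and crop box are hypothetical, and a real system would register the images first:

```python
import cv2

# Pre- and post-cupping photographs (same camera settings and resolution,
# assumed pixel-aligned).
before = cv2.imread("skin_before.png")
after = cv2.imread("skin_after.png")

# Crop both images to the same region containing the area to be cupped;
# the box coordinates here are purely illustrative.
x, y, w, h = 120, 80, 400, 400
before_roi = before[y:y + h, x:x + w]
after_roi = after[y:y + h, x:x + w]

# Per-pixel difference: the pre-cupping image acts as background, the
# post-cupping image as foreground, so the difference isolates the
# tank print as the foreground target.
diff = cv2.absdiff(after_roi, before_roi)

# Suppress residual noise with a small threshold (value is hypothetical)
# to obtain the to-be-recognized tank printing image.
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 20, 255, cv2.THRESH_BINARY)
tank_print = cv2.bitwise_and(after_roi, after_roi, mask=mask)
```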
For step 104:
referring to fig. 4, considering that the health status of a human body cannot be judged only according to the color of a can print image, a situation that a person with serious humidity may generate blisters (blisters generated by cupping are actually "tissue fluid", and tissue fluid (tissue fluid) is fluid existing between cells, which is also called interstitial fluid, and the tissue fluid is collected under the skin by suction and is formed into blisters) is considered. In order to solve the technical problem, the inventor creatively discovers in the research and development process: and whether the watermark image to be identified contains the bubble or not can be identified by using the trained watermark identification model. Specific training procedures are not described here, and are well known to those skilled in the art, and for example, a labeled can be used as a sample image for training.
The feature fusion portion of the tank printing recognition model is described below with reference to fig. 5 to 7.
As a preferred embodiment, the tank printing recognition model includes a feature extraction module and a feature fusion module. The feature extraction module extracts features from the to-be-recognized tank printing image, and the feature fusion module fuses the extracted features so as to output whether the to-be-recognized tank printing image contains blisters.
In this embodiment, fusing the extracted features captures both the global semantic information and the local spatial information of the blister contour (blisters differ in shape and size, so feature fusion yields a more accurate blister contour) and fully aggregates the contextual features of the contour features, so that a complete and accurate recognition result of the tank printing image can be output.
Of course, the tank printing recognition model may further include a decoding module connected to the feature fusion module, which is well known to those skilled in the art and is not described here.
Referring to fig. 5, as a preferred embodiment, the feature fusion module performs feature fusion by the following formula:
y = F_C( F_U(F_G(F_A2(x))) + F_U(F_G(F_A4(x))) + F_U(F_G(F_A8(x))) + F_G(x) )
where x denotes the input feature of the feature fusion module; y denotes the output result of the feature fusion module; F_A2, F_A4 and F_A8 denote pooling-and-convolution operations of sizes 2×2, 4×4 and 8×8, respectively, which enable the model to obtain receptive fields of different region sizes; F_G denotes the graph reasoning operation; and F_U and F_C denote an up-sampling operation and a 1×1 convolution operation, respectively.
In this embodiment, the feature fusion module consists of four parallel paths. The input feature x is subjected to pooling with kernel sizes of 2×2, 4×4 and 8×8, respectively, to obtain blister features at different scales, while the fourth path retains the information of the original scale. The four scale features are each passed into a graph reasoning module (Graph) to further learn global semantic information, and finally the multi-scale feature information is aggregated to obtain an output result y with more complete context information.
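For illustration, a minimal PyTorch sketch of this four-path fusion structure, assuming inputs whose height and width are divisible by 8 and a graph reasoning module with a matching interface (one is sketched later in this description); the channel counts and the sharing of one graph module across paths are assumptions, not the patented implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusion(nn.Module):
    """y = F_C( sum_k F_U(F_G(F_Ak(x))) + F_G(x) ), k in {2, 4, 8}."""

    def __init__(self, channels: int, graph_module: nn.Module):
        super().__init__()
        # F_Ak: average pooling followed by a convolution, one per scale.
        self.pools = nn.ModuleList([
            nn.Sequential(nn.AvgPool2d(k),
                          nn.Conv2d(channels, channels, 3, padding=1))
            for k in (2, 4, 8)
        ])
        self.graph = graph_module                        # F_G, shared here
        self.out_conv = nn.Conv2d(channels, channels, 1)  # F_C

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        fused = self.graph(x)                # original-scale path
        for pool in self.pools:
            y = self.graph(pool(x))          # pooled path through F_G
            fused = fused + F.interpolate(   # F_U back to the input size
                y, size=(h, w), mode="bilinear", align_corners=False)
        return self.out_conv(fused)
```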
Specifically, pooling kernels of different sizes give the network different receptive fields and allow it to perceive feature information of different sizes, which makes the feature fusion module an effective way of aggregating multi-scale information. However, blisters are complex and particular in contour shape, size, number and the like. To obtain a more accurate blister contour, the inventors therefore constructed a graph reasoning module capable of global modeling, so that global information can be learned and interacted.
Referring to fig. 6, as a preferred embodiment, the graph reasoning operation is implemented by a group of formulas in which R^{C×H×W} denotes the output result of the graph reasoning operation; the input feature of the graph reasoning operation also lies in R^{C×H×W}; C, H, W and N each denote a feature dimension (N = H×W); F_C denotes a 1×1 convolution operation; F_RP, F_R and F_P denote the back-projection, reshaping and projection operations, respectively, all of which are used to change the shape of a feature; and the superscript T denotes the transpose operation of a matrix.
The graph reasoning operation mainly comprises the following three steps.
First, the input feature passes through a 1×1 convolution to adjust the channel dimension, and the F_R operation reshapes the feature from a C×H×W tensor into a C×N matrix. The original feature information is thus retained in a form convenient for matrix operations with the projected result. Meanwhile, the input feature also undergoes feature learning and extraction through a 1×1 convolution operation, is reshaped by F_R, and is projected by the F_P operation. Performing a matrix product between the two branches then converts the blister pixels stored in coordinate space into blister features stored in the interaction space; in the simplified notation below, this is the projection V = Z a_i, where F_C denotes the 1×1 convolution and F_P and F_R project and reshape the feature, essentially changing the shape of the original feature.
Second, Graph Conv is applied to the features in the interaction space to perform reasoning; T in its calculation formula denotes the transpose operation of the matrix, and the reasoning takes the simplified form V′ = V G W_i given below.
Finally, the reasoned feature is back-projected by the F_RP operation, a matrix multiplication is performed with the stored reshaped information, and the result is added to the original input feature to obtain the final output result R^{C×H×W}; in the simplified notation below, this is Z′ = V′ b_i + Z, where F_RP denotes the back-projection, whose function is similar to that of F_P in that it changes the shape of the feature.
The principle of the graph reasoning operation is described below. As shown in fig. 7, the blister pixels stored in coordinate space are first converted into blister features stored in the interaction space, where nodes store the semantic features of the blisters. The advantage of the interaction space over the coordinate space is that the model handles relations between nodes instead of between pixels, which not only reduces the computational effort but also makes global information modeling easier, namely:
V = Z a_i
where Z denotes the original input feature, V denotes the feature after projection into the interaction space, and a_i denotes the projection parameters that need to be learned.
Second, two graph convolution operations are used to infer the relations among node features and to learn and search the context information among distant blister features, namely:
V′ = V G W_i
where G denotes the original feature after its shape change, and W_i denotes the parameters that the graph convolution needs to learn.
Finally, the reasoned features are back-projected into coordinate space to obtain the result processed by the graph reasoning module, namely:
Z′ = V′ b_i + Z
where Z′ denotes the final output result of the graph reasoning module, and b_i denotes the back-projection parameters that need to be learned.
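A minimal PyTorch sketch of such a projection, graph convolution and back-projection pipeline, in the spirit of GloRe-style global reasoning; the node count, the learned-adjacency form and the residual wiring are assumptions rather than the patented implementation:

```python
import torch
import torch.nn as nn

class GraphReasoning(nn.Module):
    """Coordinate space -> interaction space -> graph conv -> back-projection."""

    def __init__(self, channels: int, nodes: int = 16):
        super().__init__()
        self.proj = nn.Conv2d(channels, nodes, 1)       # learns projection a_i
        self.reduce = nn.Conv2d(channels, channels, 1)  # 1x1 feature learning
        self.adj = nn.Conv1d(nodes, nodes, 1)           # learned adjacency G
        self.weight = nn.Conv1d(channels, channels, 1)  # node update W_i
        self.expand = nn.Conv2d(channels, channels, 1)  # after back-projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        n = h * w
        z = self.reduce(x).view(b, c, n)                 # F_R: C x N
        # F_P: soft assignment of the N pixels to the graph nodes.
        a = torch.softmax(self.proj(x).view(b, -1, n), dim=-1)
        v = torch.bmm(z, a.transpose(1, 2))              # V = Z a^T: C x nodes
        # Graph convolution over node relations, then node-state update.
        v = v + self.adj(v.transpose(1, 2)).transpose(1, 2)
        v = self.weight(v)
        z2 = torch.bmm(v, a)                             # F_RP back to C x N
        out = self.expand(z2.view(b, c, h, w))
        return out + x                                   # residual: Z' = V'b_i + Z
```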
For step 105:
As a preferred embodiment, the step of determining the target characteristic information of the blisters based on the pixel values of the pixel points in the to-be-recognized tank printing image and the contour information of the blisters may specifically include:
determining the target pixel points covered by the blisters based on the contour information of the blisters;
determining the target characteristic information of the blisters based on the pixel values of the target pixel points in the to-be-recognized tank printing image; the target characteristic information is used to characterize blood (for example, the lower image of fig. 4, which indicates that the patient has damp toxin), yellow water (for example, the upper image of fig. 4, which indicates that the patient has damp heat), or clear water (for example, the upper image of fig. 4, which indicates that the patient has cold dampness).
In this embodiment, since the tank printing recognition model can only recognize the contour of a blister, the specific type of the blister is determined by combining the pixel values of the target pixel points covered by the blister contour, so that the target characteristic information of the blister can finally be determined.
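As a purely illustrative sketch of this step, the HSV decision rules below separating blood, yellow water and clear water are hypothetical and would need clinical calibration:

```python
import cv2
import numpy as np

def blister_type(image_bgr: np.ndarray, contour: np.ndarray) -> str:
    """Classify a blister by the mean color of the pixels its contour covers."""
    mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, thickness=cv2.FILLED)
    mean_bgr = cv2.mean(image_bgr, mask=mask)[:3]
    h, s, v = cv2.cvtColor(np.uint8([[mean_bgr]]), cv2.COLOR_BGR2HSV)[0, 0]
    # Hypothetical rules: low saturation -> clear water, reddish hue ->
    # blood, yellowish hue -> yellow water (OpenCV hue range is 0..179).
    if s < 40:
        return "clear water"
    if h < 10 or h > 160:
        return "blood"
    if 15 <= h <= 35:
        return "yellow water"
    return "unknown"
```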
As a preferred embodiment, the step of obtaining the recognition result of the to-be-recognized tank printing image based on the pixel values of the pixel points in the to-be-recognized tank printing image and the target characteristic information may specifically include:
comparing the pixel values of the pixel points in the to-be-recognized tank printing image with preset tank printing color levels to obtain target color information of the to-be-recognized tank printing image; the target color information is used to represent light pink, bright red, dark red, off-white or purple-black color information;
obtaining the recognition result of the to-be-recognized tank printing image based on the target color information and the target characteristic information.
In this embodiment, the preset tank printing color levels may be established in any one of the RGB, YUV and HSV color spaces. Alternatively, color levels may be established in each of these color spaces, the to-be-recognized tank printing image compared against them one by one, and the comparison results combined; the grading information obtained from the combined comparison is then analyzed and recognized more accurately.
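A minimal sketch of such a color-level comparison, with hypothetical reference levels defined in RGB only; a real system would calibrate these values and could repeat the comparison in YUV and HSV and combine the results:

```python
import numpy as np

# Hypothetical preset tank printing color levels (RGB); real values
# would be calibrated clinically.
COLOR_LEVELS = {
    "light pink": (233, 180, 190),
    "bright red": (210, 40, 50),
    "dark red": (140, 20, 30),
    "off-white": (225, 220, 210),
    "purple black": (70, 30, 60),
}

def target_color(pixels_rgb: np.ndarray) -> str:
    """Return the preset level nearest to the mean tank-print color.

    pixels_rgb: (N, 3) array of pixel values inside the tank print.
    """
    mean = pixels_rgb.reshape(-1, 3).mean(axis=0)
    return min(COLOR_LEVELS,
               key=lambda name: np.linalg.norm(mean - np.array(COLOR_LEVELS[name])))
```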
For step 106:
as a preferred embodiment, step 106 may specifically include:
comparing the pixel values of the pixel points in the to-be-recognized tank printing image with the preset tank printing color levels to obtain the target color information of the to-be-recognized tank printing image; the target color information is used to represent light pink, bright red, dark red, off-white or purple-black color information;
obtaining the recognition result of the to-be-recognized tank printing image based on the target color information.
As above, the preset tank printing color levels may be established in any one of, or in each of, the RGB, YUV and HSV color spaces, with the combined comparison results giving more accurately analyzed and recognized grading information.
The foregoing describes certain embodiments of the present invention. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
According to an embodiment of another aspect, the present invention provides a segmentation recognition apparatus for a tank printing image. Fig. 2 shows a schematic block diagram of a segmentation recognition device of a tank printing image according to one embodiment. It will be appreciated that the apparatus may be implemented by any means, device, platform or cluster of devices having computing and processing capabilities. As shown in fig. 2, the apparatus includes: a first acquisition unit 201, a second acquisition unit 202, a differential segmentation unit 203, and a tank printing recognition unit 204. The main functions of each constituent unit are as follows:
a first acquisition unit 201 configured to acquire a first skin image of a region to be cupped before the cupping starts;
a second acquisition unit 202 configured to acquire a second skin image of the region to be cupped after the cupping is finished; wherein the resolution of the first skin image and the second skin image are the same;
a differential segmentation unit 203 configured to take the difference of the pixel values corresponding to the first skin image and the second skin image, so as to segment out a to-be-recognized tank printing image;
a tank printing recognition unit 204 configured to perform the following operations:
inputting the to-be-recognized tank printing image into a trained tank printing recognition model, and determining whether the to-be-recognized tank printing image contains blisters;
if yes, then execute: determining target characteristic information of the blisters based on the pixel values of the pixel points in the to-be-recognized tank printing image and the contour information of the blisters; obtaining a recognition result of the to-be-recognized tank printing image based on the pixel values of the pixel points in the to-be-recognized tank printing image and the target characteristic information;
if not, then execute: obtaining the recognition result of the to-be-recognized tank printing image based on the pixel values of the pixel points in the to-be-recognized tank printing image.
As a preferred embodiment, the differential segmentation unit is configured to perform the following operations:
cutting the first skin image and the second skin image to obtain two images which have the same area size and which both contain the region to be cupped;
taking the difference of the pixel values corresponding to the two cut images.
As a preferred embodiment, when determining the target characteristic information of the blisters based on the pixel values of the pixel points in the to-be-recognized tank printing image and the contour information of the blisters, the tank printing recognition unit is configured to perform the following operations:
determining the target pixel points covered by the blisters based on the contour information of the blisters;
determining the target characteristic information of the blisters based on the pixel values of the target pixel points in the to-be-recognized tank printing image; the target characteristic information is used to characterize blood, yellow water or clear water.
As a preferred embodiment, when obtaining the recognition result of the to-be-recognized tank printing image based on the pixel values of the pixel points in the to-be-recognized tank printing image and the target characteristic information, the tank printing recognition unit is configured to perform the following operations:
comparing the pixel values of the pixel points in the to-be-recognized tank printing image with preset tank printing color levels to obtain target color information of the to-be-recognized tank printing image; wherein the target color information is used to represent light pink, bright red, dark red, off-white or purple-black color information;
obtaining the recognition result of the to-be-recognized tank printing image based on the target color information and the target characteristic information;
and/or,
when obtaining the recognition result of the to-be-recognized tank printing image based on the pixel values of the pixel points in the to-be-recognized tank printing image, the tank printing recognition unit is configured to perform the following operations:
comparing the pixel values of the pixel points in the to-be-recognized tank printing image with the preset tank printing color levels to obtain the target color information of the to-be-recognized tank printing image; wherein the target color information is used to represent light pink, bright red, dark red, off-white or purple-black color information;
obtaining the recognition result of the to-be-recognized tank printing image based on the target color information.
As a preferred embodiment, the tank printing recognition model includes a feature extraction module and a feature fusion module, where the feature extraction module is configured to perform feature extraction on the to-be-recognized tank printing image, and the feature fusion module is configured to perform feature fusion on the result of the feature extraction, so as to output whether the to-be-recognized tank printing image contains blisters.
As a preferred embodiment, the feature fusion module performs feature fusion by the following formula:
y = F_C( F_U(F_G(F_A2(x))) + F_U(F_G(F_A4(x))) + F_U(F_G(F_A8(x))) + F_G(x) )
where x denotes the input feature of the feature fusion module; y denotes the output result of the feature fusion module; F_A2, F_A4 and F_A8 denote pooling-and-convolution operations of sizes 2×2, 4×4 and 8×8, respectively, which enable the model to obtain receptive fields of different region sizes; F_G denotes the graph reasoning operation; and F_U and F_C denote an up-sampling operation and a 1×1 convolution operation, respectively.
As a preferred embodiment, the graph reasoning operation is implemented by the group of formulas described above, in which R^{C×H×W} denotes the output result of the graph reasoning operation; the input feature of the graph reasoning operation also lies in R^{C×H×W}; C, H, W and N each denote a feature dimension; F_C denotes a 1×1 convolution operation; F_RP, F_R and F_P denote the back-projection, reshaping and projection operations, respectively, all of which are used to change the shape of a feature; and the superscript T denotes the transpose operation of a matrix.
According to an embodiment of another aspect, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method described in connection with fig. 1.
According to an embodiment of yet another aspect, there is also provided an electronic device including a memory having executable code stored therein and a processor that, when executing the executable code, implements the method described in connection with fig. 1.
The embodiments of the present invention are described in a progressive manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the device embodiments are substantially similar to the method embodiments, their description is relatively brief, and reference may be made to the corresponding parts of the method embodiments.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the present invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
The foregoing embodiments further describe the objects, technical solutions and advantages of the present invention in detail. They are not intended to limit the scope of the invention: any modifications, equivalent substitutions, improvements, etc. made on the basis of the technical solutions of the invention shall fall within its scope of protection.

Claims (4)

1. A segmentation recognition method for a tank printing image, comprising:
before the cupping starts, acquiring a first skin image of a region to be cupped;
after the cupping is finished, acquiring a second skin image of the region to be cupped; wherein the resolution of the first skin image and the second skin image are the same;
taking the difference of the pixel values corresponding to the first skin image and the second skin image, so as to segment out a to-be-recognized tank printing image;
inputting the to-be-recognized tank printing image into a trained tank printing recognition model, and determining whether the to-be-recognized tank printing image contains blisters;
if yes, then execute: determining target characteristic information of the blisters based on the pixel values of the pixel points in the to-be-recognized tank printing image and the contour information of the blisters; obtaining a recognition result of the to-be-recognized tank printing image based on the pixel values of the pixel points in the to-be-recognized tank printing image and the target characteristic information;
if not, then execute: obtaining the recognition result of the to-be-recognized tank printing image based on the pixel values of the pixel points in the to-be-recognized tank printing image;
the performing the difference between the pixel values corresponding to the first skin image and the second skin image includes:
cutting the first skin image and the second skin image to obtain two images which have the same area size and comprise the to-be-pulled cup area;
making differences between pixel values corresponding to the two cut images;
the determining the target feature information of the blister based on the pixel value of each pixel point in the to-be-identified tank printing image and the contour information of the blister comprises the following steps:
determining a target pixel point covered by the bubble based on the contour information of the bubble;
determining target characteristic information of the bubble based on the pixel value of the target pixel point in the to-be-identified tank printing image; the target characteristic information is used for representing characteristic information of blood, yellow water or clear water;
the obtaining the recognition result of the to-be-recognized tank printing image based on the pixel value of each pixel point in the to-be-recognized tank printing image and the target characteristic information comprises the following steps:
comparing the pixel value of each pixel point in the to-be-identified tank printing image with a preset tank printing color level to obtain target color information of the to-be-identified tank printing image; wherein the target color information is used for representing light pink, bright red, dark red, off-white or purplish black color information;
based on the target color information and the target characteristic information, obtaining a recognition result of the to-be-recognized can print image;
the obtaining the recognition result of the to-be-recognized tank printing image based on the pixel value of each pixel point in the to-be-recognized tank printing image comprises the following steps:
comparing the pixel value of each pixel point in the to-be-identified tank printing image with a preset tank printing color level to obtain target color information of the to-be-identified tank printing image; wherein the target color information is used for representing light pink, bright red, dark red, off-white or purplish black color information;
based on the target color information, obtaining a recognition result of the to-be-recognized tank printing image;
the canning recognition model comprises a feature extraction module and a feature fusion module, wherein the feature extraction module is used for extracting features of the canning image to be recognized, and the feature fusion module is used for carrying out feature fusion on the result of the feature extraction so as to output the result whether the canning image to be recognized contains blisters or not;
the feature fusion module performs feature fusion according to the following formula:
y=F C (F U (F G (F A2 (x)))+F U (F G (F A4 (x)))+F U (F G (F A8 (x)))+F G (x))
wherein x represents the input feature of the feature fusion module; y represents the output result of the feature fusion module; f (F) A2 、F A4 And F A8 Pooling and convolution operations representing 2×2, 4×4, 8×8 sizes, respectively, to enable the model to obtain receptive fields of different region sizes; f (F) G Representing graph reasoning operation; f (F) U And F C Respectively representing an up-sampling operation and a 1×1 convolution operation;
the graph reasoning operation is specifically realized through the following formula group:
wherein R is C×H×W Representing the output result of the graph inference operation;input features representing the graph inference operations; C. h, W, N each represent a feature dimension; f (F) C Representing a 1 x 1 convolution operation; f (F) RP 、F R And F P Respectively representing back projection operation, remodelling operation and projection operation, which are all used for changing the shape of the feature; the expression T represents the transpose operation of the matrix.
2. A segmentation and recognition apparatus for a tank printing image, comprising:
a first acquisition unit configured to acquire a first skin image of a region to be cupped before the cupping starts;
a second acquisition unit configured to acquire a second skin image of the region to be cupped after the cupping is finished; wherein the resolution of the first skin image and the second skin image are the same;
a differential segmentation unit configured to take the difference of the pixel values corresponding to the first skin image and the second skin image, so as to segment out a to-be-recognized tank printing image;
a tank printing recognition unit configured to perform the following operations:
inputting the to-be-recognized tank printing image into a trained tank printing recognition model, and determining whether the to-be-recognized tank printing image contains blisters;
if yes, then execute: determining target characteristic information of the blisters based on the pixel values of the pixel points in the to-be-recognized tank printing image and the contour information of the blisters; obtaining a recognition result of the to-be-recognized tank printing image based on the pixel values of the pixel points in the to-be-recognized tank printing image and the target characteristic information;
if not, then execute: obtaining the recognition result of the to-be-recognized tank printing image based on the pixel values of the pixel points in the to-be-recognized tank printing image;
wherein the differential segmentation unit is configured to perform the following operations:
cutting the first skin image and the second skin image to obtain two images which have the same area size and which both contain the region to be cupped;
taking the difference of the pixel values corresponding to the two cut images;
wherein, when determining the target characteristic information of the blisters based on the pixel values of the pixel points in the to-be-recognized tank printing image and the contour information of the blisters, the tank printing recognition unit is configured to perform the following operations:
determining the target pixel points covered by the blisters based on the contour information of the blisters;
determining the target characteristic information of the blisters based on the pixel values of the target pixel points in the to-be-recognized tank printing image; the target characteristic information is used to characterize blood, yellow water or clear water;
wherein, when obtaining the recognition result of the to-be-recognized tank printing image based on the pixel values of the pixel points in the to-be-recognized tank printing image and the target characteristic information, the tank printing recognition unit is configured to perform the following operations:
comparing the pixel values of the pixel points in the to-be-recognized tank printing image with preset tank printing color levels to obtain target color information of the to-be-recognized tank printing image; wherein the target color information is used to represent light pink, bright red, dark red, off-white or purple-black color information;
obtaining the recognition result of the to-be-recognized tank printing image based on the target color information and the target characteristic information;
wherein, when obtaining the recognition result of the to-be-recognized tank printing image based on the pixel values of the pixel points in the to-be-recognized tank printing image, the tank printing recognition unit is configured to perform the following operations:
comparing the pixel values of the pixel points in the to-be-recognized tank printing image with the preset tank printing color levels to obtain the target color information of the to-be-recognized tank printing image; wherein the target color information is used to represent light pink, bright red, dark red, off-white or purple-black color information;
obtaining the recognition result of the to-be-recognized tank printing image based on the target color information;
the canning recognition model comprises a feature extraction module and a feature fusion module, wherein the feature extraction module is used for extracting features of the canning image to be recognized, and the feature fusion module is used for carrying out feature fusion on the result of the feature extraction so as to output the result whether the canning image to be recognized contains blisters or not;
wherein the feature fusion module performs feature fusion by the following formula:
y = F_C( F_U(F_G(F_A2(x))) + F_U(F_G(F_A4(x))) + F_U(F_G(F_A8(x))) + F_G(x) )
where x denotes the input feature of the feature fusion module; y denotes the output result of the feature fusion module; F_A2, F_A4 and F_A8 denote pooling-and-convolution operations of sizes 2×2, 4×4 and 8×8, respectively, which enable the model to obtain receptive fields of different region sizes; F_G denotes the graph reasoning operation; and F_U and F_C denote an up-sampling operation and a 1×1 convolution operation, respectively;
and wherein the graph reasoning operation is implemented by a group of formulas in which R^{C×H×W} denotes the output result of the graph reasoning operation; the input feature of the graph reasoning operation also lies in R^{C×H×W}; C, H, W and N each denote a feature dimension; F_C denotes a 1×1 convolution operation; F_RP, F_R and F_P denote the back-projection, reshaping and projection operations, respectively, all of which are used to change the shape of a feature; and the superscript T denotes the transpose operation of a matrix.
3. An electronic device comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the method of claim 1 when executing the computer program.
4. A computer readable storage medium, having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of claim 1.
CN202310920133.XA 2023-07-25 2023-07-25 Segmentation recognition method, device, equipment and medium for tank printing image Active CN116843705B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310920133.XA CN116843705B (en) 2023-07-25 2023-07-25 Segmentation recognition method, device, equipment and medium for tank printing image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310920133.XA CN116843705B (en) 2023-07-25 2023-07-25 Segmentation recognition method, device, equipment and medium for tank printing image

Publications (2)

Publication Number Publication Date
CN116843705A (en) 2023-10-03
CN116843705B (en) 2023-12-22

Family

ID=88160036

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310920133.XA Active CN116843705B (en) 2023-07-25 2023-07-25 Segmentation recognition method, device, equipment and medium for tank printing image

Country Status (1)

Country Link
CN (1) CN116843705B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1795827A (en) * 2004-12-24 2006-07-05 重庆融海超声医学工程研究中心有限公司 Image monitoring device and method for skin and subcutaneous tissue damage
CN111358433A (en) * 2020-03-13 2020-07-03 陕西省中医医院 Method for determining cupping condition based on skin image recognition condition
WO2022183730A1 (en) * 2021-03-05 2022-09-09 上海商汤智能科技有限公司 Image segmentation method and apparatus, electronic device, and computer readable storage medium
CN115631350A (en) * 2022-11-17 2023-01-20 博奥生物集团有限公司 Method and device for identifying colors of canned image
CN115984680A (en) * 2023-02-15 2023-04-18 博奥生物集团有限公司 Identification method and device for can printing colors, storage medium and equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Graph-based Pyramid Global Context Reasoning with a saliency-aware projection for COVID-19 lung infections segmentation; Huimin Huang et al.; ICASSP 2021; full text *

Also Published As

Publication number Publication date
CN116843705A (en) 2023-10-03

Similar Documents

Publication Publication Date Title
CN110276356B (en) Fundus image microaneurysm identification method based on R-CNN
CN111407245B (en) Non-contact heart rate and body temperature measuring method based on camera
US20210118144A1 (en) Image processing method, electronic device, and storage medium
US11055824B2 (en) Hybrid machine learning systems
CN109558832A (en) A kind of human body attitude detection method, device, equipment and storage medium
CN103914699A (en) Automatic lip gloss image enhancement method based on color space
CN112508806B (en) Endoscopic image highlight removal method based on non-convex low-rank matrix decomposition
CN111080670B (en) Image extraction method, device, equipment and storage medium
KR102177918B1 (en) Deep learning based personal color diagnosis and virtual make-up method and apparatus
CN106023151A (en) Traditional Chinese medicine tongue manifestation object detection method in open environment
CN110363072B (en) Tongue picture identification method, tongue picture identification device, computer equipment and computer readable storage medium
CN111223110B (en) Microscopic image enhancement method and device and computer equipment
JP2020047253A (en) Ocular condition detection system detecting ocular condition utilizing deep layer learning model, and method for operating the ocular condition detection system
CN104182723B (en) A kind of method and apparatus of sight estimation
CN112287765B (en) Face living body detection method, device, equipment and readable storage medium
CN109087310A (en) Dividing method, system, storage medium and the intelligent terminal of Meibomian gland texture region
CN114511502A (en) Gastrointestinal endoscope image polyp detection system based on artificial intelligence, terminal and storage medium
CN113313680A (en) Colorectal cancer pathological image prognosis auxiliary prediction method and system
CN116843705B (en) Segmentation recognition method, device, equipment and medium for tank printing image
CN117197064A (en) Automatic non-contact eye red degree analysis method
Jindal et al. Sign Language Detection using Convolutional Neural Network (CNN)
CN116580445A (en) Large language model face feature analysis method, system and electronic equipment
CN112801238B (en) Image classification method and device, electronic equipment and storage medium
CN114463346B (en) Mobile terminal-based complex environment rapid tongue segmentation device
CN110147715A (en) A kind of retina OCT image Bruch film angle of release automatic testing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant