CN116433528A - Image detail enhancement display method and system for target area detection - Google Patents


Info

Publication number
CN116433528A
CN116433528A (application CN202310419934.8A)
Authority
CN
China
Prior art keywords
image
enhancement
image enhancement
outputting
iteration
Prior art date
Legal status
Pending
Application number
CN202310419934.8A
Other languages
Chinese (zh)
Inventor
陈东
赵建
Current Assignee
Xinguangwei Medical Technology Suzhou Co ltd
Original Assignee
Xinguangwei Medical Technology Suzhou Co ltd
Priority date
Filing date
Publication date
Application filed by Xinguangwei Medical Technology Suzhou Co ltd filed Critical Xinguangwei Medical Technology Suzhou Co ltd
Priority to CN202310419934.8A priority Critical patent/CN116433528A/en
Publication of CN116433528A publication Critical patent/CN116433528A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/771 Feature selection, e.g. selecting representative features from a multi-dimensional feature space
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The application relates to the technical field of digital image processing, and provides an image detail enhancement display method and system for target area detection. The method comprises the following steps: performing endoscopic imaging acquisition on a target area; extracting video frames from the imaging video information and outputting effective image frames; classifying the effective image frames to obtain classification results of the image frames; performing feature analysis on each type of image in the classification results and outputting image display features identifying each type of image; generating an image enhancement mapping model; and outputting enhancement parameters according to the image enhancement mapping model, enhancing the classification results with the enhancement parameters, and outputting an image enhancement result. The method addresses the low image recognition efficiency and accuracy caused by the coarse granularity of traditional image enhancement methods, and improves the quality and precision of the image enhancement result, thereby improving the efficiency and accuracy of image recognition.

Description

Image detail enhancement display method and system for target area detection
Technical Field
The application relates to the technical field of digital image processing, in particular to an image detail enhancement display method and system for target area detection.
Background
Image enhancement is one of the common techniques in digital image processing; its aim is to improve image quality and thereby the interpretability of the image. Image enhancement typically includes contrast enhancement, which improves the distinguishability of the classes present in the image, and filtering, which extracts or suppresses edge and detail features. Traditional image enhancement methods usually apply a uniform enhancement to the whole image: they do not first classify the image content in a refined way and then enhance only the parts that need it, which results in low accuracy when recognizing the enhanced images.
In summary, the prior art suffers from low image recognition efficiency and accuracy caused by the coarse granularity of traditional image enhancement methods.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image detail enhancement display method and system for target region detection.
A method of image detail enhancement display for target region detection, the method being applied to an endoscopic imaging recognition system communicatively coupled to a first imaging device, the method comprising: performing endoscopic imaging acquisition on a target area with the first imaging device, and outputting imaging video information; extracting video frames from the imaging video information, and outputting N effective image frames, wherein N is a positive integer greater than 0 and is less than or equal to the total video frame number of the imaging video information; classifying the N effective image frames to obtain M classification results of the image frames; outputting image display characteristics of each type of image through characteristic analysis of each type of image in the M classification results of the image frames, wherein the image display characteristics comprise a first display characteristic, a second display characteristic, ..., and an Mth display characteristic; generating an image enhancement mapping model according to the first display characteristic, the second display characteristic, ..., and the Mth display characteristic, wherein the image enhancement mapping model is a mapping model obtained through training on multiple groups of mapping data; and outputting M groups of enhancement parameters according to the image enhancement mapping model, enhancing the M classification results with the M groups of enhancement parameters, and outputting an image enhancement result.
In one embodiment, further comprising: analyzing the image enhancement result to obtain an image enhancement index; if the image enhancement index is smaller than a preset image enhancement index, generating an enhancement iteration network layer, wherein the enhancement iteration network layer is embedded in the image enhancement mapping model and is used for optimizing the image enhancement mapping model; and outputting a multi-layer image enhancement result according to the optimized image enhancement model.
In one embodiment, further comprising: acquiring a difference value index of the image enhancement index and the preset image enhancement index, taking the difference value index as an iteration target, taking the enhancement parameters as input variables, and building the enhancement iteration network layer; and taking minimization of the iteration target as the convergence condition of the enhancement iteration network layer, and outputting a multi-layer image enhancement result.
In one embodiment, further comprising: outputting the iteration order of the image enhancement mapping model according to the enhancement iteration network layer, wherein the iteration order is the same as the number of layers in the multi-layer image enhancement result; configuring the iteration order of the image enhancement mapping model according to the preset time complexity of the image enhancement mapping model, and outputting a preset iteration order; and when the iteration order is larger than the preset iteration order, iterating with the preset iteration order.
In one embodiment, further comprising: building a three-layer fully-connected neural network, wherein the neural network establishes a mapping relation between an input layer and an output layer, and training multiple groups of mapping data with the neural network so as to obtain the image enhancement mapping model; the multiple groups of mapping data comprise a display characteristic training data set, an image enhancement parameter training set, a display characteristic test data set and an image enhancement parameter test set, and each image enhancement parameter comprises image definition, edge pixels, a gray scale comparison value and a noise index.
In one embodiment, further comprising: reading the target images in the classification results corresponding to each group of enhancement parameters in the M groups of enhancement parameters; and obtaining the pixel row and column numbers of the target image, traversing each pixel of the target image through the pixel row and column numbers, obtaining the pixels to be enhanced of the target image, performing image enhancement processing on the pixels to be enhanced of the target image to obtain the image enhancement result of the target image, and so on, to obtain M image enhancement results of the M classification results.
In one embodiment, further comprising: performing feature singleness image screening on each of the M classification results to obtain a secondary screening result of each classification result, wherein the secondary screening result comprises the images identified as having feature singleness; and taking the secondary screening result as the target image in the corresponding classification result, and carrying out gray processing on the target image.
An image detail enhancement display system for target area detection, comprising:
the endoscope imaging acquisition module is used for carrying out endoscope imaging acquisition on a target area according to the first imaging equipment and outputting imaging video information;
the video frame extraction module is used for extracting video frames of the imaging video information and outputting N effective image frames, wherein N is a positive integer greater than 0 and is less than or equal to the total video frame number of the imaging video information;
the effective image frame classification module is used for classifying the N effective image frames and obtaining M classification results of the image frames;
the image feature analysis module is used for outputting image display features identifying each type of image by carrying out feature analysis on each type of image in the M classification results of the image frames, wherein the image display features comprise a first display feature, a second display feature, ..., and an Mth display feature;
the image enhancement mapping model generation module is used for generating an image enhancement mapping model according to the first display feature, the second display feature, ..., and the Mth display feature, wherein the image enhancement mapping model is a mapping model obtained through training on multiple groups of mapping data;
and the image enhancement result output module is used for outputting M groups of enhancement parameters according to the image enhancement mapping model, enhancing the M classification results by the M groups of enhancement parameters and outputting an image enhancement result.
The image detail enhancement display method and system for target area detection can solve the problem of low image recognition efficiency and accuracy caused by the coarse granularity of traditional image enhancement methods. Endoscopic imaging video of the target area is acquired and N effective image frames are extracted; the N effective image frames are classified according to the target demand parts to obtain M classification results of the image frames; feature analysis is then carried out on each class in the M classification results to obtain the image display features of each class of images, the image display features comprising M display features; an image enhancement mapping model is constructed based on a neural network, the image display features are input into the image enhancement mapping model to obtain M groups of enhancement parameters, and finally the M classification results are enhanced according to the M groups of enhancement parameters to obtain an image enhancement result. The quality and precision of the image enhancement result can be improved, thereby improving the efficiency and accuracy of image recognition.
The foregoing is only an overview of the technical solutions of the present application. In order that the technical means of the present application may be understood more clearly and implemented according to the content of the specification, and in order to make the above and other objects, features and advantages of the present application more apparent, a detailed description of the present application is given below.
Drawings
Fig. 1 is a schematic flow chart of a method for enhancing and displaying image details of target region detection;
FIG. 2 is a schematic flow chart of obtaining an image enhancement mapping model in an image detail enhancement display method for target region detection;
FIG. 3 is a schematic flow chart of outputting a multi-layer image enhancement result in an image detail enhancement display method for target region detection;
fig. 4 is a schematic structural diagram of an image detail enhancement display system for target area detection.
Reference numerals illustrate: the system comprises an endoscope imaging acquisition module 1, a video frame extraction module 2, an effective image frame classification module 3, an image feature analysis module 4, an image enhancement mapping model generation module 5 and an image enhancement result output module 6.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
As shown in fig. 1, the present application provides a method for enhancing and displaying image details of target area detection, the method being applied to an endoscopic imaging recognition system, the endoscopic imaging recognition system being communicatively connected to a first imaging device, the method comprising:
step S100: performing endoscopic imaging acquisition on a target area according to the first imaging equipment, and outputting imaging video information;
step S200: extracting video frames from the imaging video information, and outputting N effective image frames, wherein N is a positive integer greater than 0 and is less than or equal to the total video frame number of the imaging video information;
specifically, the method provided by the application is used for carrying out enhanced display on the image details detected by the target area, and the specific implementation method is applied to an endoscope imaging recognition system. The endoscope imaging recognition system is an endoscope image recognition system integrated with an endoscope, a display system and a computer workstation, the first imaging equipment is an endoscope of various models, and the first imaging equipment transmits the collected endoscope influence to the endoscope imaging recognition system through a signal transmission module.
Endoscopic imaging acquisition is performed on the target area through the first imaging device, where the target area refers to the area to be examined, for example the stomach or the esophagus, and endoscopic imaging video information is obtained. Effective video frames are then extracted from the imaging video information; the effective video frames cover the detection parts of interest, for example, in stomach examination with a gastroscope, abnormal image information of the stomach area can be taken as effective video frames. N effective image frames are obtained, where N is a positive integer greater than 0 and less than or equal to the total video frame number of the imaging video information; an effective image frame is an image that still has practical value after image filtering. Extracting video frames from the imaging video information avoids interference from valueless images, improves the accuracy of image acquisition, and thus indirectly improves the efficiency and accuracy of image recognition.
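The extraction step can be illustrated with a short sketch. This is a minimal illustration only, assuming OpenCV is available; the validity criterion used here (a variance-of-Laplacian sharpness threshold) is an assumed placeholder, since the text does not fix a specific frame-filtering rule.

```python
# Illustrative sketch only: extract "effective" frames from an endoscopic video.
# The validity test (sharpness threshold) is an assumed placeholder criterion.
import cv2

def extract_effective_frames(video_path, sharpness_thresh=100.0):
    cap = cv2.VideoCapture(video_path)
    effective = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Variance of the Laplacian as a crude sharpness / usefulness score.
        if cv2.Laplacian(gray, cv2.CV_64F).var() >= sharpness_thresh:
            effective.append(frame)
    cap.release()
    return effective  # N effective image frames, N <= total frame count
```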
Step S300: classifying the N effective image frames to obtain M classification results of the image frames;
step S400: outputting image display characteristics of each type of image through characteristic analysis of each type of image in M classification results of the image frames, wherein the image display characteristics comprise a first display characteristic and a second display characteristic … Mth display characteristic;
specifically, the N effective image frames are classified according to the target required parts, and M classification results of the image frames are obtained. Wherein the classification method can be custom set by those skilled in the art in combination with the actual situation. For example: for example, in gastroscopy, N effective image frames may be divided into a plurality of portions such as cardiac, fundus, angle, antrum, and the like. And then carrying out feature analysis on each type of image in M classification results of the image frame, wherein the feature analysis refers to extracting the features of each type of image, for example: thickening of gastric wall of gastric antrum, edema and the like. And carrying out image display feature identification on each type of image according to the image feature extraction result to obtain M image display features. The effective image frames are classified according to the target parts, and feature analysis is carried out according to the classification result, so that the image display features of each type of image are obtained, the image processing efficiency can be improved, and support is provided for the next step of image refinement enhancement.
Step S500: generating an image enhancement mapping model according to the first display characteristic, the second display characteristic, ..., and the Mth display characteristic, wherein the image enhancement mapping model is a mapping model obtained through training on multiple groups of mapping data;
as shown in fig. 2, in one embodiment, step S500 of the present application further includes:
step S510: building a three-layer fully-connected neural network, wherein the neural network is a mapping relation between an input layer and an output layer, and a plurality of groups of mapping data are trained by using the neural network so as to obtain the image enhancement mapping model;
step S520: the multi-set mapping data comprises a display characteristic training data set, an image enhancement parameter training set, a display characteristic test data set and an image enhancement parameter test set, and each image enhancement parameter comprises image definition, edge pixels, gray scale comparison values and a noisy index.
Specifically, an image enhancement mapping model is built based on a neural network. The image enhancement mapping model is a three-layer fully-connected neural network model that can be iteratively optimized, and it is obtained through supervised training on a training data set. The model comprises an input layer, a mapping matching layer and an output layer; its input data are display feature sets, its output data are image enhancement parameter sets, and the input layer and the output layer are in a many-to-many mapping relation. Multiple groups of mapping data are obtained; the mapping data can be obtained by calling historical image enhancement data in the endoscope imaging recognition system, or by performing data query based on big data technology. The multiple groups of mapping data include a display feature training data set, an image enhancement parameter training set, a display feature test data set and an image enhancement parameter test set, and each image enhancement parameter includes image definition, edge pixels, a gray scale comparison value and a noise index. The image enhancement mapping model is trained under supervision with the display feature training data set and the image enhancement parameter training set; when the model output tends to converge, the output of the image enhancement mapping model is tested with the display feature test data set and the image enhancement parameter test set. A test accuracy index is preset, which a person skilled in the art can set based on actual conditions, for example 95%. When the accuracy of the model output is greater than or equal to the test accuracy index, the trained image enhancement mapping model is obtained. Constructing the image enhancement mapping model based on a neural network improves the accuracy and efficiency of obtaining the image enhancement parameters.
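A minimal PyTorch sketch of such a three-layer fully-connected mapping model is given below. The layer sizes, the mean-squared-error loss, the optimizer and the training schedule are illustrative assumptions; only the overall structure (display features in, four enhancement parameters out) follows the description above.

```python
# Sketch of the three-layer fully-connected mapping model
# (display features -> enhancement parameters). Dimensions and the
# training setup are assumptions for illustration only.
import torch
import torch.nn as nn

class EnhancementMappingModel(nn.Module):
    def __init__(self, feat_dim=32, hidden_dim=64, param_dim=4):
        super().__init__()
        # param_dim = 4: image definition, edge pixels, gray scale comparison value, noise index
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, param_dim),
        )

    def forward(self, x):
        return self.net(x)

def train_mapping_model(model, feat_train, param_train, epochs=200, lr=1e-3):
    # Supervised training: display feature training set -> enhancement parameter training set.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(feat_train), param_train)
        loss.backward()
        opt.step()
    return model
```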
Step S600: and outputting M groups of enhancement parameters according to the image enhancement mapping model, enhancing the M classification results by using the M groups of enhancement parameters, and outputting an image enhancement result.
In one embodiment, step S600 of the present application further includes:
step S610: reading target images in classification results corresponding to each group of enhancement parameters in the M groups of enhancement parameters;
in one embodiment, step S610 of the present application further includes:
step S611: performing feature singleness image screening on each of the M classification results to obtain a secondary screening result of each classification result, wherein the secondary screening result comprises an image for identifying feature singleness;
step S612: and taking the secondary screening result as a target image in the corresponding classification result, and carrying out gray processing on the target image.
Specifically, the first display feature, the second display feature, ..., and the Mth display feature are respectively input into the image enhancement mapping model to obtain M groups of enhancement parameters. Feature singleness image screening is then performed on each of the M classification results; feature singleness image screening refers to fusing images with the same features within a classification result, i.e., using one image to represent several images sharing the same features. Performing feature singleness image screening on each of the M classification results reduces the number of images to be enhanced and improves the efficiency of image enhancement. A secondary screening result is obtained for each classification result, wherein the secondary screening result includes the images identified as having feature singleness. The secondary screening result is taken as the target images in the corresponding classification result, and the target images are then subjected to gray processing; gray processing refers to converting a color image into a gray-scale image and is a necessary step before image enhancement.
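The screening and gray-scale step might look roughly as follows. Using histogram correlation to decide that two frames share the same features is an assumed stand-in for the fusion criterion; only the overall flow (drop near-duplicates, keep one representative, convert to gray scale) reflects the description above.

```python
# Sketch of the secondary screening (feature singleness) and gray processing step.
# The histogram-correlation similarity test is an assumed placeholder criterion.
import cv2

def screen_and_grayscale(frames, sim_thresh=0.98):
    kept, kept_hists = [], []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        hist = cv2.normalize(hist, hist)
        # Keep the frame only if it is not a near-duplicate of an already kept one.
        if all(cv2.compareHist(h, hist, cv2.HISTCMP_CORREL) < sim_thresh
               for h in kept_hists):
            kept.append(gray)            # target image, already gray-scaled
            kept_hists.append(hist)
    return kept
```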
Step S620: obtaining the pixel row and column numbers of the target image, traversing each pixel of the target image through the pixel row and column numbers, obtaining the pixels to be enhanced of the target image, performing image enhancement processing on the pixels to be enhanced of the target image to obtain the image enhancement result of the target image, and so on, to obtain M image enhancement results of the M classification results.
Specifically, the pixel row and column numbers of the target image are obtained; the pixel row and column numbers refer to the pixel row-column matrix of the target image. Each pixel of the target image is traversed according to the pixel row and column numbers to obtain the pixels to be enhanced of the target image, and the pixels to be enhanced are then processed with an existing image enhancement method, such as gray-level transformation enhancement, histogram enhancement or image smoothing; a person skilled in the art can select a suitable method. The image enhancement result of each target image is obtained in this way, and the target images in the M classification results are enhanced in turn to obtain M image enhancement results. This addresses the low image recognition efficiency and accuracy caused by the coarse granularity of traditional image enhancement methods, and improves the quality and precision of the image enhancement result, thereby improving the efficiency and accuracy of image recognition.
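A minimal sketch of the per-class enhancement step is shown below, using histogram equalization as one example of the enhancement methods mentioned above; in practice the method and its settings would be configured per class from the M groups of enhancement parameters.

```python
# Sketch: enhance the target images of each classification result.
# Histogram equalization stands in for whichever enhancement method is chosen;
# the M groups of enhancement parameters would configure this choice per class.
import cv2

def enhance_classification_results(targets_by_class):
    """targets_by_class: dict mapping class label -> list of gray-scale images."""
    enhanced = {}
    for label, images in targets_by_class.items():
        enhanced[label] = [cv2.equalizeHist(img) for img in images]
    return enhanced   # M image enhancement results
```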
As shown in fig. 3, in one embodiment, step S600 of the present application further includes:
step S630: analyzing the image enhancement result to obtain an image enhancement index;
step S640: if the image enhancement index is smaller than a preset image enhancement index, generating an enhancement iteration network layer, wherein the enhancement iteration network layer is embedded in the image enhancement mapping model and is used for optimizing the image enhancement mapping model;
in one embodiment, step S640 of the present application further includes:
step S641: acquiring a difference value index of the image enhancement index and the preset image enhancement index, taking the difference value index as an iteration target, taking the enhancement parameter as an input variable, and building the enhancement iteration network layer;
specifically, image quality comparison analysis is performed according to the target image and the image enhancement result, standardized evaluation is performed on the target image quality and the image quality of the image enhancement result, the image quality is represented by an index, and the image index of the image enhancement result is subtracted from the image index of the target image to obtain an image enhancement index. And presetting an image enhancement index, wherein a person skilled in the art can customize the preset image enhancement index based on the image quality requirement, when the image enhancement index is smaller than the preset image enhancement index, the image enhancement index represents that the image enhancement result does not meet the requirement, subtracting the image enhancement index from the preset image enhancement index to obtain a difference index, and taking the difference index as an iteration target, wherein the iteration target refers to the optimization direction of the enhancement iteration network layer, and the enhancement parameter is taken as an input variable to construct the enhancement iteration network layer. The enhancement iteration network layer is embedded in the image enhancement mapping model and used for optimizing the image enhancement mapping model.
Step S642: taking minimization of the iteration target as the convergence condition of the enhancement iteration network layer, and outputting a multi-layer image enhancement result.
In one embodiment, step S642 of the present application further comprises:
step S6421: outputting the iteration order of the image enhancement mapping model according to the enhancement iteration network layer, wherein the iteration order is the same as the layer number in the multi-layer image enhancement result;
step S6422: configuring the iteration orders of the image enhancement mapping model according to the preset time complexity of the image enhancement mapping model, and outputting the preset iteration orders;
step S6423: and when the iteration order is larger than the preset iteration order, iterating with the preset iteration order.
Step S650: and outputting a multi-layer image enhancement result according to the optimized image enhancement model.
Specifically, the convergence goal of the enhancement iteration network layer is to minimize the difference index. According to the enhancement iteration network layer, the iteration order of the image enhancement mapping model is obtained; the iteration order refers to the number of iterations of the image enhancement mapping model and is the same as the number of layers in the multi-layer image enhancement result. The preset time complexity is used to limit the iteration time of the image enhancement mapping model and can be set by a person skilled in the art based on actual conditions, for example 2 minutes. The iteration order of the image enhancement mapping model is configured according to the preset time complexity to obtain the preset iteration order. The iteration order is compared with the preset iteration order, and when the iteration order is larger than the preset iteration order, iteration proceeds with the preset iteration order; by setting the preset iteration order, the model iteration time can be set according to actual conditions, which improves the adaptability of the model. Finally, image enhancement is performed on the target image according to the optimized image enhancement model to obtain the multi-layer image enhancement result.
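The capping of the iteration order by the preset time complexity can be sketched as follows; the per-iteration time estimate is an assumption used only to derive a preset order from a time budget.

```python
# Sketch: if the order required by the enhancement iteration network layer exceeds
# the order allowed by the preset time budget, the preset order is used instead.
def effective_iteration_order(required_order, time_budget_s, est_seconds_per_iter):
    preset_order = max(1, int(time_budget_s // est_seconds_per_iter))
    return min(required_order, preset_order)

# e.g. a 2-minute budget at an assumed ~15 s per iteration allows at most 8 iterations
print(effective_iteration_order(required_order=12, time_budget_s=120,
                                est_seconds_per_iter=15))   # -> 8
```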
In one embodiment, as shown in FIG. 4, there is provided an image detail enhancement display system for target region detection, comprising: an endoscope imaging acquisition module 1, a video frame extraction module 2, an effective image frame classification module 3, an image feature analysis module 4, an image enhancement mapping model generation module 5, an image enhancement result output module 6, wherein:
the endoscope imaging acquisition module 1 is used for carrying out endoscope imaging acquisition on a target area according to the first imaging equipment and outputting imaging video information;
the video frame extraction module 2 is used for extracting video frames of the imaging video information and outputting N effective image frames, wherein N is a positive integer greater than 0 and is less than or equal to the total video frame number of the imaging video information;
the effective image frame classification module 3, wherein the effective image frame classification module 3 is used for classifying the N effective image frames to obtain M classification results of the image frames;
the image feature analysis module 4 is configured to output image display features identifying each type of image by performing feature analysis on each type of image in the M classification results of the image frames, where the image display features comprise a first display feature, a second display feature, ..., and an Mth display feature;
the image enhancement mapping model generating module 5 is configured to generate an image enhancement mapping model according to the first display feature, the second display feature, ..., and the Mth display feature, where the image enhancement mapping model is a mapping model obtained through training on multiple groups of mapping data;
the image enhancement result output module 6 is configured to output M groups of enhancement parameters according to the image enhancement mapping model, enhance the M classification results with the M groups of enhancement parameters, and output an image enhancement result.
In one embodiment, the system further comprises:
the image enhancement index obtaining module is used for analyzing the image enhancement result to obtain an image enhancement index;
the enhanced iterative network layer generation module is used for generating an enhanced iterative network layer if the image enhancement index is smaller than a preset image enhancement index, wherein the enhanced iterative network layer is embedded in the image enhancement mapping model and is used for optimizing the image enhancement mapping model;
and the multi-layer image enhancement result output module is used for outputting a multi-layer image enhancement result according to the optimized image enhancement model.
In one embodiment, the system further comprises:
the enhancement iteration network layer building module is used for obtaining a difference value index of the image enhancement index and the preset image enhancement index, taking the difference value index as an iteration target, taking the enhancement parameter as an input variable, and building the enhancement iteration network layer;
and the multi-layer image enhancement result output module is used for outputting a multi-layer image enhancement result, taking minimization of the iteration target as the convergence condition of the enhancement iteration network layer.
In one embodiment, the system further comprises:
the iteration order output module is used for outputting the iteration order of the image enhancement mapping model according to the enhancement iteration network layer, wherein the iteration order is the same as the layer number in the multi-layer image enhancement result;
the preset iteration order output module is used for configuring the iteration order of the image enhancement mapping model according to the preset time complexity of the image enhancement mapping model and outputting the preset iteration order;
and the iteration judgment module is used for carrying out iteration according to the preset iteration order when the iteration order is larger than the preset iteration order.
In one embodiment, the system further comprises:
the image enhancement mapping model acquisition module is used for building a three-layer fully-connected neural network, wherein the neural network is a mapping relation between an input layer and an output layer, and a plurality of groups of mapping data are trained by using the neural network so as to acquire the image enhancement mapping model;
the mapping data summarization module is used for summarizing the mapping data, wherein the plurality of groups of mapping data comprise a display characteristic training data set, an image enhancement parameter training set, a display characteristic test data set and an image enhancement parameter test set, and each image enhancement parameter comprises image definition, edge pixels, gray scale comparison values and noise indexes.
In one embodiment, the system further comprises:
the target image reading module is used for reading target images in the classification results corresponding to each group of enhancement parameters in the M groups of enhancement parameters;
the image enhancement processing module is used for acquiring the pixel row and column numbers of the target image, traversing each pixel of the target image through the pixel row and column numbers, acquiring the pixels to be enhanced of the target image, carrying out image enhancement processing on the pixels to be enhanced of the target image to obtain the image enhancement result of the target image, and so on, to obtain M image enhancement results of the M classification results.
In one embodiment, the system further comprises:
the secondary screening result obtaining module is used for carrying out characteristic singleness image screening on each of the M classification results to obtain a secondary screening result of each classification result, wherein the secondary screening result comprises the images identified as having characteristic singleness;
and the gray processing module is used for taking the secondary screening result as a target image in the corresponding classification result and carrying out gray processing on the target image.
In summary, the present application provides an image detail enhancement display method and system for target area detection, which have the following technical effects:
1. The problem of low image recognition efficiency and accuracy caused by the coarse granularity of traditional image enhancement methods is solved, and the quality and precision of the image enhancement result are improved, thereby improving the efficiency and accuracy of image recognition.
2. Video frame extraction from the imaging video information avoids interference from valueless images and improves the accuracy of image acquisition, and constructing the image enhancement mapping model based on a neural network improves the accuracy and efficiency of obtaining the image enhancement parameters.
3. By setting the preset iteration order, the model iteration time can be set according to actual conditions, and the adaptation degree of the model is improved.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features of the above embodiments are described; however, as long as there is no contradiction between the combinations of these technical features, they should be considered to fall within the scope of this description.
The above examples merely represent a few embodiments of the present application; although they are described in detail, they are not to be construed as limiting the scope of the invention. It should be noted that various modifications and improvements can be made by those skilled in the art without departing from the spirit of the present application, and such modifications and improvements fall within the protection scope of the present application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.

Claims (8)

1. A method of image detail enhancement display of target area detection, the method being applied to an endoscopic imaging recognition system communicatively coupled to a first imaging device, the method comprising:
performing endoscopic imaging acquisition on a target area according to the first imaging equipment, and outputting imaging video information;
extracting video frames from the imaging video information, and outputting N effective image frames, wherein N is a positive integer greater than 0 and is less than or equal to the total video frame number of the imaging video information;
classifying the N effective image frames to obtain M classification results of the image frames;
outputting image display characteristics of each type of image through characteristic analysis of each type of image in the M classification results of the image frames, wherein the image display characteristics comprise a first display characteristic, a second display characteristic, ..., and an Mth display characteristic;
generating an image enhancement mapping model according to the first display characteristic, the second display characteristic, ..., and the Mth display characteristic, wherein the image enhancement mapping model is a mapping model obtained through training on multiple groups of mapping data;
and outputting M groups of enhancement parameters according to the image enhancement mapping model, enhancing the M classification results by using the M groups of enhancement parameters, and outputting an image enhancement result.
2. The method of claim 1, wherein the method further comprises:
analyzing the image enhancement result to obtain an image enhancement index;
if the image enhancement index is smaller than a preset image enhancement index, generating an enhancement iteration network layer, wherein the enhancement iteration network layer is embedded in the image enhancement mapping model and is used for optimizing the image enhancement mapping model;
and outputting a multi-layer image enhancement result according to the optimized image enhancement model.
3. The method of claim 2, wherein the method further comprises:
acquiring a difference value index of the image enhancement index and the preset image enhancement index, taking the difference value index as an iteration target, taking the enhancement parameter as an input variable, and building the enhancement iteration network layer;
and taking minimization of the iteration target as the convergence condition of the enhancement iteration network layer, and outputting a multi-layer image enhancement result.
4. A method as claimed in claim 3, wherein the method further comprises:
outputting the iteration order of the image enhancement mapping model according to the enhancement iteration network layer, wherein the iteration order is the same as the number of layers in the multi-layer image enhancement result;
configuring the iteration order of the image enhancement mapping model according to the preset time complexity of the image enhancement mapping model, and outputting a preset iteration order;
and when the iteration order is larger than the preset iteration order, iterating with the preset iteration order.
5. The method of claim 1, wherein the method further comprises:
building a three-layer fully-connected neural network, wherein the neural network establishes a mapping relation between an input layer and an output layer, and training multiple groups of mapping data with the neural network so as to obtain the image enhancement mapping model;
the multiple groups of mapping data comprise a display characteristic training data set, an image enhancement parameter training set, a display characteristic test data set and an image enhancement parameter test set, and each image enhancement parameter comprises image definition, edge pixels, a gray scale comparison value and a noise index.
6. The method of claim 5, wherein the M classification results are enhanced with the M sets of enhancement parameters, and wherein the image enhancement results are output, the method further comprising:
reading target images in classification results corresponding to each group of enhancement parameters in the M groups of enhancement parameters;
and obtaining the pixel row and column numbers of the target image, traversing each pixel of the target image through the pixel row and column numbers, obtaining the pixels to be enhanced of the target image, performing image enhancement processing on the pixels to be enhanced of the target image to obtain the image enhancement result of the target image, and so on, to obtain M image enhancement results of the M classification results.
7. The method of claim 6, wherein the target image in the classification result corresponding to each set of enhancement parameters is read, the method further comprising:
performing feature singleness image screening on each of the M classification results to obtain a secondary screening result of each classification result, wherein the secondary screening result comprises the images identified as having feature singleness;
and taking the secondary screening result as a target image in the corresponding classification result, and carrying out gray processing on the target image.
8. An image detail enhancement display system for target area detection, the system comprising:
the endoscope imaging acquisition module is used for carrying out endoscope imaging acquisition on a target area according to the first imaging equipment and outputting imaging video information;
the video frame extraction module is used for extracting video frames of the imaging video information and outputting N effective image frames, wherein N is a positive integer greater than 0 and is less than or equal to the total video frame number of the imaging video information;
the effective image frame classification module is used for classifying the N effective image frames and obtaining M classification results of the image frames;
the image feature analysis module is used for outputting image display features identifying each type of image by carrying out feature analysis on each type of image in the M classification results of the image frames, wherein the image display features comprise a first display feature, a second display feature, ..., and an Mth display feature;
the image enhancement mapping model generation module is used for generating an image enhancement mapping model according to the first display feature, the second display feature, ..., and the Mth display feature, wherein the image enhancement mapping model is a mapping model obtained through training on multiple groups of mapping data;
and the image enhancement result output module is used for outputting M groups of enhancement parameters according to the image enhancement mapping model, enhancing the M classification results by the M groups of enhancement parameters and outputting an image enhancement result.
CN202310419934.8A 2023-04-19 2023-04-19 Image detail enhancement display method and system for target area detection Pending CN116433528A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310419934.8A CN116433528A (en) 2023-04-19 2023-04-19 Image detail enhancement display method and system for target area detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310419934.8A CN116433528A (en) 2023-04-19 2023-04-19 Image detail enhancement display method and system for target area detection

Publications (1)

Publication Number Publication Date
CN116433528A true CN116433528A (en) 2023-07-14

Family

ID=87082993

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310419934.8A Pending CN116433528A (en) 2023-04-19 2023-04-19 Image detail enhancement display method and system for target area detection

Country Status (1)

Country Link
CN (1) CN116433528A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116758069A (en) * 2023-08-17 2023-09-15 济南宝林信息技术有限公司 Medical image enhancement method for intestinal endoscope
CN116758069B (en) * 2023-08-17 2023-11-14 济南宝林信息技术有限公司 Medical image enhancement method for intestinal endoscope


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination