CN113192067A - Intelligent prediction method, device, equipment and medium based on image detection - Google Patents
Intelligent prediction method, device, equipment and medium based on image detection
- Publication number
- CN113192067A (application number CN202110600178.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- pixel
- prediction
- target area
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/0012—Biomedical image inspection (G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
- G06N3/045—Combinations of networks (G06N3/02 Neural networks; G06N3/04 Architecture, e.g. interconnection topology)
- G06T5/70—Denoising; Smoothing (G06T5/00 Image enhancement or restoration)
- G06T7/10—Segmentation; Edge detection (G06T7/00 Image analysis)
- G06T7/13—Edge detection
- G06T2207/10101—Optical tomography; Optical coherence tomography [OCT] (G06T2207/10 Image acquisition modality; G06T2207/10072 Tomographic images)
- G06T2207/20081—Training; Learning (G06T2207/20 Special algorithmic details)
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30041—Eye; Retina; Ophthalmic (G06T2207/30 Subject of image; G06T2207/30004 Biomedical image processing)
Abstract
The invention discloses an intelligent prediction method, device, equipment and medium based on image detection. The method comprises: performing pixel segmentation on a first image input by a user to obtain an image pixel segmentation result and, from it, the corresponding target area image; denoising the target area image to obtain a target area image with noise points removed and calculating the corresponding central concave point; obtaining basic parameter information of the denoised target area image according to the central concave point; and inputting the basic parameter information into a prediction model to obtain the corresponding prediction result. The invention belongs to the technical field of image processing. The first image can be processed intelligently to obtain the basic parameter information, and a corresponding prediction result can be obtained from that information, which improves the efficiency of image analysis and greatly improves the accuracy of the resulting prediction.
Description
Technical Field
The invention relates to the technical field of image processing, belongs to the application scenario of intelligent prediction based on image detection technology in a smart city, and particularly relates to an intelligent prediction method, device, equipment and medium based on image detection.
Background
Chronic diseases have long threatened human health. Accurate diagnosis in the early stage of a chronic disease can greatly improve the probability of recovery and reduce the patient's suffering. Some diseases can be imaged non-invasively, and analysis of the local image of the lesion yields results that physicians can use as a reference when making a diagnosis. However, the inventor found that prior art methods rely on observation and manual measurement to analyze the local image of the lesion. For example, when the visual function of an examinee is analyzed from a retinal image to determine whether it is abnormal, the conventional approach obtains the analysis result only by observing and manually measuring changes in the retinal image. This limits the efficiency and accuracy of image analysis, and an accurate analysis result is difficult to obtain by manual analysis. The prior art methods therefore suffer from low accuracy in image analysis.
Disclosure of Invention
The embodiment of the invention provides an intelligent prediction method, device, equipment and medium based on image detection, and aims to solve the problem of low accuracy of image analysis in the prior art.
In a first aspect, an embodiment of the present invention provides an intelligent prediction method based on image detection, including:
if a first image input by a user is received, performing pixel segmentation on the first image according to a preset image segmentation model to obtain an image pixel segmentation result;
acquiring a target area image corresponding to the image pixel segmentation result in the first image according to the image pixel segmentation result;
denoising the target area image to obtain a target area image with noise points removed;
acquiring a central concave point corresponding to the target region image without the noise point according to a preset central concave point calculation model;
acquiring basic parameter information of the target region image without the noise points according to the central concave point;
and inputting the basic parameter information into a preset prediction model for prediction to obtain a prediction result corresponding to the first image.
In a second aspect, an embodiment of the present invention provides an intelligent prediction apparatus based on image detection, including:
the image segmentation processing unit is used for carrying out pixel segmentation on a first image according to a preset image segmentation model to obtain an image pixel segmentation result if the first image input by a user is received;
a target area image obtaining unit, configured to obtain a target area image corresponding to the image pixel segmentation result in the first image according to the image pixel segmentation result;
the de-noising processing unit is used for de-noising the target area image to obtain a target area image with noise points removed;
the central concave point obtaining unit is used for obtaining a central concave point corresponding to the target area image without the noise point according to a preset central concave point calculation model;
a basic parameter information obtaining unit, configured to obtain basic parameter information of the target region image without the noise point according to the center concave point;
and the prediction result acquisition unit is used for inputting the basic parameter information into a preset prediction model for prediction to obtain a prediction result corresponding to the first image.
In a third aspect, an embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, it implements the intelligent prediction method based on image detection according to the first aspect.
In a fourth aspect, the present invention further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, causes the processor to execute the intelligent prediction method based on image detection according to the first aspect.
The embodiment of the invention provides an intelligent prediction method, device and equipment based on image detection, and a computer-readable storage medium. The method performs pixel segmentation on a first image input by a user to obtain an image pixel segmentation result and the corresponding target area image, denoises the target area image to obtain a target area image with noise points removed, calculates the corresponding central concave point, obtains basic parameter information of the denoised target area image according to the central concave point, and inputs the basic parameter information into a prediction model to obtain the corresponding prediction result. In this way, the first image can be processed intelligently to obtain the basic parameter information, and a corresponding prediction result can be obtained from that information, which improves the efficiency of image analysis and greatly improves the accuracy of the resulting prediction.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an intelligent prediction method based on image detection according to an embodiment of the present invention;
FIG. 2 is a sub-flowchart of an intelligent prediction method based on image detection according to an embodiment of the present invention;
FIG. 3 is a schematic view of another sub-flow chart of an intelligent prediction method based on image detection according to an embodiment of the present invention;
FIG. 4 is a schematic view of another sub-flow chart of an intelligent prediction method based on image detection according to an embodiment of the present invention;
FIG. 5 is a schematic view of another sub-flow chart of an intelligent prediction method based on image detection according to an embodiment of the present invention;
FIG. 6 is a schematic view of another sub-flow chart of an intelligent prediction method based on image detection according to an embodiment of the present invention;
FIG. 7 is a schematic view of another sub-flow chart of an intelligent prediction method based on image detection according to an embodiment of the present invention;
fig. 8 is a schematic effect diagram of an intelligent prediction method based on image detection according to an embodiment of the present invention;
FIG. 9 is a schematic block diagram of an intelligent prediction device based on image detection according to an embodiment of the present invention;
FIG. 10 is a schematic block diagram of a computer device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1, fig. 1 is a schematic flowchart of an intelligent prediction method based on image detection according to an embodiment of the present invention. The method is applied to a user terminal or a management server and is executed by application software installed there. The user terminal is a terminal device, such as a desktop computer, notebook computer, tablet computer or mobile phone, that can receive a first image input by a user and perform intelligent prediction based on image detection; the management server is a server, such as one operated by an enterprise, a medical institution or a government department, that can receive a first image sent from a user's terminal and perform intelligent prediction based on image detection. As shown in fig. 1, the method includes steps S110 to S160.
S110, if a first image input by a user is received, performing pixel segmentation on the first image according to a preset image segmentation model to obtain an image pixel segmentation result.
If a first image input by a user is received, pixel segmentation is performed on the first image according to a preset image segmentation model to obtain an image pixel segmentation result. The image segmentation model comprises convolution processing rules and a pixel classification neural network. The user inputs a first image, i.e. the image on which intelligent prediction is required; for example, the first image may be a scan of the macular region of an examinee's fundus retina acquired by a fundus OCT scanning device, and it may be a color image or a gray-scale image. The image segmentation model performs the pixel segmentation of the first image: the convolution processing rule produces a pixel convolution feature for each pixel, and the pixel classification neural network classifies those features to obtain the classification information of each pixel.
In one embodiment, as shown in FIG. 2, step S110 includes sub-steps S111 and S112.
S111, performing convolution processing on the first image according to the convolution processing rule to obtain a pixel convolution feature corresponding to each pixel in the first image.
The first image is convolved according to the convolution processing rule to obtain the pixel convolution feature of each pixel. Specifically, successive convolution operations with 3 × 3 convolution kernels and a stride of 1 may be applied to the first image, each successive convolution operation consisting of three convolution layers. Between two successive convolution operations, features are extracted and integrated by pooling or upsampling: the pooling operation may be a 2 × 2 max pooling operation, and the upsampling operation may be a 2 × 2 deconvolution, finally yielding the pixel convolution feature of each pixel in the first image. Each pixel convolution feature corresponds to one pixel of the first image and may be a multi-dimensional feature vector, with one feature value per dimension and each feature value in the range [0,1]; the feature values of all dimensions together form the pixel convolution feature. For example, each pixel convolution feature may be a 1 × 128-dimensional feature vector.
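As an illustration of the structure just described, the sketch below shows one plausible PyTorch encoder-decoder that produces a 128-dimensional feature per pixel from 3 × 3, stride-1 convolutions in blocks of three, 2 × 2 max pooling and 2 × 2 deconvolution. The class name, channel widths, depth and activations are assumptions; the patent fixes only the kernel size, stride, pooling, upsampling and the 128-dimensional output.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # One "successive convolution operation": three 3x3, stride-1 conv layers.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=1, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1), nn.ReLU(inplace=True),
    )

class PixelFeatureNet(nn.Module):
    """Produces a 128-dimensional convolution feature for every pixel."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 64)
        self.enc2 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)                           # 2x2 max pooling
        self.up = nn.ConvTranspose2d(128, 128, 2, stride=2)   # 2x2 deconvolution
        self.dec = conv_block(128 + 64, 128)

    def forward(self, x):                 # x: (B, 1, H, W) gray-scale image
        f1 = self.enc1(x)                 # (B, 64, H, W)
        f2 = self.enc2(self.pool(f1))     # (B, 128, H/2, W/2)
        up = self.up(f2)                  # (B, 128, H, W)
        return self.dec(torch.cat([up, f1], dim=1))  # 128-d feature per pixel
```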
S112, classifying each pixel convolution feature according to the pixel classification neural network, and taking the classification information of each pixel convolution feature obtained through classification as the image pixel segmentation result.
Specifically, the pixel classification neural network may consist of an input layer, several intermediate layers and an output layer, with the input layer and intermediate layers, adjacent intermediate layers, and intermediate layers and output nodes all connected by association formulas; each association formula may be expressed as a linear function with its own parameter values. Each input node of the input layer corresponds to one dimension of the pixel convolution feature, so the number of input nodes equals the dimensionality of the feature vector; for example, the input layer may contain 128 input nodes corresponding to a 1 × 128-dimensional pixel convolution feature. Each output node corresponds to one pixel class; in this embodiment, two output nodes may be provided, the first corresponding to the class "valid pixel" and the second to the class "invalid pixel". When a pixel convolution feature is fed into the pixel classification neural network through the input layer, the output layer produces the matching degree, in the range [0,1], between the feature and each pixel class, and the pixel class with the highest matching degree can be taken as the classification information of that pixel convolution feature.
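A minimal sketch of such a per-pixel classifier follows. The patent specifies 128 input nodes, two output nodes (valid / invalid) and matching degrees in [0,1]; the hidden-layer width and the ReLU activations are assumptions, since the patent describes the inter-layer associations only as parameterized linear functions.

```python
import torch.nn as nn

class PixelClassifier(nn.Module):
    """128 input nodes -> intermediate layers -> 2 output nodes, giving the
    matching degree in [0, 1] for the classes 'valid pixel' / 'invalid pixel'."""
    def __init__(self, in_dim=128, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, feat):            # feat: (N, 128) pixel convolution features
        scores = self.net(feat)
        return scores.softmax(dim=-1)   # class with highest matching degree wins
```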
Before the pixel classification neural network is used, it can be trained with pre-stored training and test image sets. For example, 2000 fundus OCT images can be collected and annotated to construct the training and test image sets: the retinal region of each image is manually delineated (e.g. a mask is drawn over the retinal region, with a mask pixel value of 100), giving a retinal mask label for each of the 2000 OCT images. The labeled images are randomly split into a training image set and a test image set in a fixed ratio, for example 4:1 or 7:1. Each labeled image in a training set is processed with the convolution processing rule to obtain its pixel convolution features, and those features are classified by the initial pixel classification neural network to obtain classification information as that labeled image's pixel segmentation result. The pixel overlap ratio between the pixels classified as valid and the mask label of the labeled image is then computed, the overlap ratio is used as the loss function, and the network is trained once based on gradient descent; each training pass optimizes the parameter values of the linear functions in the network. The multiple labeled images in one training image set can iteratively train the network many times, and iterative training on multiple training image sets yields multiple trained candidate pixel classification neural networks. Each candidate network is evaluated on the test image set to obtain its test accuracy, and the candidate with the highest test accuracy is selected as the trained pixel classification neural network.
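The overlap-ratio loss described above can be sketched as a Dice-style criterion; treating the "pixel overlap ratio" as the Dice coefficient is an assumption, since the patent does not give the exact formula.

```python
import torch

def overlap_loss(pred_valid: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Dice-style loss: 1 minus the pixel overlap ratio between the pixels
    predicted as valid and the retinal mask label (both shaped (B, H, W))."""
    inter = (pred_valid * mask).sum()
    denom = (pred_valid.sum() + mask.sum()).clamp(min=1e-6)
    return 1.0 - 2.0 * inter / denom
```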
In an embodiment, the image segmentation model further includes a size transformation rule, and as shown in fig. 3, step S1101 is further included before step S111.
S1101, performing size transformation on the first image according to the size transformation rule to obtain a first image subjected to size transformation.
To improve processing efficiency, the first image can be resized before it is processed in detail. The size transformation rule is the specific rule for transforming the size of the first image, and the transformed first image meets the size requirement in that rule. For example, if the size of the first image input by the user is 1024 × 640, the areas above and below the image are filled to obtain an image of equal width and height with a resolution of 1024 × 1024, and that image is scaled by bilinear interpolation, as specified in the size transformation rule, to a resolution of 512 × 512; the resulting 512 × 512 image is the size-transformed first image. Because the size-transformed first image has a uniform size and contains fewer pixels, subsequent processing is more efficient.
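A sketch of this size transformation rule for the 1024 × 640 example; zero-valued padding split evenly between the top and bottom areas is an assumption.

```python
import cv2

def size_transform(img):
    """Pad the scan to equal width and height (e.g. 1024x640 -> 1024x1024),
    then scale to 512x512 by bilinear interpolation."""
    h, w = img.shape[:2]
    pad = max(w - h, 0)
    square = cv2.copyMakeBorder(img, pad // 2, pad - pad // 2, 0, 0,
                                cv2.BORDER_CONSTANT, value=0)
    return cv2.resize(square, (512, 512), interpolation=cv2.INTER_LINEAR)
```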
S120, acquiring a target area image corresponding to the image pixel segmentation result in the first image according to the image pixel segmentation result.
The target area image corresponding to the image pixel segmentation result is acquired from the first image according to the image pixel segmentation result. Specifically, the image pixel segmentation result contains the classification information of each pixel convolution feature, and the target area image of the first image can be obtained from that classification information.
In an embodiment, as shown in fig. 4, step S120 includes substeps S121 and S122.
S121, segmenting the first image according to the classification information of each pixel in the image pixel segmentation result to obtain a valid pixel area.
If the first image input by the user was not size-transformed, the classification information of each pixel convolution feature corresponds to a pixel of the original first image, each pixel being classified as valid or invalid; the original first image can then be segmented according to this classification information, and the region containing only valid pixels is extracted as the valid pixel area. If the first image was size-transformed, the classification information corresponds to the pixels of the size-transformed first image, which is segmented in the same way, and the region containing only valid pixels in the size-transformed first image is extracted as the valid pixel area.
S122, filling the pixel values of the valid pixel area to obtain a target area image.
The valid pixel area obtained above is filled with pixel values to obtain the corresponding target area image. Specifically, the pixel value of every pixel in the valid pixel area can be set to a specific gray value, so that the target area image is a gray-scale image; for example, the pixels of the valid pixel area are filled with the gray value 235 and all other pixels default to 0, yielding the corresponding target area image.
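A minimal sketch of this pixel-value filling step, with the gray value 235 from the example above:

```python
import numpy as np

def fill_target_area(shape, valid_mask, gray=235):
    """Fill the valid pixel area with the gray value; all other pixels
    default to 0, giving the gray-scale target area image."""
    target = np.zeros(shape, dtype=np.uint8)
    target[valid_mask] = gray
    return target
```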
S130, denoising the target area image to obtain a target area image with the noise points removed.
The target area image is denoised to obtain a target area image with the noise points removed. To further improve the accuracy of the image-based intelligent prediction, the target area image obtained above can be denoised.
In an embodiment, as shown in fig. 5, step S130 includes sub-steps S131, S132, and S133.
S131, acquiring the invalid pixel connected domains in the target area image.
S132, filtering invalid pixel connected domains with areas smaller than a preset area threshold from the target area image to obtain a filtered image.
Specifically, the invalid pixel connected domains in the target area image are obtained. An invalid pixel connected domain consists only of invalid pixels of the target area image: the invalid pixels are collected, and each maximal set of mutually connected invalid pixels forms one invalid pixel connected domain, so no two invalid pixel connected domains are connected to each other and each contains at least one invalid pixel. Whether the area of each invalid pixel connected domain is smaller than the preset area threshold is then judged, and the connected domains whose area is below the threshold are filtered out. For example, the area threshold may be set to 2000: an invalid pixel connected domain containing fewer than 2000 invalid pixels is judged smaller than the threshold and filtered out, while connected domains whose area is not smaller than the threshold are retained.
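The area filtering can be sketched with OpenCV connected-component statistics as below; refilling a filtered-out invalid domain with the valid gray value is one plausible reading of "filtering", since the patent does not state what replaces the removed domain.

```python
import cv2
import numpy as np

def filter_invalid_domains(target, area_threshold=2000, gray=235):
    """Filter invalid-pixel connected domains (value 0) whose area is below
    the threshold by absorbing them into the valid (gray-valued) region."""
    invalid = (target == 0).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(invalid, connectivity=8)
    out = target.copy()
    for i in range(1, n):                        # label 0 is the mask background
        if stats[i, cv2.CC_STAT_AREA] < area_threshold:
            out[labels == i] = gray
    return out
```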
S133, performing a pixel closing operation on the valid pixel connected domain in the filtered image to obtain a target area image without noise points.
A pixel closing operation is performed on the valid pixel connected domain of the filtered image, i.e. the region of the filtered image outside the remaining invalid pixel connected domains. The closing operation is a dilation followed by an erosion; its effect is to fill fine holes inside a region, connect adjacent image regions and smooth region boundaries. Applying the closing operation to the valid pixel connected domain of the target area image yields a denoised valid pixel connected domain; the resulting denoised target area image contains this connected domain, which is free of noise points and has a smooth boundary.
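A sketch of the closing operation using OpenCV morphology; the elliptical 15 × 15 structuring element is an assumption, as the patent does not specify a kernel.

```python
import cv2

def close_valid_domain(filtered, ksize=15):
    """Pixel closing (dilation then erosion) on the valid pixel connected
    domain: fills fine holes and smooths the region boundary."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    return cv2.morphologyEx(filtered, cv2.MORPH_CLOSE, kernel)
```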
S140, acquiring a central concave point corresponding to the target region image without the noise point according to a preset central concave point calculation model.
The central concave point of the denoised target area image is acquired according to a preset central concave point calculation model, which comprises a convex hull calculation rule and a distance value acquisition rule. The denoised target area image contains a denoised valid pixel connected domain; the convex hull contour line of that connected domain is computed using the convex hull calculation rule, several normal distance values are obtained from the convex hull contour line according to the distance value acquisition rule, and the central concave point is then determined.
In an embodiment, as shown in fig. 6, step S140 includes sub-steps S141, S142, S143, and S144.
S141, calculating the convex hull contour line of the denoised valid pixel connected domain in the target area image without noise points according to the convex hull calculation rule.
Several outer contour points of the denoised valid pixel connected domain are obtained in turn, and the convex hull contour line of the connected domain is computed from them with the convex hull calculation rule. The convex hull is a concept from computational geometry (graphics): in a real vector space V, for a given set X, the intersection S of all convex sets containing X is called the convex hull of X. The convex hull of X can be constructed from all points (x1, ..., xn) in X; in two-dimensional Euclidean space, the convex hull can be pictured as a rubber band stretched just around all the points of X. The convex hull calculation rule may be based on the Graham scan: find one point known to lie on the convex hull, then proceed counterclockwise from it, finding the hull points one by one; this is in effect a polar-angle sort, after which the sorted points are used to construct the convex hull contour line. Fig. 8 is a schematic effect diagram of the intelligent prediction method based on image detection according to the embodiment of the present invention: the white region in fig. 8 is the denoised valid pixel connected domain of the denoised target area image, and the light gray line around the white region is the convex hull contour line calculated from its outer contour points.
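A sketch of this step; OpenCV's contour extraction and convexHull stand in here for the manual outer-contour collection and Graham scan described above.

```python
import cv2
import numpy as np

def hull_and_contour(denoised):
    """Outer contour points of the denoised valid pixel connected domain and
    their convex hull contour line."""
    contours, _ = cv2.findContours((denoised > 0).astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)   # the largest valid domain
    hull = cv2.convexHull(contour)                 # Graham-scan-style hull
    return contour.reshape(-1, 2), hull.reshape(-1, 2)
```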
S142, determining the corresponding external contour line according to the edge pixel coordinate values of the denoised valid pixel connected domain.
The edge of the denoised valid pixel connected domain is its boundary with the invalid pixel connected domains of the target area image. Since the target area image contains the pixel coordinate value of every pixel, the edge pixel coordinate values of the denoised valid pixel connected domain can be obtained, and its external contour line determined from them.
S143, calculating, according to the distance value acquisition rule, the normal distance value between each point of the convex hull contour line and its intersection with the external contour line in the normal direction.
The convex hull contour line is made up of many points. For each point of the convex hull contour line, the intersection of its normal with the external contour line can be obtained, together with the normal distance value of that intersection. As shown in fig. 8, one point of the convex hull contour line is marked as point a; the normal of point a is the line perpendicular to the convex hull contour line through a. In fig. 8 this normal is the white straight line through a: its intersection with the convex hull contour line is a itself, its intersection with the external contour line is marked as point b, and the length of segment ab is the normal distance value of intersection b.
S144, determining a corresponding intersection point of the normal distance value with the maximum value in the external contour line as the central concave point.
The intersection point on the external contour line corresponding to the largest normal distance value is acquired and taken as the central concave point of the target area image. If only one of the normal distance values attains the maximum, exactly one central concave point is finally determined for the target area image.
As shown in fig. 8, the segment between point a and point b has the largest of all the normal distance values; since the corresponding intersection with the external contour line is b, point b is determined to be the central concave point (the foveal point) of the target area image.
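The normal-distance search of steps S141 to S144 can be sketched as below: points a are sampled along the hull edges returned by the previous sketch, and the intersection b with the external contour is approximated by integer-pixel marching along the normal. This brute-force approximation, and the helper names, are assumptions rather than the patent's exact geometry.

```python
import numpy as np

def foveal_point(hull_pts, region_mask):
    """Sample points a along the convex hull edges; from each, march pixel by
    pixel along the normal (both directions, since the inward side is not
    known) until the valid region, i.e. the external contour, is hit at point
    b. Return the point b with the largest normal distance |ab|."""
    h, w = region_mask.shape
    best_pt, best_dist = None, -1
    n = len(hull_pts)
    for i in range(n):
        p = hull_pts[i].astype(float)
        q = hull_pts[(i + 1) % n].astype(float)
        length = np.hypot(*(q - p))
        if length < 1:
            continue
        t = (q - p) / length                       # tangent along this hull edge
        for s in np.arange(0.0, length, 1.0):
            a = p + s * t                          # point a on the hull contour
            for sign in (1.0, -1.0):
                for step in range(1, max(h, w)):
                    x = int(round(a[0] - sign * step * t[1]))
                    y = int(round(a[1] + sign * step * t[0]))
                    if not (0 <= x < w and 0 <= y < h):
                        break
                    if region_mask[y, x]:          # hit the external contour: b
                        if step > best_dist:
                            best_pt, best_dist = (x, y), step
                        break
    return best_pt, best_dist
```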
S150, acquiring basic parameter information of the target region image without the noise points according to the central concave point.
Basic parameter information of the denoised target area image is acquired according to the central concave point. Specifically, a central concave point region can be determined from the central concave point and a preset region range threshold; for example, with a region range threshold of 50, the part of the denoised valid pixel connected domain within 50 pixels of the central concave point is taken as the central concave point region, and its average, maximum and minimum thickness values are obtained, where the thickness at each pixel position of the central concave point region is the number of pixels that position contains in the vertical direction. The difference between the maximum and minimum thickness values gives the central concave depth value. The average thickness value and the central concave depth value can then serve as the basic parameter information of the central concave point region, i.e. the basic parameter information of the denoised target area image.
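A sketch of this basic-parameter computation with the region range threshold of 50 from the example; treating the region as a window of pixel columns around the central concave point is an assumed simplification.

```python
import numpy as np

def basic_parameters(region_mask, fovea_x, half_width=50):
    """Average thickness and central concave depth of the central concave
    point region: the columns within 50 pixels of the foveal x-coordinate.
    A column's thickness is its number of valid pixels in the vertical
    direction."""
    lo = max(fovea_x - half_width, 0)
    hi = min(fovea_x + half_width + 1, region_mask.shape[1])
    thickness = region_mask[:, lo:hi].sum(axis=0)
    r_tavg = float(thickness.mean())                  # average thickness value
    r_dph = float(thickness.max() - thickness.min())  # central concave depth
    return r_tavg, r_dph
```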
S160, inputting the basic parameter information into a preset prediction model for prediction to obtain a prediction result corresponding to the first image.
The basic parameter information is input into a preset prediction model for prediction to obtain the prediction result corresponding to the first image. The prediction model comprises a prediction calculation formula and a prediction rule; it is the concrete model that analyzes the basic parameter information to obtain the prediction result. Inputting the basic parameter information into the prediction calculation formula yields a prediction score value, a prediction grade is determined from the prediction score value and the prediction rule, and that prediction grade can serve as the prediction result corresponding to the first image.
In one embodiment, as shown in fig. 7, step S160 includes sub-steps S161 and S162.
S161, inputting the basic parameter information into the prediction calculation formula to obtain a corresponding prediction score value.
The basic parameter information is input into the prediction calculation formula to obtain the corresponding prediction score value. The prediction calculation formula contains several parameter values and can be trained on a large amount of training data, optimizing those parameter values to obtain the trained prediction calculation formula. Specifically, the prediction calculation formula can be represented by formula (1):
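The body of formula (1) is not reproduced in this text. From the variable and parameter definitions that follow, a plausible reconstruction is the parameterized combination below; the linear form is an assumption.

```latex
% Assumed linear form of formula (1); the surrounding text confirms only
% the variables and the parameters beta_0 .. beta_3, not the formula body.
\mathrm{Risk}(P) = \beta_0 + \beta_1 \cdot \mathrm{R\_Tavg}
                 + \beta_2 \cdot \mathrm{R\_dph} + \beta_3 \cdot \mathrm{v\_age} \tag{1}
```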
where Risk(P) is the calculated prediction score value, R_Tavg is the average thickness value in the basic parameter information, R_dph is the central concave depth value in the basic parameter information, v_age is the examinee's age value input into the prediction calculation formula in advance, and β0, β1, β2 and β3 are the parameter values of the prediction calculation formula. In one specific embodiment, training on a large amount of data yields β0 = 20, β1 = -0.0275, β2 = -0.0124 and β3 = 0.081.
S162, acquiring a prediction grade corresponding to the prediction score value according to the prediction rule as the prediction result corresponding to the first image.
The prediction rule comprises several score intervals and the grade corresponding to each interval. The score interval containing the prediction score value is looked up in the prediction rule, the grade of that interval is determined to be the prediction grade of the prediction score value, and that prediction grade can serve as the prediction result of the first image. For example, the correspondence between intervals and grades in the prediction rule may be: [0, 0.12] - primary, (0.12, 0.3] - secondary, (0.3, 0.5] - tertiary, (0.5, +∞) - quaternary.
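A small sketch of this interval-to-grade lookup:

```python
def prediction_grade(score: float) -> str:
    """Map a prediction score value to its grade per the intervals above
    (scores are assumed non-negative)."""
    if score <= 0.12:
        return "primary"
    if score <= 0.3:
        return "secondary"
    if score <= 0.5:
        return "tertiary"
    return "quaternary"
```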
This technical method can be applied in scenarios such as smart communities and smart medical care that involve intelligent prediction based on image detection, thereby promoting the construction of smart cities.
In the intelligent prediction method based on image detection provided by the embodiment of the invention, pixel segmentation is performed on a first image input by a user to obtain an image pixel segmentation result and the corresponding target area image; the target area image is denoised to obtain a target area image with noise points removed, and the corresponding central concave point is calculated; basic parameter information of the denoised target area image is then obtained according to the central concave point, and the basic parameter information is input into a prediction model to obtain the corresponding prediction result. In this way, the first image can be processed intelligently to obtain the basic parameter information and a corresponding prediction result, which improves the efficiency of image analysis and greatly improves the accuracy of the resulting prediction.
The embodiment of the invention also provides an intelligent prediction device based on image detection, which can be configured in a user terminal or a management server and is used for executing any embodiment of the intelligent prediction method based on image detection. Specifically, referring to fig. 9, fig. 9 is a schematic block diagram of an intelligent prediction apparatus based on image detection according to an embodiment of the present invention.
As shown in fig. 9, the intelligent prediction apparatus 100 based on image detection includes a pixel segmentation processing unit 110, a target area image acquisition unit 120, a denoising processing unit 130, a central concave point acquisition unit 140, a basic parameter information acquisition unit 150, and a prediction result acquisition unit 160.
The pixel segmentation processing unit 110 is configured to, if a first image input by a user is received, perform pixel segmentation on the first image according to a preset image segmentation model to obtain an image pixel segmentation result.
In one embodiment, the pixel division processing unit 110 includes sub-units: the pixel convolution characteristic acquisition unit is used for carrying out convolution processing on the first image according to the convolution processing rule so as to obtain a pixel convolution characteristic corresponding to each pixel in the first image; and the pixel convolution feature classification unit is used for classifying each pixel convolution feature according to the pixel classification neural network and taking the classification information of each pixel convolution feature obtained by classification as the image pixel segmentation result.
In an embodiment, the pixel division processing unit 110 further includes sub-units: and the size conversion unit is used for carrying out size conversion on the first image according to the size conversion rule to obtain the first image subjected to size conversion.
A target area image obtaining unit 120, configured to obtain a target area image corresponding to the image pixel segmentation result in the first image according to the image pixel segmentation result.
In a specific embodiment, the target area image acquisition unit 120 includes sub-units: a valid pixel area acquisition unit, configured to segment the first image according to the classification information of each pixel in the image pixel segmentation result to obtain a valid pixel area; and a pixel value filling unit, configured to fill the pixel values of the valid pixel area to obtain a target area image.
And the denoising processing unit 130 is configured to perform denoising processing on the target region image to obtain a target region image with noise removed.
In one embodiment, the denoising processing unit 130 includes sub-units: an invalid pixel connected domain acquisition unit, configured to acquire the invalid pixel connected domains in the target area image; a filtered image acquisition unit, configured to filter invalid pixel connected domains with areas smaller than a preset area threshold from the target area image to obtain a filtered image; and a pixel closing operation processing unit, configured to perform a pixel closing operation on the valid pixel connected domain in the filtered image to obtain a target area image without noise points.
And a central concave point obtaining unit 140, configured to obtain a central concave point corresponding to the target region image without the noise point according to a preset central concave point calculation model.
In one embodiment, the central concave point acquisition unit 140 includes sub-units: a convex hull contour line acquisition unit, configured to calculate the convex hull contour line of the denoised valid pixel connected domain in the denoised target area image according to the convex hull calculation rule; an external contour line determination unit, configured to determine the corresponding external contour line according to the edge pixel coordinate values of the denoised valid pixel connected domain; a normal distance value acquisition unit, configured to calculate, according to the distance value acquisition rule, the normal distance value between each point of the convex hull contour line and its intersection with the external contour line in the normal direction; and a central concave point determination unit, configured to determine the intersection point on the external contour line whose normal distance value is the largest as the central concave point.
And a basic parameter information obtaining unit 150, configured to obtain, according to the central concave point, basic parameter information of the target region image from which the noise point is removed.
A prediction result obtaining unit 160, configured to input the basic parameter information into a preset prediction model for prediction, so as to obtain a prediction result corresponding to the first image.
In one embodiment, the prediction result obtaining unit 160 includes sub-units: a prediction score value obtaining unit, configured to input the basic parameter information into the prediction calculation formula to obtain a corresponding prediction score value; the prediction level acquisition unit is configured to acquire a prediction level corresponding to the prediction score value as a prediction result corresponding to the first image according to the prediction rule.
The intelligent prediction apparatus based on image detection provided by the embodiment of the invention applies the intelligent prediction method described above: pixel segmentation is performed on a first image input by a user to obtain an image pixel segmentation result and the corresponding target area image; the target area image is denoised to obtain a target area image with noise points removed, and the corresponding central concave point is calculated; basic parameter information of the denoised target area image is obtained according to the central concave point, and the basic parameter information is input into a prediction model to obtain the corresponding prediction result. In this way, the first image can be processed intelligently to obtain the basic parameter information and a corresponding prediction result, which improves the efficiency of image analysis and greatly improves the accuracy of the resulting prediction.
The above-mentioned intelligent prediction apparatus based on image detection may be implemented in the form of a computer program, which may be run on a computer device as shown in fig. 10.
Referring to fig. 10, fig. 10 is a schematic block diagram of a computer device according to an embodiment of the present invention. The computer device may be a user terminal or a management server for performing an intelligent prediction method based on image detection for intelligent prediction based on image detection.
Referring to fig. 10, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a storage medium 503 and an internal memory 504.
The storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032, when executed, may cause the processor 502 to perform an intelligent prediction method based on image detection, wherein the storage medium 503 may be a volatile storage medium or a non-volatile storage medium.
The processor 502 is used to provide computing and control capabilities that support the operation of the overall computer device 500.
The internal memory 504 provides an environment for the execution of the computer program 5032 in the storage medium 503, and when the computer program 5032 is executed by the processor 502, the processor 502 can be enabled to execute the intelligent prediction method based on image detection.
The network interface 505 is used for network communication, such as the transmission of data information. Those skilled in the art will appreciate that the configuration shown in fig. 10 is a block diagram of only a portion of the configuration associated with aspects of the present invention and does not limit the computer device 500 to which aspects of the present invention may be applied; a particular computer device 500 may include more or fewer components than shown, combine certain components, or arrange the components differently.
The processor 502 is configured to run a computer program 5032 stored in the memory to implement the corresponding functions of the intelligent prediction method based on image detection.
Those skilled in the art will appreciate that the embodiment of a computer device illustrated in fig. 10 does not constitute a limitation on the specific construction of the computer device, and that in other embodiments a computer device may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. For example, in some embodiments, the computer device may only include a memory and a processor, and in such embodiments, the structures and functions of the memory and the processor are consistent with those of the embodiment shown in fig. 10, and are not described herein again.
It should be understood that, in the embodiment of the present invention, the Processor 502 may be a Central Processing Unit (CPU), and the Processor 502 may also be other general-purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, and the like. Wherein a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
In another embodiment of the invention, a computer-readable storage medium is provided. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium. The computer readable storage medium stores a computer program, wherein the computer program, when executed by a processor, implements the steps included in the above-described intelligent prediction method based on image detection.
For convenience and brevity of description, the specific working processes of the apparatuses, devices and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of both; to clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the implementation. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as departing from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative. The division into units is only a logical division; other divisions are possible in an actual implementation, units with the same function may be combined into one unit, a plurality of units or components may be combined or integrated into another system, and some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices or units, and may be an electrical, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on this understanding, the part of the technical solution of the present invention that in essence contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a computer-readable storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned computer-readable storage media include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. An intelligent prediction method based on image detection, characterized in that the method comprises:
if a first image input by a user is received, performing pixel segmentation on the first image according to a preset image segmentation model to obtain an image pixel segmentation result;
acquiring a target area image corresponding to the image pixel segmentation result in the first image according to the image pixel segmentation result;
denoising the target area image to obtain a target area image with noise points removed;
acquiring, according to a preset central concave point calculation model, a central concave point corresponding to the target area image with noise points removed;
acquiring, according to the central concave point, basic parameter information of the target area image with noise points removed;
and inputting the basic parameter information into a preset prediction model for prediction to obtain a prediction result corresponding to the first image.
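For orientation, claim 1's six steps can be read as the following minimal Python sketch; every function name is a hypothetical placeholder for a step the claim names, since the patent prescribes the steps but not an implementation:

```python
# Minimal sketch of the claim-1 pipeline; all helpers are hypothetical
# placeholders for the steps the claim names, not APIs from the patent.
def predict_from_image(first_image):
    seg_result = segment_pixels(first_image)               # pixel segmentation
    target = extract_target_area(first_image, seg_result)  # target area image
    clean = denoise(target)                                # remove noise points
    fovea = locate_central_concave_point(clean)            # central concave point
    params = measure_basic_parameters(clean, fovea)        # basic parameter info
    return predict(params)                                 # prediction result
```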
2. The intelligent prediction method based on image detection according to claim 1, wherein the image segmentation model includes a convolution processing rule and a pixel classification neural network, and the performing pixel segmentation on the first image according to a preset image segmentation model to obtain an image pixel segmentation result includes:
performing convolution processing on the first image according to the convolution processing rule to obtain a pixel convolution characteristic corresponding to each pixel in the first image;
and classifying each pixel convolution characteristic according to the pixel classification neural network, and taking classification information of each pixel convolution characteristic obtained by classification as the image pixel segmentation result.
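A minimal sketch of the claim-2 structure, assuming a small fully convolutional network in PyTorch: convolution layers produce a feature vector per pixel, and a 1x1 classification head assigns each pixel a class. The layer widths and the two-class output are assumptions; the patent only names a convolution processing rule and a pixel classification neural network.

```python
import torch
import torch.nn as nn

class PixelSegmenter(nn.Module):
    """Per-pixel segmentation: conv features plus a 1x1 pixel classification head."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(            # convolution processing rule
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x)                  # one feature vector per pixel
        logits = self.classifier(feats)           # pixel classification scores
        return logits.argmax(dim=1)               # classification info per pixel
```

Called on an (N, 3, H, W) float tensor, `forward` returns an (N, H, W) map of per-pixel class indices — the image pixel segmentation result in the sense of claim 2.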
3. The intelligent prediction method based on image detection according to claim 2, wherein the image segmentation model further includes a size transformation rule, and before performing convolution processing on the first image according to the convolution processing rule to obtain a pixel convolution feature corresponding to each pixel in the first image, the method further includes:
and carrying out size transformation on the first image according to the size transformation rule to obtain a first image subjected to size transformation.
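Claim 3's size transformation, sketched as a plain resize with OpenCV; the 512x512 target size and bilinear interpolation are assumptions, as the patent only names a "size transformation rule":

```python
import cv2

def apply_size_transformation(first_image, size=(512, 512)):
    # Resize the input to the fixed size the segmentation model expects.
    return cv2.resize(first_image, size, interpolation=cv2.INTER_LINEAR)
```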
4. The intelligent prediction method based on image detection according to claim 1, wherein the obtaining a target area image corresponding to the image pixel segmentation result in the first image according to the image pixel segmentation result comprises:
performing segmentation processing on the first image according to the classification information of each pixel in the image pixel segmentation result to obtain an effective pixel area;
and filling the pixel value of the effective pixel area to obtain a target area image.
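One plausible reading of claim 4, sketched with NumPy: pixels whose classification matches the target class form the effective pixel area, which is then filled with a constant value to yield the target area image. The target class id and fill value are assumptions:

```python
import numpy as np

def extract_target_area(first_image, class_map, target_class=1, fill_value=255):
    # class_map holds the per-pixel classification from the segmentation model.
    mask = (class_map == target_class)        # effective pixel area
    target = np.zeros_like(first_image)
    target[mask] = fill_value                 # fill the effective pixels
    return target
```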
5. The intelligent prediction method based on image detection according to claim 1, wherein the denoising of the target area image to obtain a target area image with noise points removed comprises:
acquiring the invalid pixel connected domains in the target area image;
filtering out, according to a preset area threshold, each invalid pixel connected domain in the target area image whose area is smaller than the area threshold, to obtain a filtered image;
and performing a pixel closing operation on the effective pixel connected domains in the filtered image to obtain the target area image with noise points removed.
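A sketch of claim 5 for a binary target-area image, using OpenCV connected-component statistics for the area filter and a morphological closing for the pixel closing operation; the 8-connectivity, area threshold, and kernel size are assumptions:

```python
import cv2
import numpy as np

def denoise(binary_img, area_threshold=64, kernel_size=5):
    # binary_img: single-channel uint8 image (0 / 255).
    # Label connected domains and keep only those of at least area_threshold pixels.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary_img, connectivity=8)
    filtered = np.zeros_like(binary_img)
    for i in range(1, n):                                # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= area_threshold:
            filtered[labels == i] = 255
    # Close small gaps in the remaining effective pixel connected domains.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    return cv2.morphologyEx(filtered, cv2.MORPH_CLOSE, kernel)
```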
6. The intelligent prediction method based on image detection according to claim 1, wherein the central concave point calculation model includes a convex hull calculation rule and a distance value acquisition rule, and the acquiring, according to the preset central concave point calculation model, of the central concave point corresponding to the target area image with noise points removed comprises:
calculating, according to the convex hull calculation rule, a convex hull contour line of the de-noised pixel connected domain in the target area image with noise points removed;
determining a corresponding external contour line according to the edge pixel coordinate values of the de-noised pixel connected domain;
calculating, according to the distance value acquisition rule, for each point on the convex hull contour line, a normal distance value from that point to its intersection with the external contour line along the normal direction;
and determining the intersection point on the external contour line corresponding to the maximum normal distance value as the central concave point.
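Claim 6 measures, from each point of the convex hull contour, the distance along the normal to its intersection with the external contour, and takes the deepest intersection as the central concave point. OpenCV's convexity defects (the farthest contour point from each hull edge) are a commonly used stand-in for that normal-distance construction; the sketch below uses that substitution:

```python
import cv2

def locate_central_concave_point(binary_img):
    # External contour of the de-noised connected domain (uint8 binary input).
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)
    # Convex hull as point indices, then hull-to-contour depth per defect.
    hull = cv2.convexHull(contour, returnPoints=False)
    defects = cv2.convexityDefects(contour, hull)
    if defects is None:
        return None                                # region is already convex
    # Each defect row: (start, end, farthest point index, depth * 256).
    deepest = defects[defects[:, 0, 3].argmax(), 0, 2]
    return tuple(contour[deepest][0])              # (x, y) of the concave point
```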
7. The intelligent prediction method based on image detection according to claim 1, wherein the prediction model includes a prediction calculation formula and a prediction rule, and the step of inputting the basic parameter information into a preset prediction model for prediction to obtain a prediction result corresponding to the first image includes:
inputting the basic parameter information into the prediction calculation formula to obtain a corresponding prediction score value;
and acquiring a prediction grade corresponding to the prediction score value according to the prediction rule as a prediction result corresponding to the first image.
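A sketch of claim 7, assuming a linear prediction calculation formula over the basic parameters and a threshold-based prediction rule; the weights, bias, and grade boundaries are illustrative assumptions only:

```python
def predict(params, weights=(0.4, 0.3, 0.3), bias=0.0):
    # Prediction calculation formula: a weighted sum of the basic parameters.
    score = bias + sum(w * p for w, p in zip(weights, params))
    # Prediction rule: map the prediction score value to a prediction grade.
    if score >= 0.8:
        return "grade 3"
    if score >= 0.5:
        return "grade 2"
    return "grade 1"
```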
8. An intelligent prediction apparatus based on image detection, the apparatus comprising:
the image segmentation processing unit is used for carrying out pixel segmentation on a first image according to a preset image segmentation model to obtain an image pixel segmentation result if the first image input by a user is received;
a target area image obtaining unit, configured to obtain a target area image corresponding to the image pixel segmentation result in the first image according to the image pixel segmentation result;
the de-noising processing unit is used for de-noising the target area image to obtain a target area image with noise points removed;
the central concave point obtaining unit is used for obtaining, according to a preset central concave point calculation model, a central concave point corresponding to the target area image with noise points removed;
a basic parameter information obtaining unit, configured to obtain, according to the central concave point, basic parameter information of the target area image with noise points removed;
and the prediction result acquisition unit is used for inputting the basic parameter information into a preset prediction model for prediction to obtain a prediction result corresponding to the first image.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the intelligent prediction method based on image detection according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the intelligent prediction method based on image detection according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110600178.XA CN113192067B (en) | 2021-05-31 | 2021-05-31 | Intelligent prediction method, device, equipment and medium based on image detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113192067A (en) | 2021-07-30 |
CN113192067B (en) | 2024-03-26 |
Family
ID=76985847
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110600178.XA Active CN113192067B (en) | 2021-05-31 | 2021-05-31 | Intelligent prediction method, device, equipment and medium based on image detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113192067B (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110782421A (en) * | 2019-09-19 | 2020-02-11 | 平安科技(深圳)有限公司 | Image processing method, image processing device, computer equipment and storage medium |
CN110705576A (en) * | 2019-09-29 | 2020-01-17 | 慧影医疗科技(北京)有限公司 | Region contour determining method and device and image display equipment |
CN111429451A (en) * | 2020-04-15 | 2020-07-17 | 深圳市嘉骏实业有限公司 | Medical ultrasonic image segmentation method and device |
CN112740269A (en) * | 2020-05-13 | 2021-04-30 | 华为技术有限公司 | Target detection method and device |
SG10202007348TA (en) * | 2020-08-01 | 2021-03-30 | Sensetime Int Pte Ltd | Target object identification method and apparatus |
CN112365434A (en) * | 2020-11-10 | 2021-02-12 | 大连理工大学 | Unmanned aerial vehicle narrow passage detection method based on double-mask image segmentation |
CN112529004A (en) * | 2020-12-08 | 2021-03-19 | 平安科技(深圳)有限公司 | Intelligent image recognition method and device, computer equipment and storage medium |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113744200A (en) * | 2021-08-11 | 2021-12-03 | 深圳市鑫信腾科技股份有限公司 | Camera contamination detection method, device and equipment |
CN113744200B (en) * | 2021-08-11 | 2024-04-05 | 深圳市鑫信腾科技股份有限公司 | Camera dirt detection method, device and equipment |
CN116433761A (en) * | 2023-03-09 | 2023-07-14 | 北京瓦特曼智能科技有限公司 | Stack type workpiece coordinate positioning method, apparatus and medium |
CN116433761B (en) * | 2023-03-09 | 2024-03-12 | 北京瓦特曼智能科技有限公司 | Stack type workpiece coordinate positioning method, apparatus and medium |
Also Published As
Publication number | Publication date |
---|---|
CN113192067B (en) | 2024-03-26 |
Similar Documents
Publication | Title | |
---|---|---|
CN105957063B (en) | CT image liver segmentation method and system based on multiple dimensioned weighting similarity measure | |
Flores et al. | Improving classification performance of breast lesions on ultrasonography | |
CN114758137B (en) | Ultrasonic image segmentation method and device and computer readable storage medium | |
CN109767448B (en) | Segmentation model training method and device | |
CN111127387B (en) | Quality evaluation method for reference-free image | |
CN113192067B (en) | Intelligent prediction method, device, equipment and medium based on image detection | |
Akkasaligar et al. | Classification of medical ultrasound images of kidney | |
Pan et al. | Prostate segmentation from 3d mri using a two-stage model and variable-input based uncertainty measure | |
CN113223015A (en) | Vascular wall image segmentation method, device, computer equipment and storage medium | |
Koprowski et al. | Assessment of significance of features acquired from thyroid ultrasonograms in Hashimoto's disease | |
Sindhwani et al. | Semi‐automatic outlining of levator hiatus | |
CN115018863A (en) | Image segmentation method and device based on deep learning | |
CN115601299A (en) | Intelligent liver cirrhosis state evaluation system and method based on images | |
Kanca et al. | Learning hand-crafted features for k-NN based skin disease classification | |
CN115100494A (en) | Identification method, device and equipment of focus image and readable storage medium | |
CN117274278B (en) | Retina image focus part segmentation method and system based on simulated receptive field | |
Queiroz et al. | Endoscopy image restoration: A study of the kernel estimation from specular highlights | |
CN116524315A (en) | Mask R-CNN-based lung cancer pathological tissue section identification and segmentation method | |
Delmoral et al. | Segmentation of pathological liver tissue with dilated fully convolutional networks: A preliminary study | |
Tamilmani et al. | Early detection of brain cancer using association allotment hierarchical clustering | |
WO2008094446A2 (en) | Circular intensity distribution analysis for the detection of convex, concave and flat surfaces | |
CN116563647A (en) | Age-related maculopathy image classification method and device | |
CN115881304A (en) | Risk assessment method, device, equipment and medium based on intelligent detection | |
CN115423779A (en) | Method for predicting bone age of children | |
CN112862787B (en) | CTA image data processing method, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||