CN115423817A - Image segmentation method, device, electronic device and medium - Google Patents

Image segmentation method, device, electronic device and medium

Info

Publication number
CN115423817A
CN115423817A
Authority
CN
China
Prior art keywords
image
segmentation
model
segmentation model
sample library
Prior art date
Legal status
Pending
Application number
CN202210954796.9A
Other languages
Chinese (zh)
Inventor
袁元 (Yuan Yuan)
Current Assignee
Shanghai Ugion Technology Co ltd
Original Assignee
Shanghai Ugion Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Ugion Technology Co ltd
Priority to CN202210954796.9A
Publication of CN115423817A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20112 - Image segmentation details
    • G06T 2207/20152 - Watershed segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image segmentation method, an image segmentation apparatus, an electronic device, and a medium, belonging to the field of image processing. The method includes: acquiring a first image and a first segmentation model, where the first image is an image to be segmented, the first segmentation model is a segmentation model previously used to segment a second image, and the second image is an image similar to the first image; and inputting the first image into the first segmentation model for segmentation to obtain a first segmented image.

Description

Image segmentation method, device, electronic device and medium
Technical Field
The present application relates to the field of image processing, and in particular, to an image segmentation method, apparatus, electronic device, and medium.
Background
With the advent of photosensitive sensors, image acquisition and processing have gradually permeated daily production and life. Image segmentation is one branch of image processing; for example, organ regions may need to be segmented from biomedical images for analysis by a doctor, or information regions may need to be segmented from document images for document information extraction.
At present, the related art of computer vision offers many segmentation models for segmenting images, but for the same image, different segmentation models produce different segmentation results.
Disclosure of Invention
An embodiment of the present application provides an image segmentation method, an image segmentation apparatus, an electronic device, a medium, a chip, and a computer program product, which address the problem that a more suitable segmentation model cannot be adaptively matched to the image to be segmented.
In a first aspect, an embodiment of the present application provides an image segmentation method, including: acquiring a first image and a first segmentation model, where the first image is an image to be segmented, the first segmentation model is a segmentation model previously used to segment a second image, and the second image is an image similar to the first image; and inputting the first image into the first segmentation model for segmentation to obtain a first segmented image.
In a second aspect, an embodiment of the present application provides an image segmentation apparatus, including: a first acquisition module, configured to acquire a first image and a first segmentation model, where the first image is an image to be segmented, the first segmentation model is a segmentation model previously used to segment a second image, and the second image is an image similar to the first image; and a first input module, configured to input the first image into the first segmentation model for segmentation to obtain a first segmented image.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, a first image and a first segmentation model are obtained, where the first image is an image to be segmented, the first segmentation model is a segmentation model previously used to segment a second image, and the second image is an image similar to the first image; the first image is then input into the first segmentation model for segmentation to obtain a first segmented image. Similar images follow similar rules when segmented, so segmenting the first image with the first segmentation model previously used for the second image means that the first image to be segmented is segmented by an adaptively matched, more suitable segmentation model, namely the first segmentation model.
Drawings
Fig. 1 schematically shows a flowchart of an image segmentation method provided in an embodiment of the present application;
Fig. 2 schematically shows another flowchart of an image segmentation method provided by an embodiment of the present application;
Fig. 3 is a flowchart that schematically illustrates sub-steps of one step in an image segmentation method provided by an embodiment of the present application;
Fig. 4 is a flowchart that schematically illustrates sub-steps of another step in an image segmentation method provided by an embodiment of the present application;
Fig. 5 is a further flowchart illustrating an image segmentation method provided by an embodiment of the present application;
Fig. 6 is a further flowchart of an image segmentation method provided by an embodiment of the present application;
Fig. 7 schematically illustrates an image segmentation apparatus provided in an embodiment of the present application;
Fig. 8 schematically illustrates an image segmentation system provided by an embodiment of the present application;
Fig. 9 schematically illustrates an electronic device according to an embodiment of the present application;
Fig. 10 schematically shows a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived from the embodiments in the present application by a person skilled in the art, are within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be appreciated that data so termed may be interchanged under appropriate circumstances, so that embodiments of the application may be practiced in orders other than those illustrated or described herein. The terms "first", "second", etc. generally denote a category of object and do not limit the number of objects; for example, the first image may be one image or multiple images. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the preceding and succeeding objects are in an "or" relationship.
The term "segmentation model" in the description and claims of the present application may be a segmentation algorithm, a segmentation model constructed based on a segmentation algorithm, or a program or instruction constructed based on a segmentation algorithm.
In production and daily life, virtually all industries involve image processing, and in particular image segmentation. For example, on a production line, product inspection is performed using computer vision technology, and the product area needs to be segmented from the acquired image; in image document processing, it is often necessary to segment and locate text regions from a document image for OCR (optical character recognition); in astronomy or meteorology, a corresponding region of space or an administrative region needs to be segmented from satellite remote-sensing images for analysis by astronomers or meteorologists; and in medical imaging departments, organ regions need to be segmented from biomedical images for analysis by physicians.
At present, the related art of computer vision offers many segmentation models for segmenting images, but for the same image, different segmentation models produce different segmentation results. When an image segmentation model segments an image, the segmented image loses more or less of the edge information of each region, even while emphasizing those regions. For a given image, the less edge information a segmentation model loses, the better that model fits the image. Therefore, in some scenarios, in order to obtain a higher-quality segmented image (i.e., one with less loss of edge information), multiple segmentation models are usually used to segment one image, and the segmented images produced by the multiple models are then compared to select the one with the best segmentation effect, i.e., the least loss of edge information. Obviously, a conclusion can be reached only after multiple segmentation models have completed their segmentations and the results have been compared, so the processing efficiency is low; moreover, repeatedly segmenting the image with multiple models may damage it through accumulated errors. If a suitable segmentation model could be rapidly and adaptively matched to the image to be segmented, the efficiency of image segmentation could be effectively improved.
In view of this, an object of the embodiments of the present application is to provide an image segmentation method that addresses the problem that a more suitable segmentation model cannot be adaptively matched to the image to be segmented.
The image segmentation method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 1, a flowchart of an image segmentation method provided by an embodiment of the present application is exemplarily shown. Referring to fig. 1, the image segmentation method includes the following steps S101 to S102:
S101, acquiring a first image and a first segmentation model. The first image is an image to be segmented, the first segmentation model is a segmentation model previously used to segment a second image, and the second image is an image similar to the first image.
In the embodiment of the present application, the type and content of the first image and the second image are not limited. Illustratively, the first image and the second image may each be a video frame image in a VR video, in particular an image in a VR biomedical video; the first image and the second image may also be video frame images in a normal video; the first image and the second image may also be plain pictures. Of course, the first image and the second image may also be images generated based on other image generation techniques or captured by photographic techniques.
In the embodiment of the present application, the generation time or capturing time of the first image is not particularly limited. For example, the first image may be an image captured or generated on the spot; or it may be an image captured or generated in advance and stored in a database, from which it is read when the image is to be segmented. It should be noted that the second image has already undergone segmentation processing, i.e., the second image is segmented before the first image. Thus, the second image is captured or generated earlier than the first image.
In the embodiment of the present application, the first segmentation model is not particularly limited. Illustratively, the first segmentation model includes at least one of: an edge detection model, a threshold segmentation model, a region growing model, a watershed segmentation model, a connected domain model, and a neural network model for image segmentation; the threshold segmentation model includes a fixed threshold segmentation model and an adaptive threshold segmentation model.
The second image was previously segmented using the first segmentation model. After the first image is obtained, an image similar to the first image is determined; if the determined similar image is the second image, the first segmentation model previously used to segment the second image is obtained.
S102, inputting the first image into the first segmentation model for segmentation to obtain a first segmented image.
That is, the first image is segmented by the first segmentation model to obtain the first segmented image.
Similar images follow substantially similar segmentation rules when segmented. The embodiment of the application exploits this similarity: the first image is segmented with the first segmentation model that was previously used to segment a second image similar to it, so that the first image to be segmented is segmented by an adaptively matched, more suitable segmentation model. Compared with the related-art scheme in which a conclusion can be reached only after the image has been segmented by multiple segmentation models and the results compared, this effectively improves the processing efficiency of image segmentation, and to a certain extent avoids the image damage that repeated segmentation by multiple models may introduce.
In order to obtain the first segmentation model quickly, a sample library is also constructed in advance in the embodiment of the present application. It stores a large number of sample images together with the segmentation model mapped to each sample image; for example, the second image and the first segmentation model are stored in the sample library in a mutual mapping relationship, so that the first segmentation model can be obtained directly from the sample library. The process of constructing the sample library is described below by way of example, taking the second image as the example.
As shown in fig. 2, another flowchart of an image segmentation method provided in the embodiment of the present application is exemplarily shown. Referring to fig. 2, the image segmentation method includes the following steps S201 to S204:
S201, determining a segmentation model from a plurality of preset segmentation models as the first segmentation model.
A plurality of segmentation models are configured in advance. The plurality of segmentation models includes at least one of: an edge detection model, a threshold segmentation model, a region growing model, a watershed segmentation model, a connected domain model, and a neural network model for image segmentation; the threshold segmentation model includes a fixed threshold segmentation model and an adaptive threshold segmentation model. These segmentation models are relatively mature, so only a brief description is given below; for more detail, please refer to the related art, which is not repeated here.
When the edge detection model performs image segmentation, the edges of the regions in the image are detected from the gray-value distribution of the pixel points, in particular from discontinuous, abrupt changes in gray value, and the image is segmented along the detected edges. These edges describe the approximate outline of each region in the image, i.e., the boundaries between different regions, and essentially reflect the discontinuous, abrupt changes of the gray values of the pixel points in the image.
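As an illustrative sketch only (not part of the patent text), a minimal gradient-based edge detector in this spirit can be written as follows; the 4-neighbour forward differences and the threshold value are assumptions for the example:

```python
def edge_map(img, thresh):
    """Mark pixels where the gray value changes abruptly.

    img is a 2-D list of gray values; a pixel is marked as an edge
    when the sum of its forward horizontal and vertical gradient
    magnitudes exceeds `thresh` (the threshold choice is illustrative).
    """
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            gx = img[r][c + 1] - img[r][c] if c + 1 < w else 0
            gy = img[r + 1][c] - img[r][c] if r + 1 < h else 0
            if abs(gx) + abs(gy) > thresh:
                edges[r][c] = 1  # abrupt gray-value change: region boundary
    return edges
```

Running it on a column of constant gray values next to a much brighter column marks the boundary column, i.e., the outline separating the two regions.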
The threshold segmentation model essentially performs binarization when segmenting an image. A threshold is determined, and the pixel value of each pixel point in the image is compared with it: when the pixel value is greater than the threshold, it is set to 255, and otherwise it is set to 0, so that the image becomes black and white. The threshold can be chosen according to the region to be segmented from the image, so that the target region is not binarized into the background. In some alternative embodiments, the threshold segmentation model is a fixed threshold segmentation model; in other alternative embodiments, it is a segmentation model that determines the threshold adaptively from the image to be segmented, for example a threshold segmentation model built on Otsu's method.
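The adaptive variant can be illustrated with Otsu's method, which scans all candidate thresholds and keeps the one maximizing the between-class variance of the resulting foreground and background. A minimal sketch (the function names are illustrative; gray values are assumed to lie in 0–255):

```python
def otsu_threshold(pixels):
    """Return the gray level maximizing between-class variance (Otsu)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    w_b = sum_b = 0          # background weight and gray-value sum
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b    # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                     # background mean
        m_f = (sum_all - sum_b) / w_f         # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(pixels, t):
    """Pixels above the threshold become 255 (white), the rest 0 (black)."""
    return [255 if p > t else 0 for p in pixels]
```

On a bimodal set of gray values the returned threshold falls between the two modes, so binarization separates target and background as described above.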
When the region growing segmentation model performs image segmentation, a similarity criterion is first determined. Starting from selected seed pixels, adjacent pixels in all directions are examined against the similarity criterion; pixels satisfying it are merged into the region, and each newly merged pixel serves in turn as a new seed from which growing continues. The process repeats until no adjacent pixel satisfies the criterion, yielding an image of the region in which the seed pixels lie. Multiple seed pixel points may be determined in one image. For example, suppose the similarity criterion is that the pixel-value difference between two adjacent pixels is smaller than 5, and a seed pixel has pixel value 125. An adjacent pixel with value 126 differs by less than 5 and is merged with the seed; another adjacent pixel with value 135 differs by more than 5 and is not merged. The merged pixel with value 126 then serves as a new seed and growing continues outward until no pixel satisfies the criterion, yielding the image of the region containing the seed pixel.
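The worked example above (tolerance 5, seed value 125, neighbours 126 and 135) can be reproduced with a short breadth-first region-growing sketch; as in the text, the similarity criterion here compares each candidate with the pixel it grows from:

```python
from collections import deque

def region_grow(img, seed, tol=5):
    """Grow a region from `seed`, merging 4-neighbours whose gray-value
    difference from the pixel they grow from is smaller than `tol`."""
    h, w = len(img), len(img[0])
    region = {seed}
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(img[nr][nc] - img[r][c]) < tol):
                region.add((nr, nc))   # merged pixel becomes a new seed
                q.append((nr, nc))
    return region
```

On the single row [125, 126, 135] with seed at the first pixel, the pixel 126 is merged (difference 1 < 5) while 135 is rejected (difference 9), matching the example in the text.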
When the watershed segmentation model segments an image, the image is regarded as a topographic surface: the gray value of each pixel point is treated as an altitude, regions formed by sets of pixels with smaller gray values are basins, and pixels with larger gray values are peaks of greater altitude. Mapping the gray values into this three-dimensional space, watershed segmentation is simulated by the immersion method proposed by Vincent and Soille: the lowest basins are flooded first, and as the water level rises to the height of the ridges, the water surfaces of neighboring basins begin to converge. Dams are constructed at the edges where basins would merge, and these dams correspond to the segmentation lines that divide the image into different parts.
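A heavily simplified immersion-style sketch follows: pixels are "flooded" in increasing gray order, each joining the basin of an already-labelled neighbour, and a pixel touching two different basins becomes a dam (segmentation line). This omits the hierarchical queues and plateau handling of the actual Vincent–Soille algorithm, so it illustrates the idea only:

```python
def watershed(img):
    """Toy immersion watershed: label basins, mark dams with -1."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]   # 0 = not yet flooded
    next_label = 1
    # flood pixels from the lowest "altitude" (gray value) upward
    for _, r, c in sorted((img[r][c], r, c)
                          for r in range(h) for c in range(w)):
        basins = {labels[r + dr][c + dc]
                  for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                  if 0 <= r + dr < h and 0 <= c + dc < w
                  and labels[r + dr][c + dc] > 0}
        if not basins:                     # a new local minimum: new basin
            labels[r][c] = next_label
            next_label += 1
        elif len(basins) == 1:             # slope of an existing basin
            labels[r][c] = basins.pop()
        else:                              # two basins meet: build a dam
            labels[r][c] = -1
    return labels
```

On the one-row "terrain" [3, 1, 2, 5, 2, 1, 3], the two valleys become basins 1 and 2, and the ridge pixel with gray value 5 becomes the dam separating them.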
The connected component generally refers to an image region composed of pixels having the same pixel value and adjacent positions in an image. Therefore, when performing image segmentation, the connected component model finds and marks each connected component in the image. It should be noted that the segmentation method is generally more suitable for binarized images and therefore can be used in conjunction with a threshold segmentation model.
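On a binarized image (e.g., the output of the threshold segmentation model), connected-component labelling can be sketched with a simple flood fill; 4-connectivity is assumed here:

```python
def label_components(binary):
    """Assign a distinct label to each 4-connected region of 1-pixels."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for r in range(h):
        for c in range(w):
            if binary[r][c] and not labels[r][c]:
                count += 1                 # found a new component
                stack = [(r, c)]
                labels[r][c] = count
                while stack:               # flood fill its pixels
                    y, x = stack.pop()
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = count
                            stack.append((ny, nx))
    return labels, count
```

Each marked component receives its own integer label, and the count of components is returned alongside the label map.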
When the neural network model is used for image segmentation, the model has learned the mapping relationships between a large number of source images and the target areas within them; when an image to be segmented is input, the model automatically determines and segments each target area based on the learned mapping relationships. A large number of diverse sample images annotated with target areas must be selected in advance to iteratively train the neural network model until it fits, after which it can be used for image segmentation.
In some optional embodiments, when no data is stored in the sample library, one of the segmentation models may be randomly selected as the first segmentation model, and the second image may be segmented using the first segmentation model to obtain a third segmented image.
S202, storing the second image and the first segmentation model in a sample library in a mutual mapping relationship.
S203, acquiring a first image and a first segmentation model, including: obtaining the first image, and obtaining the first segmentation model from the sample library. The first image is an image to be segmented, the first segmentation model is a segmentation model previously used to segment a second image, and the second image is an image similar to the first image.
Step S203 differs from step S101 in that step S203 specifies that the first segmentation model is acquired from the sample library; in particular, this can be realized through the following steps S301 to S303. As shown in fig. 3, a flowchart schematically illustrates the sub-steps of one step in the image segmentation method provided by the embodiment of the present application. Referring to fig. 3, S203 includes the following steps S301 to S303:
S301, acquiring a first image.
This step is similar to step S101; please refer to the related description above, which is not repeated here.
S302, searching the sample library for images similar to the first image.
In some alternative embodiments, the image features of the first image are extracted, and then based on the first image features, the sample library is searched for images that are similar to the image features of the first image. The image features may be one or more of color features, texture features, shape features, and spatial relationship features of the image.
In some alternative embodiments, the sample library is searched for images that are close to the image parameters of the first image based on the image parameters of the first image. As shown in fig. 4, a flow chart schematically illustrates sub-steps of another step in the image segmentation method provided in the embodiment of the present application. Referring to fig. 4, S302 includes the following steps S401 to S402:
S401, comparing image parameters of the first image with image parameters of the images in the sample library, where the image parameters include at least one of the following: sharpness, gray scale, and pixels;
S402, determining that a second image similar to the first image has been found when the comparison result shows that the image parameters of the first image and the image parameters of the second image in the sample library are within a preset allowable range.
The sharpness, gray scale, and pixels of the first image are compared with those of each image in the sample library. When the comparison shows that the sharpness difference, gray-scale difference, and pixel difference between the first image and some image are all within the allowable range, the image parameters of the two images are determined to be within the preset allowable range, i.e., that image is similar to the first image; for example, the comparison may find that the second image is similar to the first image.
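Steps S401–S402 amount to a linear scan of the sample library with a per-parameter tolerance test. A sketch under assumed data shapes (the sample library as a list of (parameters, model) pairs; the parameter names and tolerance values are hypothetical):

```python
def find_similar(query, library, tol):
    """Return the first (params, model) pair whose sharpness, gray level
    and pixel count all differ from `query` by at most the allowed range."""
    for params, model in library:
        if all(abs(query[k] - params[k]) <= tol[k]
               for k in ("sharpness", "gray", "pixels")):
            return params, model
    return None  # no similar sample found in the library
```

When an entry matches, its mapped segmentation model is the one used in S303; when nothing matches, the caller falls through to the no-similar-sample branch described later.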
S303, in a case where a second image similar to the first image is found in the sample library, acquiring from the sample library the first segmentation model that has a mapping relationship with the second image.
That is, after the second image similar to the first image is found in the sample library, the first segmentation model having a mapping relationship with the second image can be extracted from the sample library.
S204, inputting the first image into the first segmentation model for segmentation to obtain a first segmentation image.
Step S204 is partially the same as or similar to step S102, and please refer to the related description above, which is not repeated herein.
As shown in fig. 5, another flowchart of the image segmentation method provided in the embodiment of the present application is exemplarily shown. Referring to fig. 5, the image segmentation method includes the following steps S501 to S507:
S501, acquiring a first image and a first segmentation model, where the first image is an image to be segmented, the first segmentation model is a segmentation model previously used to segment a second image, and the second image is an image similar to the first image; the second image and the first segmentation model are stored in a sample library in a mutual mapping relationship.
Step S501 is similar to steps S101 and S203; for details, refer to the foregoing description, which is not repeated here.
S502, inputting the first image into the first segmentation model for segmentation to obtain a first segmented image.
Step S502 is similar to steps S102 and S204; for details, refer to the foregoing description, which is not repeated here.
S503, acquiring a second segmentation model.
The second segmentation model may be any one of the following models: an edge detection model, a threshold segmentation model, a region growing model, a watershed segmentation model, a connected domain model, or a neural network model for image segmentation; the threshold segmentation model includes a fixed threshold segmentation model and an adaptive threshold segmentation model. These segmentation models have been described above; for details, refer to the foregoing description, which is not repeated here.
In some alternative embodiments, one of the models described above is randomly selected as the second segmentation model.
S504, inputting the first image into the second segmentation model for segmentation to obtain a second segmented image.
That is, the first image is segmented by the second segmentation model to obtain the second segmented image.
It should be noted that although S502, S503, and S504 are numbered sequentially, the execution order of these three steps is not limited; that is, steps S503 and S504 may be executed before S502. Of course, steps S503 and S504 may also be executed before "acquiring the first segmentation model". In other words, "acquiring the first segmentation model" together with S502 as one whole, and S503 together with S504 as another whole, may be executed serially or in parallel, synchronously or asynchronously.
S505, comparing the first segmented image with the second segmented image.
S506, when the comparison result indicates that the segmentation effect of the second segmented image is better than that of the first segmented image, deleting the second image and the first segmentation model stored in the sample library in a mapping relationship, and storing the first image and the second segmentation model in the sample library in a mutual mapping relationship.
In some alternative embodiments, when the first segmented image is compared with the second segmented image, the sharpness, gray-scale distribution, and pixel distribution of the first segmented image are compared with those of the second segmented image; when all three measures of the second segmented image are better, the segmentation effect of the second segmented image is determined to be better than that of the first segmented image. This indicates that the second image and its mapped first segmentation model stored in the sample library are no longer the optimal sample for segmenting such images (images similar to the first image and/or the second image). Therefore, the second image and the first segmentation model stored in a mapping relationship in the sample library are deleted, and the first image and the second segmentation model are stored in the sample library in a mutual mapping relationship as the new sample for such images.
In the embodiment of the application, when the segmentation effect of the selected segmentation model is better than that of the segmentation model mapped to the similar image in the sample library, the samples in the sample library are updated. This improves the quality of the samples in the sample library, so that a better segmentation effect is obtained when images are subsequently segmented.
In some optional embodiments, there may also be no sample image corresponding to the first image in the sample library, and therefore, the image segmentation method further includes:
S507, if an image similar to the first image is not searched out from the sample library, storing the first image and the second segmentation model into the sample library in a mutual mapping relationship.
When no image similar to the first image is found in the sample library, the sample library lacks a sample similar to the first image. The first image and the second segmentation model are therefore stored into the sample library in a mutual mapping relationship, to serve as the sample for images similar to the first image and to enrich the sample library.
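The sample library of S506/S507 can be sketched as a simple in-memory store of image-model pairs. The class name and methods below are hypothetical and stand in for whatever database actually backs the library:

```python
class SampleLibrary:
    """Hypothetical in-memory sample library: each entry maps a stored
    sample image to the segmentation model that worked well for it."""

    def __init__(self):
        self._entries = []  # list of (image, model) pairs

    def find_similar(self, image, matcher):
        """Return the (sample, model) pair whose sample matches `image`
        under the caller-supplied similarity predicate, else None."""
        for stored, model in self._entries:
            if matcher(stored, image):
                return stored, model
        return None

    def replace(self, old_image, new_image, new_model):
        """S506: delete the old pair and store the winning pair."""
        self._entries = [(im, m) for im, m in self._entries
                         if im is not old_image]
        self._entries.append((new_image, new_model))

    def add(self, image, model):
        """S507: enrich the library when no similar sample exists."""
        self._entries.append((image, model))
```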
As shown in fig. 6, a further flowchart of the image segmentation method provided in the embodiment of the present application is exemplarily shown. Referring to fig. 6, the image segmentation method includes the following steps S601 to S603:
S601, acquiring a first image and a first segmentation model, wherein the first image is an image to be segmented, the first segmentation model is a segmentation model used for segmenting a second image in advance, and the second image is an image similar to the first image.
Step S601 is similar to or the same as steps S101, S203, and S501; for details, refer to the foregoing description, which is not repeated here.
S602, preprocessing the first image; and/or receiving an editing operation instruction aiming at the first image, and editing the first image according to the editing operation instruction.
The preprocessing comprises at least one of: performing gray scale transformation processing on the first image; performing gray inversion processing on the first image; performing histogram equalization processing on the first image; performing image smoothing processing on the first image; and performing image sharpening processing on the first image.
Gray scale transformation changes the gray value of each pixel in the source image point by point according to a certain transformation relationship under certain conditions, with the aim of improving image quality and making the displayed image clearer.
Grayscale inversion linearly or nonlinearly inverts the grayscale range of an image to produce an image with the inverse grayscale of the input image.
Gray scale transformation and gray inversion can be used together to adjust the gray levels of a set of pixels.
Histogram equalization adjusts contrast using the image histogram. Through this adjustment, brightness can be better distributed over the histogram; it can be used to enhance local contrast without affecting overall contrast.
Image smoothing smooths and de-noises an image that contains noise; different mask values can be selected for filtering so as to eliminate certain kinds of noise.
Image sharpening is used to enhance edge portions of an image.
After image smoothing, the image usually becomes blurred; that is, some edge portions are filtered out along with the noise. These edge portions can then be enhanced through image sharpening so as to enhance the image.
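Three of the listed preprocessing operations can be sketched in plain NumPy. The linear inversion, the CDF-based equalization, and the mean-mask size are illustrative choices, not the only realizations the embodiment allows:

```python
import numpy as np

def invert(gray):
    """Gray inversion (linear case): map each level v to 255 - v."""
    return 255 - gray

def equalize(gray):
    """Histogram equalization via the cumulative distribution of gray
    levels, redistributing brightness over the full range."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)
    return (cdf * 255).astype(np.uint8)[gray]

def box_smooth(gray, k=3):
    """Image smoothing with a k x k mean mask (one choice of mask)."""
    pad = k // 2
    p = np.pad(gray.astype(np.float64), pad, mode='edge')
    h, w = gray.shape
    out = sum(p[i:i + h, j:j + w]
              for i in range(k) for j in range(k)) / (k * k)
    return out.astype(np.uint8)
```

Image sharpening could then be approximated by adding back a scaled Laplacian of the smoothed image, restoring the edge detail the smoothing removed.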
In some alternative embodiments, the user may also edit the image and segment it after editing. Therefore, an editing operation instruction for the first image is received, and the first image is edited according to the editing operation instruction. The editing operation may be, for example, a geometric transformation such as cropping or rotation.
S603, inputting the preprocessed first image, the edited first image or the preprocessed and edited first image into the first segmentation model for segmentation.
In the embodiment of the application, before image segmentation, the image can be preprocessed or edited, so that image segmentation scenes under different requirements are met.
In the image segmentation method provided by the embodiment of the present application, the execution subject may be an image segmentation apparatus. In the embodiments below, an image segmentation apparatus performing the image segmentation method is taken as an example to describe the image segmentation apparatus provided in the embodiment of the present application.
As shown in fig. 7, an image segmentation apparatus 700 provided in an embodiment of the present application is schematically illustrated, and includes:
a first obtaining module 701, configured to obtain a first image and a first segmentation model, where the first image is an image to be segmented, the first segmentation model is a segmentation model obtained by segmenting a second image in advance, and the second image is an image similar to the first image;
a first input module 702, configured to input the first image into the first segmentation model for segmentation, so as to obtain a first segmented image.
Optionally, the second image and the first segmentation model are stored in a sample library in a mutual mapping relationship; the image segmentation apparatus 700 further includes:
the second acquisition module is used for acquiring a second segmentation model;
the second input module is used for inputting the first image into the second segmentation model for segmentation to obtain a second segmentation image;
the first comparison module is used for comparing the first segmentation image with the second segmentation image;
and the first storage module is used for deleting the second image and the first segmentation model which are stored in the sample library in a mapping relationship under the condition that the comparison result shows that the segmentation effect of the second segmentation image is better than that of the first segmentation image, and storing the first image and the second segmentation model in the sample library in a mutual mapping relationship.
Optionally, the first obtaining module 701 includes:
a first acquisition unit configured to acquire a first image;
a first searching unit configured to search the sample library for an image similar to the first image;
and the second acquisition unit is used for acquiring, from the sample library, the first segmentation model that has a mapping relationship with the second image, when a second image similar to the first image is searched out from the sample library.
Optionally, the image segmentation apparatus 700 further includes:
a second storage module, configured to store the first image and the second segmentation model in a mapping relationship with each other into the sample library if an image similar to the first image is not searched from the sample library.
Optionally, the first search unit includes:
a first comparison subunit, configured to compare image parameters of the first image with image parameters of images in the sample library, where the image parameters include at least one of: sharpness, gray scale, and pixels;
and the first searching subunit is used for searching out a second image similar to the first image in a case where the comparison result is that the image parameters of the first image and the image parameters of a second image in the sample library are within a preset allowable range.
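The parameter-based search performed by the first comparison subunit and the first searching subunit could look like the following sketch. The sharpness proxy, the use of mean gray level and pixel count, and the relative tolerance are all assumptions standing in for the unspecified "preset allowable range":

```python
import numpy as np

def image_params(gray):
    """Comparison parameters named in the text: a sharpness proxy
    (mean horizontal gradient), mean gray level, and pixel count."""
    f = gray.astype(np.float64)
    sharpness = float(np.abs(np.diff(f, axis=1)).mean())
    return np.array([sharpness, f.mean(), f.size])

def similar(a, b, rel_tol=0.2):
    """Hypothetical matching rule: every parameter of the two images
    must lie within a preset relative tolerance of the other's."""
    pa, pb = image_params(a), image_params(b)
    return bool(np.all(np.abs(pa - pb) <= rel_tol * np.maximum(np.abs(pb), 1)))
```

A library image passing `similar` would be treated as the "second image", and its mapped model would be handed to the second acquisition unit.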
Optionally, the image segmentation apparatus 700 further includes:
a first determination module, configured to determine one segmentation model from a plurality of preset segmentation models as the first segmentation model;
a third storage module, configured to store the second image and the first segmentation model in a sample library in a mutually mapped relationship;
the first obtaining module 701 includes:
a first obtaining sub-module, configured to obtain the first image and obtain the first segmentation model from the sample library;
wherein,
the plurality of segmentation models comprises at least one of: the method comprises the following steps of (1) an edge detection model, a threshold segmentation model, a region growing model, a watershed segmentation model, a connected domain model and a neural network model for image segmentation; the threshold segmentation model comprises a fixed threshold segmentation model and an adaptive threshold segmentation model.
Optionally, the image segmentation apparatus 700 further includes:
the preprocessing module is used for preprocessing the first image;
the first input module 702 includes:
and the first input submodule is used for inputting the preprocessed first image into the first segmentation model for segmentation.
Optionally, the preprocessing comprises at least one of:
performing gray scale conversion processing on the first image;
carrying out gray inversion processing on the first image;
performing histogram equalization processing on the first image;
performing image smoothing processing on the first image;
and carrying out image sharpening processing on the first image.
Optionally, the image segmentation apparatus 700 further includes:
the editing module is used for receiving an editing operation instruction aiming at the first image and editing the first image according to the editing operation instruction;
the first input module 702 includes:
and the second input sub-module is used for inputting the edited first image into the first segmentation model for segmentation.
Similar images follow a substantially similar segmentation law when segmented. In the embodiment of the application, this similarity is exploited: the first image is segmented with the first segmentation model that was previously used to segment the second image similar to the first image, which amounts to adaptively matching a suitable segmentation model to the first image to be segmented. Compared with the related-art scheme in which a conclusion can be reached only after the image has been segmented by multiple segmentation models and the results compared, this effectively improves the processing efficiency of image segmentation, and to a certain extent avoids the cumulative image error introduced by multiple segmentations with multiple models.
The image segmentation apparatus in the embodiment of the present application may be an electronic device, or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an Ultra-Mobile Personal Computer (UMPC), a netbook, or a Personal Digital Assistant (PDA); it may also be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, which is not specifically limited in the embodiments of the present application.
The image segmentation apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an ios operating system, or other possible operating systems, which is not specifically limited in the embodiment of the present application.
The image segmentation device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to 6, and is not described here again to avoid repetition.
As shown in fig. 8, an image segmentation system provided by an embodiment of the present application is schematically illustrated. Referring to fig. 8, the image segmentation system 800 includes:
the database 801 is used for storing images to be segmented and a sample library.
The statistics module 802 collects statistics on the processing procedure, the processing steps, and the final optimal processing method for the currently processed image; specifically, it statistically compares the processing effects of the image segmentation module 807 and the image editing module 808 on the image data. Illustratively, in a case where the comparison result is that the segmentation effect of the second segmented image is better than that of the first segmented image, the second image and the first segmentation model stored in the sample library in a mapping relationship are deleted, and the first image and the second segmentation model are stored into the sample library in a mutual mapping relationship.
The comparison module 803 is configured to, when an image is input, search the sample library in the database 801 for an image similar to the image, and send a segmentation model of the image similar to the image to the recommendation module.
A recommending module 804, configured to transmit the segmentation model recommended by the comparing module 803 to the image segmentation module 807, so that the image segmentation module 807 performs segmentation on the image.
The operation module 805 is configured to read in an image from the database transmitted by the recommendation module, and to close the transmission channel after one image has been completely received, so as to prevent subsequent images from interfering with the currently processed image; it can also reset an image for which an error occurred during processing, and cancel image processing.
An image preprocessing module 806, configured to preprocess the image to be segmented.
An image segmentation module 807 for segmenting the image to be segmented according to the segmentation model pushed by the recommendation module.
And the image editing module 808 is configured to receive an editing operation instruction, and perform editing operation on the image according to the editing operation instruction.
And a display module 809 for displaying the segmented image on a display screen.
And an output module 810, configured to output or transmit the segmented image to the database 801.
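Tying the fig. 8 modules together, the comparison (803) → recommendation (804) → segmentation (807) flow, with the S507 fallback that enriches the sample library, can be expressed as a minimal sketch; all names and the list-of-pairs library are hypothetical:

```python
def segment_with_library(image, entries, matcher, default_model):
    """entries: mutable list of (sample_image, model) pairs, standing in
    for the sample library in database 801."""
    for sample, model in entries:           # comparison module 803
        if matcher(sample, image):
            return model(image)             # 804 recommends, 807 segments
    entries.append((image, default_model))  # S507: no similar sample found,
    return default_model(image)             # enrich the library and segment
```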
The image segmentation system provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to 6, and is not described here again to avoid repetition.
Optionally, as shown in fig. 9, an electronic device 900 is further provided in this embodiment of the present application, and includes a processor 901 and a memory 902, where the memory 902 stores a program or an instruction that can be executed on the processor 901, and when the program or the instruction is executed by the processor 901, the steps of the embodiment of the image segmentation method are implemented, and the same technical effects can be achieved, and are not described again to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further comprise a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 110 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than illustrated, may combine some components, or may use a different arrangement of components, which is not described further herein.
The processor 110 is configured to execute the following steps:
acquiring a first image and a first segmentation model, wherein the first image is an image to be segmented, the first segmentation model is a segmentation model for segmenting a second image in advance, and the second image is an image similar to the first image;
and inputting the first image into the first segmentation model for segmentation to obtain a first segmentation image.
Similar images follow a substantially similar segmentation law when segmented. In the embodiment of the application, this similarity is exploited: the first image is segmented with the first segmentation model that was previously used to segment the second image similar to the first image, which amounts to adaptively matching a suitable segmentation model to the first image to be segmented. Compared with the related-art scheme in which a conclusion can be reached only after the image has been segmented by multiple segmentation models and the results compared, this effectively improves the processing efficiency of image segmentation, and to a certain extent avoids the cumulative image error introduced by multiple segmentations with multiple models.
It should be understood that, in the embodiment of the present application, the input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics Processing Unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes at least one of a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a first storage area storing a program or an instruction and a second storage area storing data, where the first storage area may store an operating system, and an application program or an instruction required for at least one function (such as a sound playing function or an image playing function), and the like. Further, the memory 109 may include volatile memory or non-volatile memory, or the memory 109 may include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchronous Link DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 109 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 110 may include one or more processing units; optionally, the processor 110 integrates an application processor, which mainly handles operations related to the operating system, user interface, application programs, etc., and a modem processor, which mainly handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image segmentation method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a computer Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the embodiment of the image segmentation method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing embodiment of the image segmentation method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a/an..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; it may also include performing the functions in a substantially simultaneous manner or in a reverse order according to the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the present embodiments are not limited to those precise embodiments, which are intended to be illustrative rather than restrictive, and that various changes and modifications may be effected therein by one skilled in the art without departing from the scope of the appended claims.

Claims (10)

1. An image segmentation method, characterized in that the method comprises:
acquiring a first image and a first segmentation model, wherein the first image is an image to be segmented, the first segmentation model is a segmentation model for segmenting a second image in advance, and the second image is an image similar to the first image;
and inputting the first image into the first segmentation model for segmentation to obtain a first segmentation image.
2. The method of claim 1, wherein the second image and the first segmentation model are stored in a sample repository in a mapped relationship to each other;
the method further comprises the following steps:
acquiring a second segmentation model;
inputting the first image into the second segmentation model for segmentation to obtain a second segmentation image;
comparing the first segmentation image with a second segmentation image;
and under the condition that the comparison result shows that the segmentation effect of the second segmented image is better than that of the first segmented image, deleting the second image and the first segmented model which are stored in the sample library in a mapping relationship, and storing the first image and the second segmented model in the sample library in a mutual mapping relationship.
3. The method of claim 2, wherein the obtaining the first image and the first segmentation model comprises:
acquiring a first image;
searching the sample library for images similar to the first image;
and in a case that a second image similar to the first image is searched out from the sample library, acquiring, from the sample library, the first segmentation model that has a mapping relationship with the second image.
4. The method of claim 3, further comprising:
storing the first image and the second segmentation model in a mutual mapping relationship into the sample library in a case where an image similar to the first image is not searched from the sample library.
5. The method according to claim 3 or 4, wherein the searching the sample library for images similar to the first image comprises:
comparing image parameters of the first image with image parameters of images in the sample library, wherein the image parameters comprise at least one of the following: sharpness, gray scale, and pixels;
and searching out a second image similar to the first image in a case where the comparison result is that the image parameters of the first image and the image parameters of a second image in the sample library are within a preset allowable range.
6. The method of claim 1, wherein prior to said obtaining the first image and the first segmentation model, the method further comprises:
determining a segmentation model from a plurality of preset segmentation models as a first segmentation model;
storing the second image and the first segmentation model in a sample library in a mapped relationship to each other;
the acquiring the first image and the first segmentation model includes:
obtaining the first image and the first segmentation model from the sample library;
wherein, the first and the second end of the pipe are connected with each other,
the plurality of segmentation models comprises at least one of: an edge detection model, a threshold segmentation model, a region growing model, a watershed segmentation model, a connected domain model, and a neural network model for image segmentation; the threshold segmentation model comprises a fixed threshold segmentation model and an adaptive threshold segmentation model.
7. The method according to any one of claims 1 to 4 or claim 6, wherein prior to said inputting said first image into said first segmentation model for segmentation, said method further comprises:
preprocessing the first image;
the inputting the first image into the first segmentation model for segmentation comprises:
and inputting the preprocessed first image into the first segmentation model for segmentation.
8. An image segmentation apparatus, characterized in that the apparatus comprises:
a first obtaining module, configured to obtain a first image and a first segmentation model, where the first image is an image to be segmented, the first segmentation model is a segmentation model obtained by segmenting a second image in advance, and the second image is an image similar to the first image;
and the first input module is used for inputting the first image into the first segmentation model for segmentation to obtain a first segmentation image.
9. An electronic device comprising a processor and a memory, the memory storing a program or instructions executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the image segmentation method according to any one of claims 1 to 7.
10. A readable storage medium, on which a program or instructions are stored which, when executed by a processor, implement the steps of the image segmentation method according to any one of claims 1 to 7.
Publications (1)

Publication Number Publication Date
CN115423817A (en) 2022-12-02

