CN109447992B - Image segmentation method and device - Google Patents
- Publication number
- CN109447992B (application CN201811235796.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- target
- result
- splicing
- segmentation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T 7/10 — Image analysis; Segmentation; Edge detection
- G06F 18/214 — Pattern recognition; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06T 3/4038 — Scaling of whole images or parts thereof; Image mosaicing, e.g. composing plane images from plane sub-images
- G06T 7/70 — Image analysis; Determining position or orientation of objects or cameras
- G06T 2200/32 — Indexing scheme for image data processing or generation, in general, involving image mosaicing
Abstract
The application provides an image segmentation method and device. A target image to be segmented is input into an image segmentation model to obtain an image segmentation result of the target image, where the result comprises at least one image block to be spliced into the target image and, for each image block, the coordinates of the pixel point of the block's target position in the target image. The at least one image block is then spliced based on those coordinates, so that the target image can be restored.
Description
Technical Field
The present invention relates to the field of image segmentation technologies, and in particular, to an image segmentation method and an image segmentation apparatus.
Background
With the development of image processing technology, image segmentation has come to be applied ever more widely. Image segmentation is the technique and process of dividing an image into several specific regions with unique properties and extracting objects of interest; it is a key step on the way from image processing to image analysis.
Existing image segmentation technology focuses on extracting and analysing the segmented regions with unique properties, and does not consider how to restore the image after segmentation is completed. For a static webpage in particular, image segmentation is generally applied so that different image blocks can be analysed to obtain the properties of the contents they correspond to in the static webpage; how to restore the static webpage after its segmentation is completed, however, is not considered.
In view of the above, there is an urgent need for an image segmentation method that facilitates image restoration after image segmentation is completed.
Disclosure of Invention
In view of the above, the present invention provides an image segmentation method and an image segmentation device, so as to facilitate image restoration after image segmentation is completed.
The technical scheme is as follows:
an image segmentation method comprising:
acquiring a target image to be segmented;
inputting the target image into a pre-trained image segmentation model to obtain an image segmentation result of the target image; the image segmentation result comprises at least one image block spliced into the target image and coordinates of pixel points of the target position of each image block in the target image;
the image segmentation model is obtained by taking a plurality of first image samples, each with a pre-calibrated image segmentation result, as input information of an image segmentation model to be trained, and training the image segmentation model to be trained until its predicted image segmentation result for each first image sample approaches the pre-calibrated image segmentation result of that first image sample.
Preferably, the method further comprises the following steps:
splicing the at least one image block based on the coordinates of the pixel points of the target position of the image block in the target image to obtain an image splicing result;
determining whether the image stitching result is the same as the target image;
if so, determining that the image segmentation model is accurate.
Preferably, the image segmentation result further includes a coordinate range of a pixel point of each image block in the target image,
the step of splicing the at least one image block based on the coordinates of the pixel points of the target position of the image block in the target image to obtain an image splicing result comprises the following steps:
calculating, according to the coordinates of the target position of the image block in the target image and the coordinate range of the image block, the coordinate range of the image block in the target image;
and generating an image splicing result, wherein the image splicing result comprises the coordinate range of each image block in at least one image block in the target image.
Preferably, the determining whether the image stitching result is the same as the target image includes:
determining whether the coordinate ranges of the image blocks in the at least one image block have coordinate overlapping;
if not, determining that the image splicing result is the same as the target image;
and if so, determining that the image splicing result is different from the target image.
Preferably, if it is determined that the image stitching result is different from the target image, the method further includes:
obtaining a plurality of second image samples, each with a pre-calibrated image segmentation result; taking the image segmentation model as an image segmentation model to be trained and the plurality of second image samples as its input information; and continuing to train the image segmentation model to be trained until its predicted image segmentation result for each second image sample approaches the pre-calibrated image segmentation result of that second image sample, so as to obtain the image segmentation model;
the image segmentation difficulty of the second image sample is greater than the image segmentation difficulty of the first image sample.
Preferably, the method further comprises the following steps:
aiming at each image block in the image segmentation result of the target image, calculating the relative position coordinates of the target position of the image block in the target image according to the coordinates of the pixel point of the target position of the image block in the target image;
and generating a target image segmentation result of the target image, wherein the target image segmentation result comprises at least one image block used for splicing the target image and a relative position coordinate of a target position of each image block in the target image.
Preferably, the method further comprises the following steps:
splicing the at least one image block based on the relative position coordinates of the target position of the image block in the target image to obtain an image splicing result;
determining whether the image stitching result is the same as the target image;
if so, determining that the image segmentation model is accurate.
Preferably, the image segmentation result further includes a coordinate range of a pixel point of each image block in the target image,
the image splicing method comprises the following steps of splicing at least one image block based on the relative position coordinates of the target position of the image block in the target image to obtain an image splicing result, wherein the image splicing result comprises the following steps:
calculating, according to the relative position coordinate of the target position of the image block in the target image and the coordinate range of the image block, the relative coordinate range of the image block in the target image;
and generating an image splicing result, wherein the image splicing result comprises the relative coordinate range of each image block in at least one image block in the target image.
Preferably, the determining whether the image stitching result is the same as the target image includes:
determining whether relative coordinate ranges of each image block in the at least one image block overlap;
if not, determining that the image splicing result is the same as the target image;
and if so, determining that the image splicing result is different from the target image.
An image segmentation apparatus comprising:
the target image acquisition unit is used for acquiring a target image to be segmented;
the image segmentation unit is used for inputting the target image into a pre-trained image segmentation model to obtain an image segmentation result of the target image; the image segmentation result comprises at least one image block spliced into the target image and coordinates of pixel points of the target position of each image block in the target image;
the image segmentation model is obtained by taking a plurality of first image samples, each with a pre-calibrated image segmentation result, as input information of an image segmentation model to be trained, and training the image segmentation model to be trained until its predicted image segmentation result for each first image sample approaches the pre-calibrated image segmentation result of that first image sample.
The application provides an image segmentation method and device. A target image to be segmented is input into an image segmentation model to obtain an image segmentation result of the target image, where the result comprises at least one image block to be spliced into the target image and, for each image block, the coordinates of the pixel point of the block's target position in the target image. The at least one image block is then spliced based on those coordinates, so that the target image can be restored.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present invention; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flowchart of an image segmentation method according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of another image stitching method according to an embodiment of the present disclosure;
fig. 3 is a flowchart of a method for obtaining an image stitching result by stitching at least one image block based on coordinates of pixel points of a target position of the image block in a target image according to the embodiment of the present application;
FIG. 4 is a flowchart of another image segmentation method provided in the embodiments of the present application;
FIG. 5 is a flowchart of another image segmentation method provided in the embodiments of the present application;
fig. 6 is a schematic structural diagram of an image segmentation apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. It is obvious that the described embodiments are only a part, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the given embodiments without creative effort shall fall within the protection scope of the present invention.
Embodiment:
fig. 1 is a flowchart of an image segmentation method according to an embodiment of the present application.
As shown in fig. 1, the method includes:
s101, obtaining a target image to be segmented;
the image segmentation method provided by the embodiment of the application takes an image to be subjected to image segmentation as a target image.
S102, inputting a target image into a pre-trained image segmentation model to obtain an image segmentation result of the target image; the image segmentation result comprises at least one image block which is used for splicing into a target image and coordinates of pixel points of a target position of each image block in the target image;
optionally, the target image is input to a pre-trained image segmentation model as input information, an output result of the image segmentation model is an image segmentation result of the target image, the image segmentation result includes at least one image block and coordinates of pixel points of a target position of each image block in the target image, and the at least one image block is used for being spliced into the target image.
In the embodiment of the present application, the target position of an image block may be the center point of the image block. This is only a preferred choice of target position; the inventor may set the target position arbitrarily according to his own requirements, for example as the point at the upper-right corner of the image block, and no limitation is intended here. It should also be noted that the target position is the same for every image block in the image segmentation result output by the image segmentation model (for example, the target position of every image block is its center point), while the coordinates of the pixel points of those target positions in the target image differ from block to block (for example, the coordinates of the center point of each image block in the target image are different).
The image segmentation model is obtained by taking a plurality of first image samples, each with an image segmentation result calibrated in advance, as input information of an image segmentation model to be trained, and training that model until its predicted image segmentation result for each first image sample approaches the pre-calibrated image segmentation result of that sample.
In the embodiment of the present application, the image segmentation model may be pre-trained as follows. A plurality of first image samples are acquired, each calibrated with an image segmentation result. An unselected first image sample is selected and input, as input information, into the image segmentation model to be trained, and the model's prediction result for that sample is obtained; if the prediction result does not approach the calibrated image segmentation result of the sample, the parameters of the model are adjusted based on the prediction result. After the parameters are adjusted, the next unselected first image sample is selected and training continues, until the prediction result for each first image sample approaches its calibrated image segmentation result and the model converges; the image segmentation model to be trained at that point is taken as the pre-trained image segmentation model.
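The training loop just described can be sketched in Python as follows. The `SegmentationModel` class, its single scalar parameter, the update rule, and the convergence tolerance are hypothetical stand-ins for illustration only; the embodiment fixes neither a model architecture nor an optimizer.

```python
# Sketch of the pre-training loop: keep selecting samples and adjusting
# parameters until every prediction approaches its calibrated result.

class SegmentationModel:
    """Toy stand-in: a model with one scalar parameter (hypothetical)."""
    def __init__(self, split=0.0):
        self.split = split

    def predict(self, sample):
        return self.split

    def update(self, sample, target, lr=0.5):
        # Adjust the parameter toward the calibrated segmentation result.
        self.split += lr * (target - self.split)


def train(model, samples, tol=1e-3, max_epochs=100):
    """Train until the prediction for each sample is within tol of its
    pre-calibrated result, i.e. the model has converged."""
    for _ in range(max_epochs):
        converged = True
        for sample, target in samples:
            if abs(model.predict(sample) - target) > tol:
                model.update(sample, target)
                converged = False
        if converged:
            break
    return model


# First image samples with pre-calibrated "segmentation results"
# (reduced to a single number each, purely for illustration).
first_samples = [("img_a", 0.4), ("img_b", 0.4)]
model = train(SegmentationModel(), first_samples)
print(abs(model.predict("img_a") - 0.4) < 1e-3)  # True
```

The same loop structure carries over to S206, where harder second image samples are fed to the already-trained model.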
Fig. 2 is a flowchart of another image stitching method according to an embodiment of the present application.
As shown in fig. 2, the method includes:
s201, obtaining a target image to be segmented;
s202, inputting a target image into a pre-trained image segmentation model to obtain an image segmentation result of the target image; the image segmentation result comprises at least one image block which is used for splicing into a target image and coordinates of pixel points of a target position of each image block in the target image;
s203, splicing at least one image block based on the coordinates of the pixel points of the target position of the image block in the target image to obtain an image splicing result;
s204, determining whether the image splicing result is the same as the target image; if yes, go to step S205; if not, go to step S206;
further, in the image stitching method provided by the embodiment of the present application, when it is determined that the image stitching result is different from the target image, the method further includes step S206.
S205, determining that the image segmentation model is accurate;
s206, obtaining a plurality of second image samples, each with a pre-calibrated image segmentation result; taking the image segmentation model as an image segmentation model to be trained and the plurality of second image samples as its input information; and continuing to train the image segmentation model to be trained until its predicted image segmentation result for each second image sample approaches the pre-calibrated image segmentation result of that second image sample, so as to obtain the image segmentation model.
In the embodiment of the present application, the image segmentation difficulty of the second image sample is greater than the image segmentation difficulty of the first image sample.
In this embodiment of the present application, the image segmentation result further includes a coordinate range of a pixel point of each image block in the target image, and fig. 3 is a flowchart of a method for obtaining an image stitching result by stitching at least one image block based on a coordinate of a pixel point of a target position of an image block in the target image according to the embodiment of the present application.
As shown in fig. 3, the method includes:
s301, calculating, according to the coordinates of the target position of the image block in the target image and the coordinate range of the image block, the coordinate range of the image block in the target image;
s302, generating an image splicing result, wherein the image splicing result comprises a coordinate range of each image block in at least one image block in the target image.
Correspondingly, determining whether the image stitching result is the same as the target image includes determining whether the coordinate ranges of the image blocks in the at least one image block overlap; if not, determining that the image splicing result is the same as the target image; and if so, determining that the image splicing result is different from the target image.
Further, when it is determined that the coordinate ranges of the image blocks in the at least one image block do not overlap, the embodiment of the present application may additionally determine whether those coordinate ranges together cover the coordinates of all pixel points in the target image; if so, the image stitching result is determined to be the same as the target image, and if not, the image stitching result is determined to be different from the target image.
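Steps S301–S302 and the subsequent overlap and coverage checks can be sketched as follows. It is assumed, purely for illustration, that each block carries its centre-point coordinate plus a width and height (the centre point being the preferred target position above); the example image and block sizes are hypothetical.

```python
# Sketch of S301-S302 plus the overlap and coverage checks.
# Each block is (center_x, center_y, width, height).

def block_range(cx, cy, w, h):
    """Coordinate range (x0, y0, x1, y1) of a block in the target image."""
    x0, y0 = cx - w // 2, cy - h // 2
    return (x0, y0, x0 + w, y0 + h)

def ranges_overlap(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def stitch_matches_target(blocks, img_w, img_h):
    ranges = [block_range(*blk) for blk in blocks]
    # Different from the target image if any two ranges overlap.
    for i in range(len(ranges)):
        for j in range(i + 1, len(ranges)):
            if ranges_overlap(ranges[i], ranges[j]):
                return False
    # With no overlap, same as the target image only if the ranges
    # together cover every pixel of the image.
    area = sum((x1 - x0) * (y1 - y0) for x0, y0, x1, y1 in ranges)
    return area == img_w * img_h

# Two 4x8 blocks tiling an 8x8 target image without overlap.
blocks = [(2, 4, 4, 8), (6, 4, 4, 8)]
print(stitch_matches_target(blocks, 8, 8))  # True
```

Shifting the second block's centre to (5, 4) makes the ranges overlap, and the check then reports that the stitching result differs from the target image.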
The above is only a preferable way to determine whether the image stitching result is the same as the target image provided in the embodiment of the present application, and regarding a specific way to determine whether the image stitching result is the same as the target image, the inventor may perform setting according to his own needs, and is not limited herein.
Fig. 4 is a flowchart of another image segmentation method according to an embodiment of the present application.
As shown in fig. 4, the method includes:
s401, obtaining a target image to be segmented;
s402, inputting the target image into a pre-trained image segmentation model to obtain an image segmentation result of the target image; the image segmentation result comprises at least one image block which is used for splicing into a target image and coordinates of pixel points of a target position of each image block in the target image;
s403, for each image block in the image segmentation result of the target image, calculating the relative position coordinates of the target position of the image block in the target image according to the coordinates of the pixel point of the target position of the image block in the target image;
in this embodiment of the present application, calculating the relative position coordinates of the target position of the image block in the target image according to the coordinates of the pixel points of the target position of the image block in the target image includes: calculating the relative position coordinate of the pixel point of the target position of the image block with respect to the origin coordinate of the target image, and determining the calculated relative position coordinate as the relative position coordinate of the target position of the image block in the target image.
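A minimal sketch of this calculation: the relative position coordinate is the target-position coordinate minus the origin coordinate of the target image. The origin and example numbers below are illustrative assumptions.

```python
# Sketch of S403: relative position coordinate with respect to the
# origin of the target image.

def relative_position(abs_xy, origin_xy):
    ax, ay = abs_xy
    ox, oy = origin_xy
    return (ax - ox, ay - oy)

# With the target image's origin at (100, 50) in some larger canvas,
# a block whose target position sits at (132, 74) has relative
# position coordinate:
print(relative_position((132, 74), (100, 50)))  # (32, 24)
```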
S404, generating a target image segmentation result of the target image, wherein the target image segmentation result comprises at least one image block used for splicing the target image and relative position coordinates of a target position of each image block in the target image.
Fig. 5 is a flowchart of another image segmentation method according to an embodiment of the present application.
As shown in fig. 5, the method includes:
s501, obtaining a target image to be segmented;
s502, inputting a target image into a pre-trained image segmentation model to obtain an image segmentation result of the target image; the image segmentation result comprises at least one image block which is used for splicing into a target image and coordinates of pixel points of a target position of each image block in the target image;
s503, aiming at each image block in the image segmentation result of the target image, calculating the relative position coordinates of the target position of the image block in the target image according to the coordinates of the pixel point of the target position of the image block in the target image;
s504, generating a target image segmentation result of the target image, wherein the target image segmentation result comprises at least one image block used for splicing the target image and a relative position coordinate of a target position of each image block in the target image;
s505, splicing at least one image block based on the relative position coordinates of the target position of the image block in the target image to obtain an image splicing result;
s506, determining whether the image splicing result is the same as the target image; if yes, go to step S507;
in this embodiment of the present application, the image segmentation result further includes the coordinate range of the pixel points of each image block in the target image, and stitching the at least one image block based on the relative position coordinates of the target position of the image block in the target image to obtain an image stitching result includes: calculating, according to the relative position coordinates of the target position of the image block in the target image and the coordinate range of the image block, the relative coordinate range of the image block in the target image; and generating an image splicing result comprising the relative coordinate range, in the target image, of each image block in the at least one image block.
Correspondingly, in the embodiment of the present application, determining whether the image stitching result is the same as the target image includes: determining whether relative coordinate ranges of each image block in the at least one image block overlap; if not, determining that the image splicing result is the same as the target image; and if so, determining that the image splicing result is different from the target image.
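The relative-coordinate variant of the check can be sketched by shifting each block's coordinate range by the image origin and reusing the same pairwise overlap test as in the absolute-coordinate case; the origin and ranges below are illustrative assumptions.

```python
# Sketch of S505-S506: translate each block's coordinate range into
# relative coordinates, then test the relative ranges for overlap.

def relative_range(rng, origin):
    x0, y0, x1, y1 = rng
    ox, oy = origin
    return (x0 - ox, y0 - oy, x1 - ox, y1 - oy)

def overlaps(a, b):
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

origin = (100, 50)
abs_ranges = [(100, 50, 104, 58), (104, 50, 108, 58)]
rel = [relative_range(r, origin) for r in abs_ranges]
# Same as the target image only when no two relative ranges overlap.
same_as_target = not overlaps(rel[0], rel[1])
print(same_as_target)  # True
```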
In the embodiment of the present application, preferably, when it is determined that the image stitching result is different from the target image, step S206 may be further performed; or second image samples may be further obtained to train the image segmentation model; or it may first be determined whether step S503 contains an error, and if it does, second image samples are further acquired to train the image segmentation model.
And S507, determining that the image segmentation model is accurate.
Fig. 6 is a schematic structural diagram of an image segmentation apparatus according to an embodiment of the present application. As shown in fig. 6, the apparatus includes:
a target image acquisition unit 61 for acquiring a target image to be segmented;
an image segmentation unit 62, configured to input the target image into a pre-trained image segmentation model, and obtain an image segmentation result of the target image; the image segmentation result comprises at least one image block which is used for splicing into a target image and coordinates of pixel points of a target position of each image block in the target image;
the image segmentation model is obtained by taking a plurality of first image samples, each with an image segmentation result calibrated in advance, as input information of an image segmentation model to be trained, and training that model until its predicted image segmentation result for each first image sample approaches the pre-calibrated image segmentation result of that sample.
Further, an image segmentation apparatus provided in an embodiment of the present application further includes an image segmentation checking unit, where the image segmentation checking unit includes: the image splicing result generating unit is used for splicing at least one image block based on the coordinates of the pixel points of the target position of the image block in the target image to obtain an image splicing result; the comparison unit is used for determining whether the image splicing result is the same as the target image; and if so, determining that the image segmentation model is accurate.
In the embodiment of the application, the image stitching result generating unit is specifically configured to calculate, according to the coordinate of the target position of an image block in the target image and the coordinate range of the image block, the coordinate range of the image block in the target image; and to generate an image splicing result comprising the coordinate range, in the target image, of each image block in the at least one image block.
In an embodiment of the present application, the comparing unit is specifically configured to determine whether there is coordinate overlap in a coordinate range of each image block in at least one image block; if not, determining that the image splicing result is the same as the target image; and if so, determining that the image splicing result is different from the target image.
Further, the image segmentation device provided in the embodiment of the present application further includes an image segmentation model training unit, configured to, if it is determined that the image stitching result is different from the target image, obtain a plurality of second image samples that are calibrated in advance with respect to the image segmentation result, use the image segmentation model as an image segmentation model to be trained, continue training the image segmentation model to be trained with the plurality of second image samples as input information of the image segmentation model to be trained, and obtain the image segmentation model until a predicted image segmentation result of the second image sample by the image segmentation model to be trained approaches the image segmentation result that is calibrated in advance for the second image sample; the image segmentation difficulty of the second image sample is greater than the image segmentation difficulty of the first image sample.
Further, the image segmentation device provided by the embodiment of the present application further includes a target image segmentation result calculation unit, configured to calculate, for each image block in the image segmentation result of the target image, a relative position coordinate of a target position of the image block in the target image according to a coordinate of a pixel point of the target position of the image block in the target image; and generating a target image segmentation result of the target image, wherein the target image segmentation result comprises at least one image block used for splicing the target image and relative position coordinates of the target position of each image block in the target image.
In the embodiment of the application, the image stitching result generating unit is specifically configured to determine the relative coordinate range of an image block in the target image from the relative position coordinate of the target position of the image block in the target image and the coordinate range of the image block, and to generate an image stitching result that includes the relative coordinate range, in the target image, of each image block in the at least one image block.
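A sketch of the two relative-coordinate computations above, assuming that "relative" means normalized to the target image's width and height, and that the target position is the block's top-left corner (the patent leaves both choices open):

```python
def relative_position(px, py, width, height):
    """Normalize the absolute pixel coordinate (px, py) of a block's target
    position into a relative coordinate in the target image of the given size."""
    return px / width, py / height

def relative_range(px, py, bw, bh, width, height):
    """Relative coordinate range (x0, y0, x1, y1) of a block of size bw x bh,
    derived from its target-position coordinate and the target image's size."""
    x0, y0 = relative_position(px, py, width, height)
    return (x0, y0, x0 + bw / width, y0 + bh / height)
```

Working in relative coordinates makes the stitching result independent of the target image's resolution, so the same overlap comparison applies at any scale.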
In the embodiment of the present application, the comparing unit is specifically configured to determine whether the relative coordinate ranges of the image blocks in the at least one image block overlap; if not, to determine that the image stitching result is the same as the target image; and if so, to determine that the image stitching result is different from the target image.
The present application provides an image segmentation method and device. A target image to be segmented is input into an image segmentation model to obtain an image segmentation result of the target image, where the image segmentation result includes at least one image block used for stitching into the target image and the coordinates of the pixel points of the target position of each image block in the target image. The at least one image block is then stitched together based on those coordinates to restore the target image.
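The restore step can be sketched as pasting each block onto a blank canvas at its target-position coordinate. This toy version represents images as nested lists of pixel values and assumes the target position is the block's top-left corner; a practical implementation would use numpy or PIL:

```python
def stitch(blocks, width, height, fill=0):
    """Restore the target image from (target_position, pixels) pairs.

    `blocks` is a list of ((x, y), pixels) entries, where (x, y) is the
    coordinate of the block's target position in the target image (assumed
    here to be the block's top-left corner) and `pixels` is a list of rows."""
    canvas = [[fill] * width for _ in range(height)]
    for (x, y), pixels in blocks:
        for dy, row in enumerate(pixels):
            for dx, value in enumerate(row):
                canvas[y + dy][x + dx] = value
    return canvas
```

If the model's predicted coordinates are correct, every canvas pixel is written exactly once and the stitched result reproduces the target image; overlapping writes are the symptom the comparing unit detects.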
The image segmentation method and device provided by the present invention are described in detail above. Specific examples are used herein to explain the principles and implementation of the invention; the description of the above embodiments is intended only to help in understanding the method and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.
It should be noted that the embodiments in this specification are described in a progressive manner: each embodiment focuses on its differences from the other embodiments, and for the same or similar parts the embodiments may be referred to one another. Since the device disclosed in the embodiments corresponds to the method disclosed therein, its description is relatively brief; for relevant details, refer to the description of the method.
It is further noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Also, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (9)
1. An image segmentation method, comprising:
acquiring a target image to be segmented;
inputting the target image into a pre-trained image segmentation model to obtain an image segmentation result of the target image; wherein the image segmentation result comprises at least two image blocks to be stitched into the target image and the coordinates of the pixel points of the target position of each image block in the target image; the target position is the same position within each image block, and the target positions of the image blocks in the target image are different from one another;
splicing the at least two image blocks based on the coordinates of the pixel points of the target positions of the image blocks in the target image to obtain an image splicing result;
determining whether the image stitching result is the same as the target image;
if so, determining that the image segmentation model is accurate;
wherein the image segmentation model is obtained by taking a plurality of first image samples whose image segmentation results are calibrated in advance as input information of an image segmentation model to be trained, and training the image segmentation model to be trained until its predicted image segmentation result for each first image sample approaches the image segmentation result calibrated in advance for that first image sample.
2. The method according to claim 1, wherein the image segmentation result further includes a coordinate range of pixel points of each of the image blocks in the target image,
the method for splicing the at least two image blocks based on the coordinates of the pixel points of the target positions of the image blocks in the target image to obtain an image splicing result comprises the following steps:
calculating the coordinate range of the image block in the target image according to the coordinate of the target position of the image block in the target image and the coordinate range of the image block;
and generating an image splicing result, wherein the image splicing result comprises the coordinate range of each image block in the at least two image blocks in the target image.
3. The method of claim 2, wherein the determining whether the image stitching result is the same as the target image comprises:
determining whether the coordinate ranges of the image blocks in the at least two image blocks overlap;
if not, determining that the image splicing result is the same as the target image;
and if so, determining that the image splicing result is different from the target image.
4. The method of claim 1, wherein if it is determined that the image stitching result is different from the target image, the method further comprises:
obtaining a plurality of second image samples of which image segmentation results are calibrated in advance, taking the image segmentation model as an image segmentation model to be trained, and taking the plurality of second image samples as input information of the image segmentation model to be trained to continue training the image segmentation model to be trained until a predicted image segmentation result of the image segmentation model to be trained on the second image samples approaches to the image segmentation result of the second image samples which are calibrated in advance, so as to obtain the image segmentation model;
the image segmentation difficulty of the second image sample is greater than the image segmentation difficulty of the first image sample.
5. The method of claim 1, further comprising:
aiming at each image block in the image segmentation result of the target image, calculating the relative position coordinates of the target position of the image block in the target image according to the coordinates of the pixel point of the target position of the image block in the target image;
and generating a target image segmentation result of the target image, wherein the target image segmentation result comprises at least two image blocks used for splicing the target image, and a relative position coordinate of a target position of each image block in the target image.
6. The method according to claim 5, wherein the splicing the at least two image blocks based on the coordinates of the pixel points of the target positions of the image blocks in the target image to obtain an image splicing result comprises:
and splicing the at least two image blocks based on the relative position coordinates of the target positions of the image blocks in the target image to obtain an image splicing result.
7. The method according to claim 6, wherein the image segmentation result further includes a coordinate range of pixel points of each of the image blocks in the target image,
the image splicing method comprises the following steps of splicing at least two image blocks based on the relative position coordinates of the target positions of the image blocks in the target image to obtain an image splicing result, wherein the image splicing result comprises the following steps:
calculating the relative coordinate range of the image block in the target image according to the relative position coordinate of the target position of the image block in the target image and the coordinate range of the image block;
and generating an image splicing result, wherein the image splicing result comprises the relative coordinate range of each image block in the at least two image blocks in the target image.
8. The method of claim 7, wherein the determining whether the image stitching result is the same as the target image comprises:
determining whether relative coordinate ranges of each of the at least two image blocks overlap;
if not, determining that the image splicing result is the same as the target image;
and if so, determining that the image splicing result is different from the target image.
9. An image segmentation apparatus, comprising:
the target image acquisition unit is used for acquiring a target image to be segmented;
the image segmentation unit is used for inputting the target image into a pre-trained image segmentation model to obtain an image segmentation result of the target image; wherein the image segmentation result comprises at least two image blocks to be stitched into the target image and the coordinates of the pixel points of the target position of each image block in the target image; the target position is the same position within each image block, and the target positions of the image blocks in the target image are different from one another;
the image splicing result generating unit is used for splicing at least two image blocks based on the coordinates of pixel points of the target positions of the image blocks in the target image to obtain an image splicing result;
the comparison unit is used for determining whether the image splicing result is the same as the target image; if so, determining that the image segmentation model is accurate;
wherein the image segmentation model is obtained by taking a plurality of first image samples whose image segmentation results are calibrated in advance as input information of an image segmentation model to be trained, and training the image segmentation model to be trained until its predicted image segmentation result for each first image sample approaches the image segmentation result calibrated in advance for that first image sample.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811235796.3A CN109447992B (en) | 2018-10-23 | 2018-10-23 | Image segmentation method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811235796.3A CN109447992B (en) | 2018-10-23 | 2018-10-23 | Image segmentation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109447992A CN109447992A (en) | 2019-03-08 |
CN109447992B true CN109447992B (en) | 2022-04-01 |
Family
ID=65548205
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811235796.3A Active CN109447992B (en) | 2018-10-23 | 2018-10-23 | Image segmentation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109447992B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111161351B (en) * | 2019-12-18 | 2023-05-16 | 万翼科技有限公司 | Target component coordinate acquisition method and system |
CN110930419A (en) * | 2020-02-13 | 2020-03-27 | 北京海天瑞声科技股份有限公司 | Image segmentation method and device, electronic equipment and computer storage medium |
CN111369515A (en) * | 2020-02-29 | 2020-07-03 | 上海交通大学 | Tunnel water stain detection system and method based on computer vision |
CN112116068B (en) * | 2020-08-27 | 2024-09-13 | 山东浪潮科学研究院有限公司 | Method, equipment and medium for splicing all-around images |
WO2023044935A1 (en) * | 2021-09-27 | 2023-03-30 | 西门子股份公司 | Method and apparatus for segmenting bulk object image, and computer-readable storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103164848A (en) * | 2011-12-09 | 2013-06-19 | 腾讯科技(深圳)有限公司 | Image processing method and system |
CN105374023A (en) * | 2015-08-25 | 2016-03-02 | 上海联影医疗科技有限公司 | Target area segmentation method, image reconstruction method and image reconstruction device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015176305A1 (en) * | 2014-05-23 | 2015-11-26 | 中国科学院自动化研究所 | Human-shaped image segmentation method |
- 2018-10-23 CN CN201811235796.3A patent/CN109447992B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103164848A (en) * | 2011-12-09 | 2013-06-19 | 腾讯科技(深圳)有限公司 | Image processing method and system |
CN105374023A (en) * | 2015-08-25 | 2016-03-02 | 上海联影医疗科技有限公司 | Target area segmentation method, image reconstruction method and image reconstruction device |
Non-Patent Citations (1)
Title |
---|
Field rice panicle segmentation based on a deep fully convolutional neural network; Duan Lingfeng et al.; Transactions of the Chinese Society of Agricultural Engineering (《农业工程学报》); 2018-06-30; pp. 202-207 *
Also Published As
Publication number | Publication date |
---|---|
CN109447992A (en) | 2019-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109447992B (en) | Image segmentation method and device | |
CN107615050B (en) | Detection device and detection method | |
CN110929729B (en) | Image annotation method, image annotation device and computer storage medium | |
CN108121984B (en) | Character recognition method and device | |
CN109901996B (en) | Auxiliary test method and device, electronic equipment and readable storage medium | |
JP6719008B2 (en) | Inspection result retrieval device and method | |
JP7287823B2 (en) | Information processing method and information processing system | |
CN111738316B (en) | Zero sample learning image classification method and device and electronic equipment | |
US20230401691A1 (en) | Image defect detection method, electronic device and readable storage medium | |
CN114092949A (en) | Method and device for training class prediction model and identifying interface element class | |
CN111738252B (en) | Text line detection method, device and computer system in image | |
CN113420727A (en) | Training method and device of form detection model and form detection method and device | |
CN110619597A (en) | Semitransparent watermark removing method and device, electronic equipment and storage medium | |
CN110673125B (en) | Sound source positioning method, device, equipment and storage medium based on millimeter wave radar | |
Qureshi et al. | Factors affecting the implementation of automated progress monitoring of rebar using vision-based technologies | |
CN111079523A (en) | Object detection method, object detection device, computer equipment and storage medium | |
CN111198960A (en) | Method and device for determining user portrait data, electronic equipment and storage medium | |
CN110363189B (en) | Document content restoration method and device, electronic equipment and readable storage medium | |
CN105630807B (en) | Method and device for analyzing incidence relation between unknown road and known road | |
CN110633457A (en) | Content replacement method and device, electronic equipment and readable storage medium | |
US10877641B2 (en) | Image adjustment method, apparatus, device and computer readable storage medium | |
CN111753625B (en) | Pedestrian detection method, device, equipment and medium | |
CN105824503B (en) | Interface moving method and device | |
CN111127593B (en) | Document content erasing method and device, electronic equipment and readable storage medium | |
CN111428465B (en) | Auxiliary calibration method for direct-current control protection software modification work of converter station |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||