CN110517278B - Image segmentation and training method and device of image segmentation network and computer equipment

Info

Publication number
CN110517278B
CN110517278B (application CN201910724764.8A)
Authority
CN
China
Prior art keywords
segmentation
image
result
edge
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910724764.8A
Other languages
Chinese (zh)
Other versions
CN110517278A (en)
Inventor
陈文科
姚聪
陈益民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kuangshi Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd
Priority to CN201910724764.8A
Publication of CN110517278A
Application granted
Publication of CN110517278B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Abstract

The present application relates to an image segmentation method, a training method for an image segmentation network, an image segmentation apparatus, a training apparatus for an image segmentation network, a computer device, and a readable storage medium. The method comprises the following steps: acquiring an image to be segmented; inputting the image to be segmented into a preset image segmentation network for image segmentation processing to obtain a target segmentation result of the image to be segmented, the image segmentation network being obtained by training based on a body loss and an edge loss of a training sample image, where the body loss is the loss between the segmentation labeling result and the segmentation prediction result of the training sample image for the body region of the target object, and the edge loss is the loss between the segmentation labeling result and the segmentation prediction result for the edge region of the target object; and segmenting an image of the body region of the target object from the image to be segmented according to the target segmentation result. The method can improve the precision of edge segmentation.

Description

Image segmentation and training method and device of image segmentation network and computer equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image segmentation method, an image segmentation network training method, an image segmentation apparatus, an image segmentation network training apparatus, a computer device, and a readable storage medium.
Background
In various image and video application scenarios such as live streaming, video, and virtualization, the portrait portion of an image often needs to undergo matting processing so that an image of the portrait portion can be extracted; the portrait portion may be a head portrait, a half-body portrait, a whole-body portrait, or the like, so as to support personalized operations by the user.
However, owing to factors such as varied human body postures, changeable spatial relationships between people and between people and objects, complicated background noise, changeable shooting angles, and varying light and infrared intensity, traditional portrait matting methods perform poorly at portrait edges: the edges come out blurred or erroneous, and matting accuracy is poor.
Disclosure of Invention
In view of the above, it is necessary to provide an image segmentation method, an image segmentation network training method, an image segmentation apparatus, an image segmentation network training apparatus, a computer device, and a readable storage medium capable of improving the accuracy of edge segmentation.
In a first aspect, an image segmentation method is provided, the method comprising:
acquiring an image to be segmented;
inputting the image to be segmented into a preset image segmentation network for image segmentation processing to obtain a target segmentation result of the image to be segmented; wherein the image segmentation network is obtained based on the training of the body loss and the edge loss of the training sample image; the body loss is the loss between the segmentation labeling result and the segmentation prediction result of the training sample image for the body region of the target object, and the edge loss is the loss between the segmentation labeling result and the segmentation prediction result of the training sample image for the edge region of the target object;
and according to the target segmentation result, segmenting an image of a main body region aiming at the target object from the image to be segmented.
In one embodiment, the method further comprises:
acquiring the training sample image; the training sample image is marked with an image segmentation marking result, and the image segmentation marking result comprises a segmentation marking result of a main body area and a segmentation marking result of an edge area;
inputting the training sample image into an initial image segmentation network to carry out image segmentation processing, and obtaining a segmentation prediction result of a main body region and a segmentation prediction result of an edge region of the training sample image;
calculating a loss between a segmentation labeling result and a segmentation prediction result of the body region as the body loss, and calculating a loss between a segmentation labeling result and a segmentation prediction result of the edge region as the edge loss;
and adjusting the parameters to be trained of the initial image segmentation network according to the main body loss and the edge loss to obtain the image segmentation network.
In one embodiment, the initial image segmentation network comprises a convolutional neural subnetwork, a subject segmentation subnetwork, and an edge segmentation subnetwork, the subject segmentation subnetwork and the edge segmentation subnetwork sharing the convolutional neural subnetwork;
inputting the training sample image into an initial image segmentation network to perform image segmentation processing, so as to obtain a segmentation prediction result of a main region and a segmentation prediction result of an edge region of the training sample image, wherein the method comprises the following steps:
inputting the training sample image into the convolutional neural sub-network for feature extraction processing to obtain an output feature map of the convolutional neural sub-network;
and respectively inputting the output feature maps of the convolutional neural sub-networks into the main segmentation sub-network and the edge segmentation sub-network for decoding, so as to respectively obtain the segmentation prediction result of the main region and the segmentation prediction result of the edge region.
In one embodiment, the main segmentation sub-network and the edge segmentation sub-network are both of a feature pyramid network type, and each has horizontal connections with the convolutional neural sub-network at a plurality of levels;
inputting the output feature maps of the convolutional neural sub-networks into the main segmentation sub-network and the edge segmentation sub-network respectively for decoding processing to obtain the segmentation prediction result of the main region and the segmentation prediction result of the edge region respectively, including:
inputting the output feature maps of the multiple hierarchies of the convolutional neural sub-network into corresponding hierarchies of the main segmentation sub-network to perform feature fusion processing to obtain a main feature map, and decoding the main feature map to obtain a segmentation prediction result of the main region;
and inputting the output feature maps of the multiple hierarchies into corresponding hierarchies of the edge segmentation sub-network to perform feature fusion processing to obtain edge feature maps, and performing decoding processing on the edge feature maps to obtain segmentation prediction results of the edge regions.
In one embodiment, the segmentation prediction result of the body region includes: a first decoding processing result of a main feature map output by a last level in the main segmentation sub-network, wherein a segmentation prediction result of the edge region comprises: a second decoding processing result of the edge feature map output by the last level in the edge segmentation sub-network;
accordingly, calculating a loss between the segmentation labeling result and the segmentation prediction result of the body region as the body loss, and calculating a loss between the segmentation labeling result and the segmentation prediction result of the edge region as the edge loss, includes:
calculating the main body loss according to the segmentation labeling result of the main body region and the first decoding processing result; and calculating the edge loss according to the segmentation labeling result of the edge region and the second decoding processing result.
In one embodiment, the calculating the body loss according to the segmentation labeling result of the body region and the first decoding processing result; and calculating the edge loss according to the segmentation labeling result of the edge region and the second decoding processing result, including:
calculating a loss between the segmentation labeling result of the body region and the first decoding processing result as the body loss, and calculating a loss between the segmentation labeling result of the edge region and the second decoding processing result as the edge loss.
In one embodiment, the segmentation prediction result of the body region further includes: a third decoding processing result of the main feature map output by at least one intermediate level in the main segmentation sub-network, wherein the segmentation prediction result of the edge region further includes: a fourth decoding processing result of the edge feature map output by at least one intermediate level in the edge segmentation sub-network;
calculating the main body loss according to the segmentation labeling result of the main body region and the first decoding processing result; and calculating the edge loss according to the segmentation labeling result of the edge region and the second decoding processing result, including:
respectively carrying out downsampling processing on the segmentation labeling result of the main area and the segmentation labeling result of the edge area to obtain a middle segmentation labeling result of the main area and a middle segmentation labeling result of the edge area;
calculating the loss between the segmentation labeling result of the main body region and the first decoding processing result, calculating the loss between the middle segmentation labeling result of the main body region and the third decoding processing result, and summing to obtain the main body loss;
and calculating the loss between the segmentation labeling result of the edge region and the second decoding processing result, calculating the loss between the middle segmentation labeling result of the edge region and the fourth decoding processing result, and summing to obtain the edge loss.
In one embodiment, the method further comprises:
determining a characteristic region in the edge region of the target object in the training sample image according to the image segmentation and labeling result;
determining the weight of each pixel point in the training sample image according to the characteristic region; the main body loss and the edge loss of the training sample image are related to the weight of each pixel point; and the weight of the pixel points in the characteristic region is higher than the weight of the pixel points in other regions except the characteristic region in the training sample image.
In one embodiment, the determining, according to the image segmentation labeling result, a feature region in an edge region of the training sample image for the target object includes:
determining an edge area aiming at a target object in the training sample image according to the image segmentation and annotation result;
performing morphological processing on the edge region, and determining a feature region with preset morphological characteristics in the edge region according to a morphological processing result;
and/or,
and calculating the local curvature of the edge region, and determining the characteristic region of which the local curvature is greater than a preset curvature threshold value.
In one embodiment, the adjusting the parameter to be trained of the initial image segmentation network according to the body loss and the edge loss includes:
adjusting a parameter to be trained of the subject segmentation sub-network based on the subject loss;
adjusting parameters to be trained of the edge segmentation sub-network based on the edge loss;
adjusting a parameter to be trained of the convolutional neural subnetwork based on a weighted sum of the subject loss and the edge loss.
In one embodiment, the image segmentation network comprises: a convolutional neural subnetwork and a subject segmentation subnetwork; the subject segmentation subnetwork and the convolutional neural subnetwork have a plurality of levels of horizontal connections therebetween;
inputting the image to be segmented into a preset image segmentation network for image segmentation processing to obtain a target segmentation result of the image to be segmented, wherein the target segmentation result comprises the following steps:
inputting the image to be segmented into the convolutional neural sub-network for feature extraction processing to obtain output feature maps of the multiple hierarchies in the convolutional neural sub-network;
and inputting the output feature maps of the multiple levels into corresponding levels of the main segmentation sub-network, performing feature fusion processing to obtain a main feature map, and performing decoding processing on the main feature map to obtain a target segmentation result of the image to be segmented.
In a second aspect, a training method for an image segmentation network is provided, the method comprising:
acquiring a training sample image; the training sample image is marked with an image segmentation and annotation result, and the image segmentation and annotation result comprises a segmentation and annotation result of a main body region and a segmentation and annotation result of an edge region of a target object;
inputting the training sample image into an initial image segmentation network to carry out image segmentation processing, and obtaining a segmentation prediction result of a main body region and a segmentation prediction result of an edge region of the training sample image;
calculating a loss between a segmentation labeling result and a segmentation prediction result of the body region as the body loss, and calculating a loss between a segmentation labeling result and a segmentation prediction result of the edge region as the edge loss;
and adjusting the parameters to be trained of the initial image segmentation network according to the main body loss and the edge loss to obtain the trained image segmentation network.
In a third aspect, an image segmentation apparatus includes:
the image to be segmented acquisition module is used for acquiring an image to be segmented;
the image segmentation processing module is used for inputting the image to be segmented into a preset image segmentation network to perform image segmentation processing so as to obtain a target segmentation result of the image to be segmented; wherein the image segmentation network is obtained based on the training of the body loss and the edge loss of the training sample image; the body loss is the loss between the segmentation labeling result and the segmentation prediction result of the training sample image for the body region of the target object, and the edge loss is the loss between the segmentation labeling result and the segmentation prediction result of the training sample image for the edge region of the target object;
and the main body region segmentation module is used for segmenting an image of the main body region aiming at the target object from the image to be segmented according to the target segmentation result.
In a fourth aspect, an apparatus for training an image segmentation network includes:
the training sample image acquisition module is used for acquiring a training sample image; the training sample image is marked with an image segmentation and annotation result, and the image segmentation and annotation result comprises a segmentation and annotation result of a main body region and a segmentation and annotation result of an edge region of a target object;
the neural network processing module is used for inputting the training sample image into an initial image segmentation network for image segmentation processing to obtain a segmentation prediction result of a main body region and a segmentation prediction result of an edge region of the training sample image;
a loss calculation module, configured to calculate a loss between the segmentation labeling result and the segmentation prediction result of the main region as the main loss, and calculate a loss between the segmentation labeling result and the segmentation prediction result of the edge region as the edge loss;
and the neural network training module is used for adjusting the parameters to be trained of the initial image segmentation network according to the main body loss and the edge loss to obtain the trained image segmentation network.
In a fifth aspect, a computer device comprises a memory storing a computer program and a processor which, when executing the computer program, implements the steps of the method of any one of the first and/or second aspects.
In a sixth aspect, a readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the method of any one of the first and/or second aspects.
The image segmentation method, the training method of the image segmentation network, the image segmentation apparatus, the training apparatus of the image segmentation network, the computer device, and the readable storage medium use the body loss to guide the image segmentation network to attend to the body (global) of the target object, while using the edge loss to guide the network to attend to the edge of the target object, so that the network can further improve the accuracy of edge segmentation on the basis of accurately segmenting the body of the target object. Moreover, since the body region and the edge region have a common boundary, the segmentation prediction of the body region and that of the edge region supplement each other during training, mutually promoting training efficiency and segmentation accuracy; the image of the body region of the target object obtained by the image segmentation network of this embodiment therefore has higher edge accuracy.
Drawings
FIG. 1 is a diagram of an exemplary embodiment of an image segmentation method;
FIG. 2 is a flow diagram illustrating a method for image segmentation in one embodiment;
FIG. 3a is a schematic flowchart of a training process for an image segmentation network in one embodiment;
FIG. 3b is a schematic diagram of an image segmentation network in one embodiment;
FIG. 3c is a schematic diagram of an image segmentation network in one embodiment;
FIG. 4 is a flow diagram illustrating a method for image segmentation in one embodiment;
FIG. 5 is a flowchart illustrating an image segmentation process according to one embodiment;
FIG. 6 is a flowchart illustrating a method for training an image segmentation network according to an embodiment;
FIG. 7 is a block diagram showing the structure of an image segmentation apparatus according to an embodiment;
FIG. 8 is a block diagram showing the structure of a training apparatus for an image segmentation network according to an embodiment;
FIG. 9 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The image segmentation method provided by the application can be applied to, but is not limited to, the application environment shown in FIG. 1. The shooting device 12 can capture an image to be segmented and send it to the computer device 11; the computer device 11 can then segment an image of the body region of a target object from the image to be segmented based on the image segmentation method of the present application. The computer device 11 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, servers, and the like.
In one embodiment, as shown in fig. 2, an image segmentation method is provided, which is described by taking the method as an example applied to the computer device in fig. 1, and comprises the following steps:
s201, acquiring an image to be segmented.
The image to be segmented may be an image received by the computer device from another device (such as a camera or another computer device), an image stored locally on the computer device, or an image from another source; in any case, it is an image on which the computer device needs to perform image segmentation processing. For example, when the target object is a person, portrait matting is performed; more specifically, when the target object is a person's head, head-portrait matting is performed. Of course, this embodiment does not limit the target object, which may include, but is not limited to, the whole or a specific part of a person, a flower, an article, or the like, or a combination thereof. Taking a person as the target object for example, it may be a general person or a specific person. In addition, the image to be segmented may be a two-dimensional image or a three-dimensional image.
Of course, after S201, the embodiment may also perform various types of preprocessing on the image to be segmented, and then input the preprocessed image to be segmented into the image segmentation network; various types of pre-processing include, but are not limited to, at least one of the following: image subtraction processing, image normalization processing (e.g., correcting the head portrait pose to a standard pose by pose correction), and the like; the present embodiment does not limit this.
S202, inputting the image to be segmented into a preset image segmentation network for image segmentation processing to obtain a target segmentation result of the image to be segmented; the image segmentation network is obtained based on the body loss and edge loss training of a training sample image; the body loss is the loss between the segmentation labeling result and the segmentation prediction result of the training sample image for the body region of the target object, and the edge loss is the loss between the segmentation labeling result and the segmentation prediction result of the training sample image for the edge region of the target object.
The image segmentation network in the present application may be obtained by adapting and optimizing architectures such as U-Net (a neural network that can segment two-dimensional images), V-Net (a neural network that can segment three-dimensional images), PSP-Net (Pyramid Scene Parsing Network), FCN (Fully Convolutional Network), and Inception (which stacks convolution and pooling features of various kernel sizes such as 1x1 and 3x3).
It is understood that the image to be segmented can be divided into: a body region of the target object, an edge region of the target object, and a background region, the edge region being a transition region between the body region and the background region. The size of the edge region may be set as needed; for example, a proportionality coefficient between the width of the edge region and the maximum dimension of the body region may be preset, or the width of the edge region may be preset directly.
Correspondingly, the training sample image also has a body region and an edge region. In this embodiment, illustratively, the segmentation labeling result of the edge region may be a binarized image indicating whether each pixel point truly belongs to the edge region of the target object. The segmentation prediction result of the edge region may likewise be a binarized image, indicating whether each pixel point belongs to the edge region as predicted by the image segmentation network; or it may be a probabilistic image, giving the predicted probability that each pixel point belongs to the edge region. The segmentation labeling and prediction results of the body region are similar and are not repeated here.
Therefore, this embodiment can use the body loss to guide the image segmentation network to attend to the body (global) of the target object, while using the edge loss to guide the network to attend to the edge of the target object, so that the network can further improve edge segmentation accuracy on the basis of accurately segmenting the body of the target object. Moreover, since the body region and the edge region have a common boundary, the segmentation prediction of the body region and that of the edge region also supplement each other during training, mutually promoting training efficiency and segmentation accuracy.
And S203, segmenting an image of the main body region aiming at the target object from the image to be segmented according to the target segmentation result.
The target segmentation result may be the segmentation prediction result of the body region of the target object in the image to be segmented, the segmentation prediction result of the edge region, or a combination of the two; this embodiment does not limit it. Taking the body-region prediction as the target segmentation result: the body-region prediction output by the image segmentation network of this embodiment has higher precision, which from another perspective also means the edge-region prediction has higher precision, so the edges of the image of the body region segmented from the image to be segmented are more accurate. Taking the edge-region prediction as the target segmentation result: since the region enclosed by the edge region is the body region, an image of the body region with higher edge accuracy can likewise be segmented out.
In the image segmentation method of this embodiment, the subject loss guides the image segmentation network to attend to the subject (global) of the target object, while the edge loss guides it to attend to the edge of the target object, so that the network can further improve edge segmentation accuracy on the basis of accurately segmenting the subject. Moreover, since the body region and the edge region have a common boundary, the segmentation predictions of the two regions supplement each other during training, mutually promoting training efficiency and segmentation accuracy; the image of the body region of the target object obtained by the image segmentation network of this embodiment therefore has higher edge accuracy.
In an embodiment, as shown in fig. 3a, a training process of an image segmentation network in the image segmentation method is shown, which may specifically include:
s301, acquiring a training sample image; the training sample image is marked with an image segmentation and annotation result, and the image segmentation and annotation result comprises a segmentation and annotation result of the main area and a segmentation and annotation result of the edge area.
Illustratively, the training sample image may be directly labeled with the segmentation labeling result of the body region and the segmentation labeling result of the edge region. Alternatively, the training sample image may be labeled with an overall image segmentation labeling result: the position of each pixel point in the body region of the target object is determined from the image segmentation labeling result, the segmentation labeling result of the body region is generated from the positions of those pixel points, the outline of the body region is determined from the same positions, and the segmentation labeling result of the edge region is then determined from the outline of the body region and a preset edge width.
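As an illustration, a minimal sketch of the second route, deriving the edge-region label from a binary body-region mask and a preset edge width; the function name, the OpenCV-based morphology, and the default width are assumptions, not prescribed by this application:

```python
import cv2
import numpy as np

def edge_label_from_body(body_mask: np.ndarray, edge_width: int = 5) -> np.ndarray:
    # body_mask: binary uint8 mask (0/1) of the body region.
    # The edge label is a band of roughly edge_width pixels straddling the
    # body contour: dilate outward, erode inward, keep the ring between them.
    kernel = np.ones((edge_width, edge_width), np.uint8)
    dilated = cv2.dilate(body_mask, kernel)
    eroded = cv2.erode(body_mask, kernel)
    return dilated - eroded
```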
S302, inputting the training sample image into the initial image segmentation network for image segmentation processing, and obtaining the segmentation prediction result of the main region and the segmentation prediction result of the edge region of the training sample image.
S303, a loss between the segmentation labeling result and the segmentation prediction result of the body region is calculated as a body loss, and a loss between the segmentation labeling result and the segmentation prediction result of the edge region is calculated as an edge loss.
Illustratively, the loss can be calculated with a Smooth L1 loss function. Compared with the L2 loss, it is more robust and less sensitive to outliers, and its gradient magnitude is bounded, so training is less likely to diverge. It is in fact a piecewise function: within [-1, 1] it behaves like the L2 loss, which resolves the non-smoothness of L1 at zero; outside the [-1, 1] interval it behaves like the L1 loss, which resolves the gradient explosion caused by outliers.
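As an illustration, a minimal sketch of this piecewise loss (the standard Smooth L1 form with its transition at 1, matching the [-1, 1] interval above); PyTorch is an assumed framework choice:

```python
import torch

def smooth_l1(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Quadratic (L2-like) for |diff| < 1, linear (L1-like) otherwise;
    # smooth at zero, with gradient magnitude capped at 1 for large residuals.
    diff = torch.abs(pred - target)
    return torch.where(diff < 1.0, 0.5 * diff ** 2, diff - 0.5)
```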
S304, adjusting the parameters to be trained of the initial image segmentation network according to the main body loss and the edge loss to obtain the image segmentation network.
In one embodiment, the body segmentation and the edge segmentation are performed using the same network. For example, the initial image segmentation network is a fully convolutional network that performs image segmentation processing on an input training sample image to obtain an overall segmentation prediction result, from which the segmentation prediction result of the body region and that of the edge region are extracted. The extraction process is analogous to the extraction of the segmentation labeling results described above and is not repeated here. Accordingly, the training process of S304 may include: taking the weighted sum of the body loss and the edge loss as the loss function value of the initial image segmentation network, and adjusting the parameters to be trained based on gradient descent or the like to obtain the image segmentation network.
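A minimal sketch of this single-network training step, assuming a forward pass has already produced body_pred and edge_pred, an optimizer covers all network parameters, smooth_l1 is the function sketched above, and the loss weights are illustrative:

```python
# Assumed, illustrative loss weights; the application does not fix them.
w_body, w_edge = 1.0, 1.0

body_loss = smooth_l1(body_pred, body_label).mean()
edge_loss = smooth_l1(edge_pred, edge_label).mean()
total_loss = w_body * body_loss + w_edge * edge_loss

optimizer.zero_grad()
total_loss.backward()  # gradient descent on the single shared network
optimizer.step()
```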
In one embodiment, the body segmentation and the edge segmentation may be performed using different networks, respectively. For example, referring to fig. 3b, the initial image segmentation network may include a convolutional neural sub-network, a subject segmentation sub-network, and an edge segmentation sub-network, the subject segmentation sub-network and the edge segmentation sub-network sharing the convolutional neural sub-network. The convolutional neural subnetwork is used for performing feature extraction processing (equivalent to coding processing) on an input image to obtain image features; the main body segmentation sub-network is used for decoding the image characteristics to obtain the segmentation result of the main body region of the input image; the edge segmentation sub-network is used for decoding the image features to obtain the segmentation result of the edge region of the input image.
Accordingly, the training process of S304 may include: adjusting the parameters to be trained of the subject segmentation sub-network based on the subject loss; adjusting the parameters to be trained of the edge segmentation sub-network based on the edge loss; and adjusting the parameters to be trained of the convolutional neural sub-network based on a weighted sum of the subject loss and the edge loss. For example, during each training step, the parameters of the subject segmentation sub-network are adjusted based on the subject loss, then the parameters of the edge segmentation sub-network are adjusted based on the edge loss, and finally the parameters of the convolutional neural sub-network are adjusted based on the weighted sum of the two losses. Alternatively, the parameters of the convolutional neural sub-network may first be pre-trained based on the weighted sum of the subject loss and the edge loss, and after this pre-training is completed, the subject segmentation sub-network, the edge segmentation sub-network, and the convolutional neural sub-network are trained alternately based on their corresponding losses.
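A minimal sketch of one such training step, assuming a PyTorch setup in which backbone, body_subnet, and edge_subnet are the three sub-networks, images and the two labels come from a data loader, and smooth_l1 is the function sketched earlier; the optimizer choice, learning rates, and loss weights are assumptions:

```python
import torch

opt_body = torch.optim.SGD(body_subnet.parameters(), lr=1e-3)
opt_edge = torch.optim.SGD(edge_subnet.parameters(), lr=1e-3)
opt_backbone = torch.optim.SGD(backbone.parameters(), lr=1e-3)

def zero_all():
    opt_body.zero_grad(); opt_edge.zero_grad(); opt_backbone.zero_grad()

features = backbone(images)
body_loss = smooth_l1(body_subnet(features), body_label).mean()
edge_loss = smooth_l1(edge_subnet(features), edge_label).mean()

zero_all()
body_loss.backward(retain_graph=True)   # body sub-network <- body loss only
opt_body.step()

zero_all()
edge_loss.backward(retain_graph=True)   # edge sub-network <- edge loss only
opt_edge.step()

zero_all()
(1.0 * body_loss + 1.0 * edge_loss).backward()  # backbone <- weighted sum
opt_backbone.step()
```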
Thus, in this embodiment, the convolutional neural sub-network is adjusted based on the weighted sum of the body loss and the edge loss, improving the accuracy of the feature maps it outputs during training. Meanwhile, training the subject segmentation sub-network on the subject loss raises the accuracy of its predictions, and training the edge segmentation sub-network on the edge loss raises the accuracy of its predictions. Both the body-segmentation and edge-segmentation predictions are thereby taken into account, further improving the accuracy of the trained image segmentation network on the edges of the target object.
Accordingly, the above S302 may include: inputting the training sample image into a convolutional neural sub-network for feature extraction processing to obtain an output feature map of the convolutional neural sub-network; and respectively inputting the output feature maps of the convolutional neural sub-networks into the main segmentation sub-network and the edge segmentation sub-network for decoding, and respectively obtaining the segmentation prediction result of the main region and the segmentation prediction result of the edge region.
It is understood that the convolutional neural sub-network may include a plurality of convolution blocks connected in sequence, with the feature map output by the last convolution block serving as the output feature map of the sub-network. The main segmentation sub-network and the edge segmentation sub-network may then decode this feature map and output the segmentation prediction result of the body region and that of the edge region, respectively. Here, a convolution block may have the structure convolution layer + batch normalization layer + ReLU layer (ReLU activation function), may include only a convolution layer, or may have another structure.
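A minimal sketch of a convolution block under the first structure named above, assuming PyTorch:

```python
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    # Convolution layer + batch normalization layer + ReLU layer.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )
```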
Referring to fig. 3c, the principal segmentation sub-network and the edge segmentation sub-network may both be of a feature pyramid network type, and have a plurality of levels of horizontal connections with the convolutional neural sub-network, respectively. Accordingly, the above process of determining the segmentation prediction result according to the output feature map of the convolutional neural subnetwork may include: inputting the output feature maps of a plurality of hierarchies of the convolutional neural subnetwork into corresponding hierarchies of the main body segmentation subnetwork to perform feature fusion processing to obtain a main body feature map, and decoding the main body feature map to obtain a segmentation prediction result of the main body region; and inputting the output feature maps of a plurality of levels into corresponding levels of the edge segmentation sub-network to perform feature fusion processing to obtain an edge feature map, and performing decoding processing on the edge feature map to obtain a segmentation prediction result of an edge region.
The convolutional neural sub-network may include a plurality of levels of convolution blocks connected in sequence from bottom to top, each level corresponding to a feature map of a different scale (height x width); the scale of the output feature maps generally decreases from bottom to top. Correspondingly, the feature pyramid may include a plurality of levels of feature pyramid modules connected in sequence from top to bottom, each level again corresponding to a different scale; the scale generally increases from top to bottom, and the scale of the last level is generally the same as that of the input image. The main segmentation sub-network and the edge segmentation sub-network can therefore each be horizontally connected with the convolutional neural sub-network at a plurality of levels, so as to perform feature fusion processing on the output feature maps of those levels and decode the fused feature maps, which carry features of different scales, into the corresponding segmentation prediction results, improving segmentation accuracy.
It should be explained that, for both the main segmentation sub-network and the edge segmentation sub-network, the feature fusion process may include: fusing the output feature map of the previous level of the current level in the feature pyramid sub-network with the output feature map of the current level in the convolutional neural sub-network to obtain the output feature map of the current level in the feature pyramid sub-network, until the output feature map of the last level has been fused. Of course, before the fusion processing, convolution processing may be applied to the output feature map of the current level in the convolutional neural sub-network, and/or upsampling processing may be applied to the output feature map of the previous level; this embodiment does not limit this. The output feature map of the topmost level in the feature pyramid sub-network may be the output feature map of the last level of the convolutional neural sub-network, or that feature map after convolution processing.
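A minimal sketch of one horizontal connection under these choices (a 1x1 lateral convolution on the current backbone level plus nearest-neighbor upsampling of the previous pyramid level before fusion); the channel width and additive fusion are assumptions:

```python
import torch.nn as nn
import torch.nn.functional as F

class LateralFusion(nn.Module):
    # Fuse one backbone level with the coarser pyramid level above it.
    def __init__(self, backbone_ch: int, pyramid_ch: int = 256):
        super().__init__()
        self.lateral = nn.Conv2d(backbone_ch, pyramid_ch, kernel_size=1)

    def forward(self, backbone_feat, top_down_feat):
        lat = self.lateral(backbone_feat)                  # convolve current level
        up = F.interpolate(top_down_feat,                  # upsample previous level
                           size=lat.shape[-2:], mode="nearest")
        return lat + up                                    # additive fusion
```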
Accordingly, for the above main segmentation sub-network and edge segmentation sub-network of the feature pyramid network type, the segmentation prediction result of the body region may include: a first decoding processing result of the body feature map output by the last level in the main segmentation sub-network; and the segmentation prediction result of the edge region may include: a second decoding processing result of the edge feature map output by the last level in the edge segmentation sub-network. The computer device can calculate the body loss according to the segmentation labeling result of the body region and the first decoding processing result, and calculate the edge loss according to the segmentation labeling result of the edge region and the second decoding processing result.
In one embodiment, the computer apparatus may calculate a loss between a segmentation labeling result of the body region and a first decoding process result as a body loss, and a loss between a segmentation labeling result of the edge region and a second decoding process result as an edge loss.
In one embodiment, the segmentation prediction result of the body region further includes: the third decoding processing result of the main feature map output by at least one intermediate level in the main segmentation sub-network, and the segmentation prediction result of the edge region further includes: a fourth decoding processing result of the edge feature graph output by at least one middle level in the edge segmentation sub-network; the computer device can respectively perform downsampling processing on the segmentation labeling result of the main area and the segmentation labeling result of the edge area to obtain a middle segmentation labeling result of the main area and a middle segmentation labeling result of the edge area; calculating the loss between the segmentation labeling result of the main body region and the first decoding processing result, calculating the loss between the middle segmentation labeling result of the main body region and the third decoding processing result, and summing to obtain the main body loss; and calculating the loss between the segmentation labeling result of the edge region and the second decoding processing result, calculating the loss between the middle segmentation labeling result of the edge region and the fourth decoding processing result, and summing to obtain the edge loss.
It can be understood that the loss between the Intermediate segmentation labeling result of the body region and the third decoding processing result, and the loss between the Intermediate segmentation labeling result of the edge region and the fourth decoding processing result can be regarded as a relay Supervision (Intermediate Supervision) loss, which can further guide the body segmentation sub-network to pay more attention to the body region, and guide the edge segmentation sub-network to pay more attention to the edge region, thereby improving the edge segmentation accuracy; meanwhile, the problem that the gradient disappears along with the deepening of the network in the training process can be solved, and the training efficiency is improved. Of course, in this embodiment, taking the edge segmentation sub-network as an example, the edge loss may include a relay supervision loss between the decoding processing result of the edge feature maps output by a plurality of intermediate levels and the corresponding intermediate segmentation labeling result.
It should be noted that the intermediate segmentation labeling result corresponding to an intermediate level is obtained by downsampling the segmentation labeling result, as described above; the downsampling rate is related to the ratio between the scale of the last level and the scale of the intermediate level. Typically, each level in the pyramid sub-network is twice the scale of the level above it, i.e., both the width and the height double, so that when the last level and the intermediate level are M levels apart, the downsampling rate is 2^M; for example, when the last level and the intermediate level are adjacent levels, M is 1 and the downsampling rate is 2.
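A brief sketch of this relay-supervision term for the edge branch, assuming 4-D (N, C, H, W) label tensors, predictions edge_pred_last and edge_pred_mid from the last and an intermediate level, and the smooth_l1 function from earlier:

```python
import torch.nn.functional as F

M = 1  # last level and intermediate level are adjacent
edge_label_mid = F.interpolate(edge_label.float(),
                               scale_factor=0.5 ** M,  # downsampling rate 2**M
                               mode="nearest")
edge_loss = smooth_l1(edge_pred_last, edge_label.float()).mean() \
          + smooth_l1(edge_pred_mid, edge_label_mid).mean()
```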
Illustratively, the convolutional neural sub-network may be composed of 1 CB convolution block followed in sequence by groups of 2, 3, 4, and 5 RXB convolution blocks (one RXB group each). The CB block is an ordinary convolution block; each RXB block may be a convolution block with a residual structure, specifically a ResNeXt structure (which reduces network parameters and computation through grouped convolution), although the number of groups in the grouped convolution may differ between RXB blocks belonging to different RXB groups. The convolutional neural sub-network may further include a PSP block (a convolution block) containing a network structure, drawn from the pyramid scene parsing network, that convolves the feature map at different scales, upsamples the resulting multi-scale feature maps, and concatenates (Concat) them, so that the output feature map of the last level of the convolutional neural sub-network carries features of different scales, further improving segmentation accuracy.
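A minimal sketch of such a PSP-style block; the pooling bin sizes and channel split are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PSPBlock(nn.Module):
    # Pool at several grid sizes, convolve, upsample back to the input
    # size, and concatenate, so the output carries multi-scale features.
    def __init__(self, in_ch: int, bins=(1, 2, 3, 6)):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(in_ch, in_ch // len(bins), kernel_size=1))
            for b in bins)

    def forward(self, x):
        h, w = x.shape[-2:]
        pyramids = [F.interpolate(stage(x), size=(h, w),
                                  mode="bilinear", align_corners=False)
                    for stage in self.stages]
        return torch.cat([x] + pyramids, dim=1)
```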
Illustratively, the main body segmentation sub-network includes a plurality of feature pyramid modules (e.g., P5-P1) and may further include a Mask Head (body module), which decodes the body feature map output by the last level to obtain the segmentation prediction result of the body region, i.e., the first decoding processing result; the body module may also decode the body feature map output by at least one intermediate level to obtain a segmentation prediction result of the body region, i.e., the third decoding processing result. Correspondingly, the edge segmentation sub-network may include a plurality of feature pyramid modules, which may share the structure of those in the main body segmentation sub-network but with different network parameters; it may further include an Edge Head (edge module), which decodes the edge feature map output by the last level to obtain the segmentation prediction result of the edge region, i.e., the second decoding processing result, and may also decode the edge feature map output by at least one intermediate level to obtain the fourth decoding processing result. The body module and the edge module may each have the structure convolution block + convolution block + sigmoid (S-shaped growth curve function), or another decoding structure; this embodiment does not limit the structure of either module.
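A minimal sketch of such a head under the convolution block + convolution block + sigmoid structure, reusing the conv_block sketch above; the channel widths are assumptions:

```python
import torch.nn as nn

class SegHead(nn.Module):
    # Decode a pyramid feature map into a per-pixel probability map.
    def __init__(self, in_ch: int, mid_ch: int = 64):
        super().__init__()
        self.decode = nn.Sequential(
            conv_block(in_ch, mid_ch),            # conv + BN + ReLU, as sketched above
            nn.Conv2d(mid_ch, 1, kernel_size=1),  # second convolution block
            nn.Sigmoid(),                         # per-pixel probability
        )

    def forward(self, x):
        return self.decode(x)
```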
In an embodiment, as shown in fig. 4, the present embodiment relates to focusing on a complex region in an edge region, so that the complex region is segmented more accurately, which may specifically include:
s401, according to the image segmentation and labeling result, determining a characteristic region in the edge region of the target object in the training sample image.
It can be understood that, taking the portrait as an example, regions with smoother edges (such as the sides of the body and legs) are generally segmented with higher precision, or their segmentation precision matters less. For some complicated regions that reflect distinctive portrait features, however, such as the neck, hair, and finger regions, edge segmentation accuracy tends to be low, and serious defects such as missing parts are likely to occur. This embodiment therefore needs to identify these complex regions and guide the image segmentation network to emphasize their segmentation accuracy during training.
In an embodiment, taking a portrait as an example, the computer device may determine an edge region of the portrait according to the image segmentation labeling result, recognize the training sample image through a preset portrait recognition algorithm, determine the position of the key point region, and determine, according to the position of the key point region, a region in the edge region, which is within a preset distance threshold from the position of the key point region, as the feature region. Wherein the key point region includes, but is not limited to, a neck region, a palm region, etc. The human image recognition algorithm can be a trained neural network model for human body posture recognition.
In one embodiment, the computer device may determine an edge region for the target object in the training sample image according to the image segmentation labeling result; performing morphological processing on the edge area, and determining a feature area with preset morphological characteristics in the edge area according to a morphological processing result; and/or calculating the local curvature of the edge region, and determining the characteristic region of which the local curvature is greater than a preset curvature threshold value.
Here, the morphological processing includes, but is not limited to, dilation, erosion, opening, and closing operations, which can pick out salient characteristic regions in the edge region such as maximum regions, minimum regions, and closed or semi-closed regions (e.g., a finger region). A region with large local curvature is strongly curved (e.g., a fingertip region or a neck region) and is likewise a characteristic region.
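As one hypothetical realization of the morphological route, a closing operation bridges narrow gaps, so the pixels it adds can mark semi-closed, finger-like structures; the kernel size is an assumed setting:

```python
import cv2
import numpy as np

def semi_closed_feature_region(edge_mask: np.ndarray) -> np.ndarray:
    # edge_mask: binary uint8 mask (0/1) of the edge region.
    kernel = np.ones((15, 15), np.uint8)
    closed = cv2.morphologyEx(edge_mask, cv2.MORPH_CLOSE, kernel)
    return closed - edge_mask  # pixels filled in by the closing
```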
S402, determining the weight of each pixel point in the training sample image according to the characteristic region; the main body loss and the edge loss of the training sample image are related to the weight of each pixel point; the weight of the pixel points in the characteristic region is higher than the weight of the pixel points in other regions except the characteristic region in the training sample image.
Here, each region comprises at least one pixel point belonging to it. Taking the edge loss as an example, its calculation may proceed as follows: first compute, for each pixel point, the difference between the probability value (in fact 0 or 1) that the pixel belongs to the edge region in the edge segmentation labeling result and the probability value that it belongs to the edge region in the edge segmentation prediction result; then compute the Smooth L1 function value of that difference; and finally take a weighted sum of the per-pixel function values to obtain the edge loss. Ordinarily the weight of each pixel point may be 1; in this embodiment, the weight of pixel points in the feature region may be raised to a number greater than 1, such as 2, 3, or 5, so as to relatively increase the influence of the segmentation predictions in the complex parts of the edge region on the edge loss, thereby improving the segmentation accuracy of those complex parts.
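A brief sketch of this weighted loss, assuming a feature-region mask feature_mask from the previous step, the smooth_l1 function from earlier, and an assumed weight of 3; normalizing by the weight sum is one possible choice:

```python
import torch

weights = torch.ones_like(edge_label, dtype=torch.float32)
weights[feature_mask.bool()] = 3.0  # heavier weight inside the feature region

per_pixel = smooth_l1(edge_pred, edge_label.float())
edge_loss = (weights * per_pixel).sum() / weights.sum()
```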
In one embodiment, referring to FIG. 5, an image segmentation network comprises: a convolutional neural subnetwork and a subject segmentation subnetwork; the body segmentation sub-network and the convolution neural sub-network have a plurality of levels of horizontal connection; the image segmentation processing procedure may include:
s501, inputting an image to be segmented into a convolutional neural subnetwork to perform feature extraction processing, and obtaining output feature maps of multiple levels in the convolutional neural subnetwork;
s502, inputting the output feature maps of a plurality of levels into corresponding levels of the main segmentation sub-network, performing feature fusion processing to obtain a main feature map, and performing decoding processing to the main feature map to obtain a target segmentation result of the image to be segmented.
For a specific image segmentation process, reference may be made to the above description, which is not described herein again.
It is to be understood that the above describes the training process of the image segmentation network. When the trained image segmentation network is used for image segmentation processing, however, it may include only the convolutional neural sub-network and the main body segmentation sub-network, the target segmentation result then being the segmentation prediction result of the body region; or only the convolutional neural sub-network and the edge segmentation sub-network, the target segmentation result then being the segmentation prediction result of the edge region. This reduces the amount of computation in the prediction process, simplifies the model parameters of the image segmentation network, and facilitates its deployment on small terminals.
In one embodiment, as shown in fig. 6, a training method for an image segmentation network is provided, which may include:
S601, acquiring a training sample image; the training sample image is annotated with an image segmentation labeling result, which comprises a segmentation labeling result for the main body region of the target object and a segmentation labeling result for the edge region of the target object;
S602, inputting the training sample image into an initial image segmentation network for image segmentation processing, to obtain a segmentation prediction result of the main body region and a segmentation prediction result of the edge region of the training sample image;
S603, calculating the loss between the segmentation labeling result and the segmentation prediction result of the main body region as the body loss, and calculating the loss between the segmentation labeling result and the segmentation prediction result of the edge region as the edge loss;
S604, adjusting the parameters to be trained of the initial image segmentation network according to the body loss and the edge loss, to obtain the trained image segmentation network (a minimal training-step sketch follows these steps).
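The following is a minimal training-step sketch for S601–S604, assuming a shared backbone with separate body and edge heads as in the embodiments above. Binary cross-entropy stands in for whatever per-pixel loss the embodiment actually uses, and the weights w_body and w_edge, like the function name, are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def train_step(backbone, body_head, edge_head, optimizer,
               image, body_label, edge_label, w_body=1.0, w_edge=1.0):
    """One optimization step for S601-S604. Heads are assumed to
    output probabilities in [0, 1] at the same resolution as the
    corresponding labels."""
    feats = backbone(image)                    # S602: shared features
    body_pred = body_head(feats)               # body-region prediction
    edge_pred = edge_head(feats)               # edge-region prediction
    body_loss = F.binary_cross_entropy(body_pred, body_label)  # S603
    edge_loss = F.binary_cross_entropy(edge_pred, edge_label)
    loss = w_body * body_loss + w_edge * edge_loss             # S604
    optimizer.zero_grad()
    loss.backward()   # each head receives gradients only from its own
                      # loss; the shared backbone sees the weighted sum
    optimizer.step()
    return body_loss.item(), edge_loss.item()
```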
For specific limitations of the training method of the image segmentation network, reference may be made to the above limitations of the image segmentation method, which are not described herein again.
It should be understood that although the various steps in the flowcharts of figs. 2-6 are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in figs. 2-6 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times, and their order of execution is not necessarily sequential; they may be performed in turn or in alternation with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is provided an image segmentation apparatus including: an image to be segmented acquisition module 71, an image segmentation processing module 72 and a main body region segmentation module 73, wherein:
an image to be segmented acquisition module 71, configured to acquire an image to be segmented;
an image segmentation processing module 72, configured to input the image to be segmented into a preset image segmentation network for image segmentation processing, to obtain a target segmentation result of the image to be segmented; the image segmentation network is obtained by training based on the body loss and the edge loss of a training sample image; the body loss is the loss between the segmentation labeling result and the segmentation prediction result of the training sample image for the main body region of the target object, and the edge loss is the loss between the segmentation labeling result and the segmentation prediction result of the training sample image for the edge region of the target object;
and a main body region segmentation module 73, configured to segment an image of a main body region for the target object from the image to be segmented according to the target segmentation result.
Optionally, the apparatus may further include a neural network training module, and the neural network training module may comprise:
the training sample image acquisition unit is used for acquiring a training sample image; the training sample image is marked with an image segmentation and annotation result, and the image segmentation and annotation result comprises a segmentation and annotation result of a main body region and a segmentation and annotation result of an edge region;
the neural network processing unit is used for inputting the training sample image into the initial image segmentation network to carry out image segmentation processing so as to obtain a segmentation prediction result of a main body region and a segmentation prediction result of an edge region of the training sample image;
a loss calculation unit configured to calculate a loss between a segmentation labeling result and a segmentation prediction result of the body region as a body loss, and calculate a loss between a segmentation labeling result and a segmentation prediction result of the edge region as an edge loss;
and the neural network training unit is used for adjusting the parameters to be trained of the initial image segmentation network according to the main body loss and the edge loss to obtain the image segmentation network.
Optionally, the initial image segmentation network comprises a convolutional neural subnetwork, a subject segmentation subnetwork, and an edge segmentation subnetwork, the subject segmentation subnetwork and the edge segmentation subnetwork sharing the convolutional neural subnetwork; the neural network processing unit is specifically configured to input the training sample image into the convolutional neural sub-network for feature extraction processing to obtain an output feature map of the convolutional neural sub-network, and to input the output feature map of the convolutional neural sub-network into the subject segmentation sub-network and the edge segmentation sub-network respectively for decoding processing, so as to obtain the segmentation prediction result of the main body region and the segmentation prediction result of the edge region respectively.
Optionally, the main segmentation sub-network and the edge segmentation sub-network are both of a feature pyramid network type, and have a plurality of levels of horizontal connections with the convolutional neural sub-network respectively; the neural network processing unit is also used for inputting the output feature maps of a plurality of hierarchies of the convolutional neural sub-network into corresponding hierarchies of the main body segmentation sub-network for feature fusion processing to obtain a main body feature map, and decoding the main body feature map to obtain a segmentation prediction result of the main body region; and inputting the output feature maps of a plurality of levels into corresponding levels of the edge segmentation sub-network to perform feature fusion processing to obtain an edge feature map, and performing decoding processing on the edge feature map to obtain a segmentation prediction result of an edge region.
Optionally, the segmentation prediction result of the main body region includes a first decoding processing result of the subject feature map output by the last level in the subject segmentation sub-network, and the segmentation prediction result of the edge region includes a second decoding processing result of the edge feature map output by the last level in the edge segmentation sub-network; correspondingly, the loss calculation unit is specifically configured to calculate the body loss according to the segmentation labeling result of the main body region and the first decoding processing result, and to calculate the edge loss according to the segmentation labeling result of the edge region and the second decoding processing result.
Optionally, the loss calculating unit is further configured to calculate a loss between the segmentation labeling result of the body region and the first decoding processing result as a body loss, and calculate a loss between the segmentation labeling result of the edge region and the second decoding processing result as an edge loss.
Optionally, the segmentation prediction result of the body region further includes: the third decoding processing result of the main feature map output by at least one intermediate level in the main segmentation sub-network, and the segmentation prediction result of the edge region further includes: a fourth decoding processing result of the edge feature graph output by at least one middle level in the edge segmentation sub-network; the loss calculation unit is further used for respectively carrying out downsampling processing on the segmentation labeling result of the main area and the segmentation labeling result of the edge area to obtain a middle segmentation labeling result of the main area and a middle segmentation labeling result of the edge area; calculating the loss between the segmentation labeling result of the main body region and the first decoding processing result, calculating the loss between the middle segmentation labeling result of the main body region and the third decoding processing result, and summing to obtain the main body loss; and calculating the loss between the segmentation labeling result of the edge region and the second decoding processing result, calculating the loss between the middle segmentation labeling result of the edge region and the fourth decoding processing result, and summing to obtain the edge loss.
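For illustration, a sketch of the deep-supervision style loss just described: the full-resolution annotation is downsampled to each intermediate prediction's size and the per-level losses are summed. Binary cross-entropy again stands in for the embodiment's actual per-pixel loss, and the function and argument names are assumptions of the example.

```python
import torch
import torch.nn.functional as F

def deeply_supervised_loss(final_pred, mid_preds, label):
    """Sum of the loss at the last level and the losses at the
    intermediate levels.

    final_pred: (N, 1, H, W) probabilities from the last level.
    mid_preds:  list of lower-resolution intermediate predictions.
    label:      (N, 1, H, W) float tensor of 0/1 annotations;
                nearest-neighbour downsampling keeps labels crisp.
    """
    loss = F.binary_cross_entropy(final_pred, label)
    for mid in mid_preds:
        mid_label = F.interpolate(label, size=mid.shape[-2:],
                                  mode="nearest")
        loss = loss + F.binary_cross_entropy(mid, mid_label)
    return loss
```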
Optionally, the neural network training module may further include:
the characteristic region determining unit is used for determining a characteristic region in the edge region of the target object in the training sample image according to the image segmentation and labeling result;
the weight determining unit is used for determining the weight of each pixel point in the training sample image according to the characteristic region; the main body loss and the edge loss of the training sample image are related to the weight of each pixel point; the weight of the pixel points in the characteristic region is higher than the weight of the pixel points in other regions except the characteristic region in the training sample image.
Optionally, the feature region determining unit is specifically configured to determine, according to the image segmentation and labeling result, an edge region for the target object in the training sample image; performing morphological processing on the edge area, and determining a feature area with preset morphological characteristics in the edge area according to a morphological processing result; and/or calculating the local curvature of the edge region, and determining the characteristic region of which the local curvature is greater than a preset curvature threshold value.
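As one plausible reading of the morphology-based variant (the curvature-based variant is analogous but not shown), the sketch below treats whatever a morphological opening removes from the edge-region annotation — thin, fine structures — as the feature region, and builds the per-pixel weight map from it. The opening criterion, kernel size, and boost value are assumptions of the example, not details fixed by the embodiment.

```python
import cv2
import numpy as np

def feature_region_weights(edge_mask, kernel=5, boost=3.0):
    """Derive a per-pixel weight map from an edge-region annotation.

    edge_mask: uint8 array (H, W), 1 inside the annotated edge region.
    A morphological opening removes thin/fine structures; what the
    opening removes is treated here as the feature region (the
    complex, fine-detailed parts of the edge), which then receives a
    higher weight than the rest of the image.
    """
    k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel, kernel))
    opened = cv2.morphologyEx(edge_mask, cv2.MORPH_OPEN, k)
    feature = (edge_mask > 0) & (opened == 0)   # fine structures only
    weights = np.ones_like(edge_mask, dtype=np.float32)
    weights[feature] = boost                    # e.g. 2, 3 or 5
    return weights
```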
Optionally, the neural network training unit is specifically configured to adjust a parameter to be trained of the subject segmentation sub-network based on the subject loss; adjusting parameters to be trained of the edge segmentation sub-network based on the edge loss; based on the weighted sum of the body loss and the edge loss, the parameters to be trained of the convolutional neural sub-network are adjusted.
Optionally, the image segmentation network comprises: a convolutional neural subnetwork and a subject segmentation subnetwork; the body segmentation sub-network and the convolution neural sub-network have a plurality of levels of horizontal connection; the image segmentation processing module 72 may include: the feature extraction unit is used for inputting the image to be segmented into the convolutional neural sub-network for feature extraction processing to obtain output feature maps of a plurality of levels in the convolutional neural sub-network; and the image segmentation unit is used for inputting the output feature maps of a plurality of levels into corresponding levels of the main segmentation sub-network to perform feature fusion processing to obtain a main feature map, and decoding the main feature map to obtain a target segmentation result of the image to be segmented.
In one embodiment, as shown in fig. 8, there is provided an image segmentation network training apparatus, including: a training sample image acquisition module 81, a neural network processing module 82, a loss calculation module 83, and a neural network training module 84, wherein:
a training sample image obtaining module 81, configured to obtain a training sample image; the training sample image is marked with an image segmentation and annotation result, and the image segmentation and annotation result comprises a segmentation and annotation result aiming at a main body region and a segmentation and annotation result of an edge region of the target object;
the neural network processing module 82 is configured to input the training sample image into the initial image segmentation network to perform image segmentation processing, so as to obtain a segmentation prediction result of a main region and a segmentation prediction result of an edge region of the training sample image;
a loss calculation module 83 configured to calculate a loss between the segmentation labeling result and the segmentation prediction result of the body region as a body loss, and calculate a loss between the segmentation labeling result and the segmentation prediction result of the edge region as an edge loss;
and the neural network training module 84 is configured to adjust parameters to be trained of the initial image segmentation network according to the main body loss and the edge loss, so as to obtain a trained image segmentation network.
For specific limitations of the image segmentation apparatus, reference may be made to the above limitations of the image segmentation method; for specific limitations of the training apparatus of the image segmentation network, reference may be made to the above limitations of the training method of the image segmentation network; details are not repeated here. The modules in the image segmentation apparatus and in the training apparatus of the image segmentation network may be implemented wholly or partially by software, hardware, or a combination thereof. The modules may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an image segmentation method and/or a training method for an image segmentation network. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of a portion of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, or combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring an image to be segmented;
inputting an image to be segmented into a preset image segmentation network for image segmentation processing to obtain a target segmentation result of the image to be segmented; the image segmentation network is obtained based on the body loss and edge loss training of a training sample image; the body loss is the loss between the segmentation labeling result and the segmentation prediction result of the training sample image aiming at the body region of the target object, and the edge loss is the loss between the segmentation labeling result and the segmentation prediction result of the training sample image aiming at the edge region of the target object;
and according to the target segmentation result, segmenting an image of the main body region aiming at the target object from the image to be segmented.
On the other hand, the following steps can also be implemented:
acquiring a training sample image; the training sample image is marked with an image segmentation and annotation result, and the image segmentation and annotation result comprises a segmentation and annotation result aiming at a main body region and a segmentation and annotation result of an edge region of the target object;
inputting a training sample image into an initial image segmentation network to carry out image segmentation processing, and obtaining a segmentation prediction result of a main body region and a segmentation prediction result of an edge region of the training sample image;
calculating a loss between a segmentation labeling result and a segmentation prediction result of the main region as a main loss, and calculating a loss between a segmentation labeling result and a segmentation prediction result of the edge region as an edge loss;
and adjusting the parameters to be trained of the initial image segmentation network according to the main body loss and the edge loss to obtain the trained image segmentation network.
In one embodiment, a readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, realizes the following steps in one aspect:
acquiring an image to be segmented;
inputting an image to be segmented into a preset image segmentation network for image segmentation processing to obtain a target segmentation result of the image to be segmented; the image segmentation network is obtained based on the body loss and edge loss training of a training sample image; the body loss is the loss between the segmentation labeling result and the segmentation prediction result of the training sample image aiming at the body region of the target object, and the edge loss is the loss between the segmentation labeling result and the segmentation prediction result of the training sample image aiming at the edge region of the target object;
and according to the target segmentation result, segmenting an image of the main body region aiming at the target object from the image to be segmented.
On the other hand, the following steps can also be implemented:
acquiring a training sample image; the training sample image is marked with an image segmentation and annotation result, and the image segmentation and annotation result comprises a segmentation and annotation result aiming at a main body region and a segmentation and annotation result of an edge region of the target object;
inputting a training sample image into an initial image segmentation network to carry out image segmentation processing, and obtaining a segmentation prediction result of a main body region and a segmentation prediction result of an edge region of the training sample image;
calculating a loss between a segmentation labeling result and a segmentation prediction result of the main region as a main loss, and calculating a loss between a segmentation labeling result and a segmentation prediction result of the edge region as an edge loss;
and adjusting the parameters to be trained of the initial image segmentation network according to the main body loss and the edge loss to obtain the trained image segmentation network.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program instructing related hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (15)

1. An image segmentation method, comprising:
acquiring an image to be segmented;
inputting the image to be segmented into a preset image segmentation network for image segmentation processing to obtain a target segmentation result of the image to be segmented; wherein the image segmentation network is obtained based on the training of the body loss and the edge loss of the training sample image; the body loss is the loss between the segmentation labeling result and the segmentation prediction result of the training sample image for the body region of the target object, and the edge loss is the loss between the segmentation labeling result and the segmentation prediction result of the training sample image for the edge region of the target object;
according to the target segmentation result, segmenting an image of a main body region aiming at the target object from the image to be segmented;
the method further comprises the following steps: acquiring the training sample image; the training sample image is marked with an image segmentation marking result; determining a characteristic region in the edge region of the target object in the training sample image according to the image segmentation and labeling result;
the determining a feature region in the edge region of the target object in the training sample image according to the image segmentation and labeling result includes:
determining an edge area aiming at a target object in the training sample image according to the image segmentation and annotation result; performing morphological processing on the edge region, and determining a feature region with preset morphological characteristics in the edge region according to a morphological processing result; and/or calculating the local curvature of the edge region, and determining the characteristic region of which the local curvature is greater than a preset curvature threshold value.
2. The method according to claim 1, wherein the image segmentation labeling result comprises a segmentation labeling result of a main region and a segmentation labeling result of an edge region; after the acquiring the training sample image, further comprising:
inputting the training sample image into an initial image segmentation network to carry out image segmentation processing, and obtaining a segmentation prediction result of a main body region and a segmentation prediction result of an edge region of the training sample image;
calculating a loss between a segmentation labeling result and a segmentation prediction result of the body region as the body loss, and calculating a loss between a segmentation labeling result and a segmentation prediction result of the edge region as the edge loss;
and adjusting the parameters to be trained of the initial image segmentation network according to the main body loss and the edge loss to obtain the image segmentation network.
3. The method of claim 2, wherein the initial image segmentation network comprises a convolutional neural subnetwork, a subject segmentation subnetwork, and an edge segmentation subnetwork, the subject segmentation subnetwork and the edge segmentation subnetwork sharing the convolutional neural subnetwork;
inputting the training sample image into an initial image segmentation network to perform image segmentation processing, so as to obtain a segmentation prediction result of a main region and a segmentation prediction result of an edge region of the training sample image, wherein the method comprises the following steps:
inputting the training sample image into the convolutional neural sub-network for feature extraction processing to obtain an output feature map of the convolutional neural sub-network;
and respectively inputting the output feature map of the convolutional neural sub-network into the main segmentation sub-network and the edge segmentation sub-network for decoding, so as to respectively obtain the segmentation prediction result of the main region and the segmentation prediction result of the edge region.
4. The method of claim 3, wherein the subject segmentation sub-network and the edge segmentation sub-network are both of a feature pyramid network type, and there are a plurality of levels of horizontal connections with the convolutional neural sub-networks, respectively;
inputting the output feature maps of the convolutional neural sub-networks into the main segmentation sub-network and the edge segmentation sub-network respectively for decoding processing to obtain the segmentation prediction result of the main region and the segmentation prediction result of the edge region respectively, including:
inputting the output feature maps of the multiple hierarchies of the convolutional neural sub-network into corresponding hierarchies of the main segmentation sub-network to perform feature fusion processing to obtain a main feature map, and decoding the main feature map to obtain a segmentation prediction result of the main region;
and inputting the output feature maps of the multiple hierarchies into corresponding hierarchies of the edge segmentation sub-network to perform feature fusion processing to obtain edge feature maps, and performing decoding processing on the edge feature maps to obtain segmentation prediction results of the edge regions.
5. The method of claim 4, wherein the segmentation prediction result of the body region comprises: a first decoding processing result of a main feature map output by the last level in the main segmentation sub-network, and the segmentation prediction result of the edge region comprises: a second decoding processing result of the edge feature map output by the last level in the edge segmentation sub-network;
accordingly, calculating a loss between the segmentation labeling result and the segmentation prediction result of the body region as the body loss, and calculating a loss between the segmentation labeling result and the segmentation prediction result of the edge region as the edge loss, includes:
calculating the main body loss according to the segmentation labeling result of the main body region and the first decoding processing result; and calculating the edge loss according to the segmentation labeling result of the edge region and the second decoding processing result.
6. The method according to claim 5, wherein calculating the body loss according to the segmentation labeling result of the body region and the first decoding processing result, and calculating the edge loss according to the segmentation labeling result of the edge region and the second decoding processing result, comprises:
calculating a loss between the segmentation labeling result of the body region and the first decoding processing result as the body loss, and calculating a loss between the segmentation labeling result of the edge region and the second decoding processing result as the edge loss.
7. The method of claim 5, wherein the segmentation prediction of the subject region further comprises: a third decoding processing result of the main feature map output by at least one intermediate level in the main segmentation sub-network, wherein the segmentation prediction result of the edge region further includes: a fourth decoding processing result of the edge feature map output by at least one intermediate level in the edge segmentation sub-network;
calculating the main body loss according to the segmentation labeling result of the main body region and the first decoding processing result; and calculating the edge loss according to the segmentation labeling result of the edge region and the second decoding processing result, including:
respectively carrying out downsampling processing on the segmentation labeling result of the main area and the segmentation labeling result of the edge area to obtain a middle segmentation labeling result of the main area and a middle segmentation labeling result of the edge area;
calculating the loss between the segmentation labeling result of the main body region and the first decoding processing result, calculating the loss between the middle segmentation labeling result of the main body region and the third decoding processing result, and summing to obtain the main body loss;
and calculating the loss between the segmentation labeling result of the edge region and the second decoding processing result, calculating the loss between the middle segmentation labeling result of the edge region and the fourth decoding processing result, and summing to obtain the edge loss.
8. The method according to claim 2, further comprising, after determining a feature region in an edge region of the training sample image for a target object according to the image segmentation labeling result:
determining the weight of each pixel point in the training sample image according to the characteristic region; the main body loss and the edge loss of the training sample image are related to the weight of each pixel point; and the weight of the pixel points in the characteristic region is higher than the weight of the pixel points in other regions except the characteristic region in the training sample image.
9. The method according to any one of claims 3-7, wherein the adjusting the parameter to be trained of the initial image segmentation network according to the subject loss and the edge loss comprises:
adjusting a parameter to be trained of the subject segmentation sub-network based on the subject loss;
adjusting parameters to be trained of the edge segmentation sub-network based on the edge loss;
adjusting a parameter to be trained of the convolutional neural subnetwork based on a weighted sum of the subject loss and the edge loss.
10. The method of claim 1, wherein the image segmentation network comprises: a convolutional neural subnetwork and a subject segmentation subnetwork; the subject segmentation subnetwork and the convolutional neural subnetwork have a plurality of levels of horizontal connections therebetween;
inputting the image to be segmented into a preset image segmentation network for image segmentation processing to obtain a target segmentation result of the image to be segmented, wherein the target segmentation result comprises the following steps:
inputting the image to be segmented into the convolutional neural sub-network for feature extraction processing to obtain output feature maps of the multiple hierarchies in the convolutional neural sub-network;
and inputting the output feature maps of the multiple levels into corresponding levels of the main segmentation sub-network, performing feature fusion processing to obtain a main feature map, and performing decoding processing on the main feature map to obtain a target segmentation result of the image to be segmented.
11. A method for training an image segmentation network, the method comprising:
acquiring a training sample image; the training sample image is marked with an image segmentation and annotation result, and the image segmentation and annotation result comprises a segmentation and annotation result of a main body region and a segmentation and annotation result of an edge region of a target object; inputting the training sample image into an initial image segmentation network to carry out image segmentation processing, and obtaining a segmentation prediction result of a main body region and a segmentation prediction result of an edge region of the training sample image;
calculating a loss between a segmentation labeling result and a segmentation prediction result of the body region as the body loss, and calculating a loss between a segmentation labeling result and a segmentation prediction result of the edge region as the edge loss;
adjusting parameters to be trained of the initial image segmentation network according to the main body loss and the edge loss to obtain a trained image segmentation network;
the method further comprises the following steps: determining a characteristic region in the edge region of the target object in the training sample image according to the image segmentation and labeling result;
the determining a feature region in the edge region of the target object in the training sample image according to the image segmentation and labeling result includes:
determining an edge area aiming at a target object in the training sample image according to the image segmentation and annotation result; performing morphological processing on the edge region, and determining a feature region with preset morphological characteristics in the edge region according to a morphological processing result; and/or calculating the local curvature of the edge region, and determining the characteristic region of which the local curvature is greater than a preset curvature threshold value.
12. An image segmentation apparatus, comprising:
the image to be segmented acquisition module is used for acquiring an image to be segmented;
the image segmentation processing module is used for inputting the image to be segmented into a preset image segmentation network to perform image segmentation processing so as to obtain a target segmentation result of the image to be segmented; wherein the image segmentation network is obtained based on the training of the body loss and the edge loss of the training sample image; the body loss is the loss between the segmentation labeling result and the segmentation prediction result of the training sample image for the body region of the target object, and the edge loss is the loss between the segmentation labeling result and the segmentation prediction result of the training sample image for the edge region of the target object;
a main body region segmentation module, configured to segment an image of a main body region for the target object from the image to be segmented according to the target segmentation result;
the apparatus further comprises a neural network training module, wherein the neural network training module comprises: a training sample image acquisition unit, configured to acquire the training sample image, the training sample image being annotated with an image segmentation labeling result; and a feature region determining unit, configured to determine a feature region in an edge region of a target object in the training sample image according to the image segmentation labeling result;
the feature region determination unit is further configured to: determining an edge area aiming at a target object in the training sample image according to the image segmentation and annotation result; performing morphological processing on the edge region, and determining a feature region with preset morphological characteristics in the edge region according to a morphological processing result; and/or calculating the local curvature of the edge region, and determining the characteristic region of which the local curvature is greater than a preset curvature threshold value.
13. An apparatus for training an image segmentation network, comprising:
the training sample image acquisition module is used for acquiring a training sample image; the training sample image is marked with an image segmentation and annotation result, and the image segmentation and annotation result comprises a segmentation and annotation result of a main body region and a segmentation and annotation result of an edge region of a target object;
the neural network processing module is used for inputting the training sample image into an initial image segmentation network for image segmentation processing to obtain a segmentation prediction result of a main body region and a segmentation prediction result of an edge region of the training sample image;
a loss calculation module, configured to calculate a loss between the segmentation labeling result and the segmentation prediction result of the main region as the main loss, and calculate a loss between the segmentation labeling result and the segmentation prediction result of the edge region as the edge loss;
the neural network training module is used for adjusting parameters to be trained of the initial image segmentation network according to the main body loss and the edge loss to obtain a trained image segmentation network;
the device also comprises a characteristic region determining unit, a characteristic region determining unit and a characteristic region determining unit, wherein the characteristic region determining unit is used for determining a characteristic region in the edge region of the target object in the training sample image according to the image segmentation and labeling result;
the feature region determination unit is further configured to: determining an edge area aiming at a target object in the training sample image according to the image segmentation and annotation result; performing morphological processing on the edge region, and determining a feature region with preset morphological characteristics in the edge region according to a morphological processing result; and/or calculating the local curvature of the edge region, and determining the characteristic region of which the local curvature is greater than a preset curvature threshold value.
14. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 11 when executing the computer program.
15. A readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 11.
CN201910724764.8A 2019-08-07 2019-08-07 Image segmentation and training method and device of image segmentation network and computer equipment Active CN110517278B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910724764.8A CN110517278B (en) 2019-08-07 2019-08-07 Image segmentation and training method and device of image segmentation network and computer equipment

Publications (2)

Publication Number Publication Date
CN110517278A CN110517278A (en) 2019-11-29
CN110517278B true CN110517278B (en) 2022-04-29

Family

ID=68624579

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910724764.8A Active CN110517278B (en) 2019-08-07 2019-08-07 Image segmentation and training method and device of image segmentation network and computer equipment

Country Status (1)

Country Link
CN (1) CN110517278B (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256643A (en) * 2020-02-10 2021-08-13 武汉Tcl集团工业研究院有限公司 Portrait segmentation model training method, storage medium and terminal equipment
CN111311616B (en) * 2020-02-13 2023-04-07 北京百度网讯科技有限公司 Method and apparatus for segmenting an image
CN111126358A (en) * 2020-02-25 2020-05-08 京东方科技集团股份有限公司 Face detection method, face detection device, storage medium and equipment
CN111275045B (en) * 2020-02-28 2024-02-06 Oppo广东移动通信有限公司 Image main body recognition method and device, electronic equipment and medium
CN111402258A (en) * 2020-03-12 2020-07-10 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN111401247B (en) * 2020-03-17 2023-07-28 杭州小影创新科技股份有限公司 Portrait segmentation method based on cascade convolution neural network
CN111311485B (en) * 2020-03-17 2023-07-04 Oppo广东移动通信有限公司 Image processing method and related device
CN111445493B (en) * 2020-03-27 2024-04-12 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN111626298B (en) * 2020-04-17 2023-08-18 中国科学院声学研究所 Real-time image semantic segmentation device and segmentation method
CN113744293A (en) * 2020-05-13 2021-12-03 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, electronic device, and readable storage medium
CN113706562B (en) * 2020-07-08 2023-04-07 腾讯医疗健康(深圳)有限公司 Image segmentation method, device and system and cell segmentation method
CN112132847A (en) * 2020-09-27 2020-12-25 北京字跳网络技术有限公司 Model training method, image segmentation method, device, electronic device and medium
CN112465834A (en) * 2020-11-26 2021-03-09 中科麦迪人工智能研究院(苏州)有限公司 Blood vessel segmentation method and device
CN112580567B (en) * 2020-12-25 2024-04-16 深圳市优必选科技股份有限公司 Model acquisition method, model acquisition device and intelligent equipment
CN112581481B (en) * 2020-12-30 2024-04-12 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN112926461B (en) * 2021-02-26 2024-04-19 商汤集团有限公司 Neural network training and driving control method and device
CN113192060A (en) * 2021-05-25 2021-07-30 上海商汤临港智能科技有限公司 Image segmentation method and device, electronic equipment and storage medium
CN113349792B (en) * 2021-05-31 2022-10-11 平安科技(深圳)有限公司 Method, apparatus, device and medium for classifying multi-lead electrocardiosignal
CN113408524A (en) * 2021-06-11 2021-09-17 南开大学 Crop image segmentation and extraction algorithm based on MASK RCNN
CN113643311B (en) * 2021-06-28 2024-04-09 清华大学 Image segmentation method and device with robust boundary errors
CN113506307B (en) * 2021-06-29 2022-05-27 吉林大学 Medical image segmentation method for improving U-Net neural network based on residual connection
CN113822901B (en) * 2021-07-21 2023-12-12 南京旭锐软件科技有限公司 Image segmentation method and device, storage medium and electronic equipment
WO2023230927A1 (en) * 2022-05-31 2023-12-07 北京小米移动软件有限公司 Image processing method and device, and readable storage medium
CN115272377B (en) * 2022-09-27 2022-12-27 松立控股集团股份有限公司 Vehicle segmentation method fusing image edge information
CN117635942A (en) * 2023-12-05 2024-03-01 齐鲁工业大学(山东省科学院) Cardiac MRI image segmentation method based on edge feature enhancement

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108647684A (en) * 2018-05-02 2018-10-12 深圳市唯特视科技有限公司 A kind of Weakly supervised semantic segmentation method based on guiding attention inference network
CN109102502A (en) * 2018-08-03 2018-12-28 西北工业大学 Pulmonary nodule detection method based on Three dimensional convolution neural network
CN109740608A (en) * 2018-12-28 2019-05-10 北京工业大学 A kind of image partition method based on deep learning
CN110008915A (en) * 2019-04-11 2019-07-12 电子科技大学 The system and method for dense human body attitude estimation is carried out based on mask-RCNN
CN110059698A (en) * 2019-04-30 2019-07-26 福州大学 The semantic segmentation method and system based on the dense reconstruction in edge understood for streetscape

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Dongcai Cheng et al., "FusionNet: Edge Aware Deep Convolutional Networks for Semantic Segmentation of Remote Sensing Harbor Images," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, no. 12, 2017-12-31 (Fig. 1, Abstract, Section IV-B, paragraph 1). *
Sitthichok Chaichulee et al., "Multi-Task Convolutional Neural Network for Patient Detection and Skin Segmentation in Continuous Non-Contact Vital Sign Monitoring," 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), 2017-12-31, pp. 266-272. *
Chen Liangfu et al., "A Multi-Task Model Based on Residual Networks" (in Chinese), China Integrated Circuit, no. 219, 2017-08-31, pp. 64-71. *


Similar Documents

Publication Publication Date Title
CN110517278B (en) Image segmentation and training method and device of image segmentation network and computer equipment
KR102592270B1 (en) Facial landmark detection method and apparatus, computer device, and storage medium
WO2021120695A1 (en) Image segmentation method and apparatus, electronic device and readable storage medium
CN111079632A (en) Training method and device of text detection model, computer equipment and storage medium
CN109886077B (en) Image recognition method and device, computer equipment and storage medium
CN109493417B (en) Three-dimensional object reconstruction method, device, equipment and storage medium
CN110516541B (en) Text positioning method and device, computer readable storage medium and computer equipment
CN109858333B (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN111598779B (en) Image super-resolution processing method and device, electronic equipment and storage medium
CN111062263B (en) Method, apparatus, computer apparatus and storage medium for hand gesture estimation
CN112597941A (en) Face recognition method and device and electronic equipment
CN109711419A (en) Image processing method, device, computer equipment and storage medium
CN110287836B (en) Image classification method and device, computer equipment and storage medium
CN112418195B (en) Face key point detection method and device, electronic equipment and storage medium
CN112241976A (en) Method and device for training model
CN110555526B (en) Neural network model training method, image recognition method and device
CN112749726B (en) Training method and device for target detection model, computer equipment and storage medium
CN112651291A (en) Video-based posture estimation method, device, medium and electronic equipment
CN112001859A (en) Method and system for repairing face image
CN111047088A (en) Prediction image acquisition method and device, computer equipment and storage medium
CN111582155A (en) Living body detection method, living body detection device, computer equipment and storage medium
Wang et al. Object counting in video surveillance using multi-scale density map regression
CN111754429A (en) Motion vector post-processing method and device, electronic device and storage medium
CN108875611B (en) Video motion recognition method and device
CN115115552B (en) Image correction model training method, image correction device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant