CN114359233A - Image segmentation model training method and device, electronic equipment and readable storage medium

Info

Publication number
CN114359233A
Authority
CN
China
Prior art keywords
image
segmentation
segmentation model
road landscape
road
Prior art date
Legal status
Granted
Application number
CN202210015383.4A
Other languages
Chinese (zh)
Other versions
CN114359233B (en)
Inventor
许春磊
王超
马维士
Current Assignee
Beijing Huayuan Information Technology Co Ltd
Original Assignee
Beijing Huayuan Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Huayuan Information Technology Co Ltd
Priority to CN202210015383.4A
Publication of CN114359233A
Application granted
Publication of CN114359233B
Status: Active

Abstract

The application provides an image segmentation model training method and device, an electronic device, and a readable storage medium. The method comprises: acquiring a road landscape data set for training an image segmentation model; inputting a road landscape image into an initial image segmentation model, performing network semantic segmentation on the road landscape image according to the identified first object features of the objects it contains to obtain a first segmented image, and deleting from the first segmented image any object features that do not exist among the second object features of the objects marked in the annotation image, to obtain a second segmented image; performing a loss function calculation using the second segmented image and its corresponding annotation image to obtain a loss value; and performing back-propagation training on the initial image segmentation model using the loss value so as to adjust the learnable parameters in the initial image segmentation model. This method improves the recognition accuracy of the image segmentation model.

Description

Image segmentation model training method and device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for training an image segmentation model, an electronic device, and a readable storage medium.
Background
While an automatic driving automobile is on the road, the road landscape images it captures need to be semantically segmented so that the different objects contained in them can be distinguished and identified, for example identifying from the road landscape image which regions are indicator lights, which are traffic signs, and which are obstacles or pedestrians.
However, when the automatic driving automobile runs in windy, sandy, rainy, or snowy weather, the captured road landscape images may contain environmental interference information (such as sand, dust, rain, snow, or abnormal illumination intensity). This interference reduces the clarity of the road landscape image, hampers the distinction and recognition of the objects in it, and results in low recognition accuracy.
Disclosure of Invention
In view of the above, an object of the present application is to provide an image segmentation model training method and device, an electronic device, and a readable storage medium, so as to improve the accuracy with which the image segmentation model identifies each object in a road landscape image.
In a first aspect, an embodiment of the present application provides an image segmentation model training method, including:
acquiring a road landscape data set for training an image segmentation model, the road landscape data set including road landscape images without environmental interference information, road landscape images with environmental interference information, and an annotation image corresponding to each road landscape image, wherein the annotation image is an image in which the categories of the different objects in the road landscape image are labeled;
inputting the road landscape image into an initial image segmentation model, and performing first image feature processing on the road landscape image through the initial image segmentation model; the first image feature processing includes: identifying a first object feature of each object contained in the road landscape image, performing network semantic segmentation on each object contained in the road landscape image according to the first object feature to obtain a first segmented image, comparing the first segmented image with the labeled image corresponding to the first segmented image, and deleting object features which do not exist in a second object feature of each object labeled in the labeled image from the first segmented image to obtain a second segmented image;
performing loss function calculation by using the second segmentation image and the annotation image corresponding to the second segmentation image to obtain a loss value;
and performing back-propagation training on the initial image segmentation model by using the loss value so as to adjust the learnable parameters in the initial image segmentation model; when the learnable parameters have converged, model training ends and the image segmentation model is obtained.
With reference to the first aspect, embodiments of the present application provide a first possible implementation manner of the first aspect, where the road landscape image includes overlapped objects;
the identifying a first object feature of each object included in the road landscape image includes:
when the overlapped objects are objects of the same kind, taking the overlapped objects as one target object, and identifying a first object feature of the target object through the initial image segmentation model;
when the overlapped objects are different types of objects, identifying each object in the overlapped objects respectively through the initial image segmentation model to obtain a first object characteristic of each object.
With reference to the first aspect, this embodiment provides a second possible implementation manner of the first aspect, where the deleting, from the first divided image, an object feature that does not exist in the second object feature of each object marked in the marked image to obtain a second divided image includes:
deleting object features which do not exist in the second object features of each object marked in the marked image from the first divided image to obtain a third divided image; the object features which do not exist in the second object features are features of environmental interference information; the third segmentation image is an image from which the environmental interference information is removed;
and supplementing the missing region according to the pixel value of the first pixel point in the target range around the missing region on the third segmentation image aiming at the missing region of the missing pixel point after the environmental interference information is removed in the third segmentation image, so as to obtain the second segmentation image.
With reference to the second possible implementation manner of the first aspect, an embodiment of the present application provides a third possible implementation manner of the first aspect, where the supplementing the missing region according to a pixel value of a first pixel point in a target range around the missing region on the third segmented image includes:
calculating the difference of pixel values between the first pixel points on two sides of the central point on a straight line passing through the central point in the target range based on the central point of the missing region to obtain a difference value corresponding to each straight line;
determining a target straight line corresponding to the largest difference value from the straight lines according to the difference value corresponding to each straight line, taking the direction from large to small of the pixel values of the first pixel points positioned on the two sides of the central point on the target straight line as a target direction, and calculating the missing length of the missing area in the target direction;
calculating the ratio of the difference value corresponding to the target straight line to the missing length to obtain the change value of the pixel value of a second pixel point in the missing area in the target direction;
and supplementing the pixel values of the second pixel points in the missing region according to the pixel values of the first pixel points located on the two sides of the central point on the target straight line, the target direction, and the change value.
With reference to the second possible implementation manner of the first aspect, an embodiment of the present application provides a fourth possible implementation manner of the first aspect, where the supplementing the missing region according to a pixel value of a first pixel point in a target range around the missing region on the third segmented image includes:
calculating the average value of the pixel values of the first pixel points of the target number adjacent to each edge position on the missing area, and supplementing the edge position by using the average value to obtain a supplemented area;
removing the supplementary region from the missing region to obtain a target missing region, and judging whether the area of the target missing region is 0;
if the area of the target missing region is 0, indicating that the missing region is completed;
if the area of the target missing region is not 0, taking the target missing region as a new missing region, and continuing to execute the following steps: and calculating the average value of the pixel values of the first pixel points of the target number adjacent to each edge position on the missing region, and supplementing the edge position by using the average value to obtain a supplemented region.
With reference to the first aspect, an embodiment of the present application provides a fifth possible implementation manner of the first aspect, where acquiring a road landscape data set used for training an image segmentation model includes:
acquiring a plurality of initial road landscape images;
for each initial road landscape image, carrying out image processing on the initial road landscape image to generate a sample image corresponding to the initial road landscape image; the image processing includes: one or more of image flipping, image cropping, and image scaling;
and taking the initial road landscape image and the sample image corresponding to the initial road landscape image as the road landscape image in the road landscape data set.
With reference to the first aspect, an embodiment of the present application provides a sixth possible implementation manner of the first aspect, where after the image segmentation model is obtained, the method further includes:
acquiring a road landscape image to be identified;
inputting the road landscape image to be identified into the image segmentation model, and performing second image characteristic processing on the road landscape image to be identified through the image segmentation model; the second image feature processing includes: identifying third object characteristics of each object contained in the road landscape image to be identified, performing network semantic segmentation on each object contained in the road landscape image to be identified according to the third object characteristics to obtain a third segmentation image, comparing the types of each object obtained by segmentation contained in the third segmentation image with preset types, and deleting object characteristics corresponding to objects which do not exist in the preset types from each object contained in the third segmentation image to obtain a target segmentation image.
In a second aspect, an embodiment of the present application further provides an image segmentation model training apparatus, including:
the first acquisition module is used for acquiring a road landscape data set used for training an image segmentation model; the road-landscape data set includes: the road landscape image without environmental interference information, the road landscape image with the environmental interference information and the label image corresponding to each road landscape image are obtained; the labeled image is an image for labeling the types of different objects in the road landscape image;
the first input module is used for inputting the road landscape image into an initial image segmentation model and carrying out first image feature processing on the road landscape image through the initial image segmentation model; the first image feature processing includes: identifying a first object feature of each object contained in the road landscape image, performing network semantic segmentation on each object contained in the road landscape image according to the first object feature to obtain a first segmented image, comparing the first segmented image with the labeled image corresponding to the first segmented image, and deleting object features which do not exist in a second object feature of each object labeled in the labeled image from the first segmented image to obtain a second segmented image;
the calculation module is used for performing loss function calculation by using the second segmentation image and the annotation image corresponding to the second segmentation image to obtain a loss value;
and the training module is used for carrying out back propagation training on the initial image segmentation model by using the loss value so as to adjust the learnable parameters in the initial image segmentation model until the learnable parameters finish convergence and then finish model training to obtain the image segmentation model.
With reference to the second aspect, the present application provides a first possible implementation manner of the second aspect, where the road landscape image includes overlapped objects;
the first input module, when configured to identify a first object feature of each object included in the road landscape image, is specifically configured to:
when the overlapped objects are objects of the same kind, taking the overlapped objects as one target object, and identifying a first object feature of the target object through the initial image segmentation model;
when the overlapped objects are different types of objects, identifying each object in the overlapped objects respectively through the initial image segmentation model to obtain a first object characteristic of each object.
With reference to the second aspect, the present application provides a second possible implementation manner of the second aspect, wherein when the first input module is configured to delete, from the first divided image, an object feature that does not exist in the second object feature of each object marked in the marked image, and obtain a second divided image, the first input module is specifically configured to:
deleting object features which do not exist in the second object features of each object marked in the marked image from the first divided image to obtain a third divided image; the object features which do not exist in the second object features are features of environmental interference information; the third segmentation image is an image from which the environmental interference information is removed;
and supplementing the missing region according to the pixel value of the first pixel point in the target range around the missing region on the third segmentation image aiming at the missing region of the missing pixel point after the environmental interference information is removed in the third segmentation image, so as to obtain the second segmentation image.
With reference to the second possible implementation manner of the second aspect, an embodiment of the present application provides a third possible implementation manner of the second aspect, where when the first input module is configured to supplement the missing region according to a pixel value of a first pixel point in a target range around the missing region on the third segmented image, the first input module is specifically configured to:
calculating the difference of pixel values between the first pixel points on two sides of the central point on a straight line passing through the central point in the target range based on the central point of the missing region to obtain a difference value corresponding to each straight line;
determining a target straight line corresponding to the largest difference value from the straight lines according to the difference value corresponding to each straight line, taking the direction from large to small of the pixel values of the first pixel points positioned on the two sides of the central point on the target straight line as a target direction, and calculating the missing length of the missing area in the target direction;
calculating the ratio of the difference value corresponding to the target straight line to the missing length to obtain the change value of the pixel value of a second pixel point in the missing area in the target direction;
and supplementing the pixel values of the second pixel points in the missing region according to the pixel values of the first pixel points located on the two sides of the central point on the target straight line, the target direction, and the change value.
With reference to the second possible implementation manner of the second aspect, an embodiment of the present application provides a fourth possible implementation manner of the second aspect, where when the first input module is configured to supplement the missing region according to a pixel value of a first pixel point in a target range around the missing region on the third segmented image, the first input module is specifically configured to:
calculating the average value of the pixel values of the first pixel points of the target number adjacent to each edge position on the missing area, and supplementing the edge position by using the average value to obtain a supplemented area;
removing the supplementary region from the missing region to obtain a target missing region, and judging whether the area of the target missing region is 0;
if the area of the target missing region is 0, indicating that the missing region is completed;
if the area of the target missing region is not 0, taking the target missing region as a new missing region, and continuing to execute the following steps: and calculating the average value of the pixel values of the first pixel points of the target number adjacent to each edge position on the missing region, and supplementing the edge position by using the average value to obtain a supplemented region.
With reference to the second aspect, embodiments of the present application provide a fifth possible implementation manner of the second aspect, where the first obtaining module, when configured to obtain a road-landscape data set used for training an image segmentation model, is specifically configured to:
acquiring a plurality of initial road landscape images;
for each initial road landscape image, carrying out image processing on the initial road landscape image to generate a sample image corresponding to the initial road landscape image; the image processing includes: one or more of image flipping, image cropping, and image scaling;
and taking the initial road landscape image and the sample image corresponding to the initial road landscape image as the road landscape image in the road landscape data set.
With reference to the second aspect, embodiments of the present application provide a sixth possible implementation manner of the second aspect, where the apparatus further includes:
the second acquisition module is used for acquiring a road landscape image to be recognized after the training module obtains the image segmentation model;
the second input module is used for inputting the road landscape image to be identified into the image segmentation model and carrying out second image characteristic processing on the road landscape image to be identified through the image segmentation model; the second image feature processing includes: identifying third object characteristics of each object contained in the road landscape image to be identified, performing network semantic segmentation on each object contained in the road landscape image to be identified according to the third object characteristics to obtain a third segmentation image, comparing the types of each object obtained by segmentation contained in the third segmentation image with preset types, and deleting object characteristics corresponding to objects which do not exist in the preset types from each object contained in the third segmentation image to obtain a target segmentation image.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions being executable by the processor to perform the steps of any one of the possible implementations of the first aspect.
In a fourth aspect, this application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and the computer program is executed by a processor to perform the steps in any one of the possible implementation manners of the first aspect.
In the image segmentation model, network semantic segmentation is performed on each object contained in the road landscape image according to the identified first object features of those objects to obtain a first segmented image; the first segmented image is compared with its corresponding annotation image, and object features that do not exist among the second object features of the objects marked in the annotation image are deleted from the first segmented image to obtain a second segmented image; the image segmentation model is then trained according to the loss value between the second segmented image and the annotation image.
In this scheme, only the objects in the road landscape image are marked in the annotation image, and the environmental interference information present in the road landscape image is not. Deleting from the first segmented image the object features that do not exist among the second object features of the marked objects therefore deletes exactly the object features of the environmental interference information, so the image segmentation model learns to remove environmental interference information from the road landscape image when performing network semantic segmentation. This reduces the influence of environmental interference information on the distinction and recognition of the objects in the road landscape image and thereby improves the accuracy with which the image segmentation model identifies each object.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be considered as limiting the scope; for those skilled in the art, other related drawings can be derived from these drawings without inventive effort.
Fig. 1 is a flowchart illustrating an image segmentation model training method provided in an embodiment of the present application;
FIG. 2 is a diagram illustrating a missing region in a third segmented image provided by an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating another missing region in a third segmented image provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram illustrating an image segmentation model training apparatus provided in an embodiment of the present application;
fig. 5 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. The components of the embodiments, as generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments is not intended to limit the scope of the claimed application, but merely represents selected embodiments of the application. All other embodiments obtained by a person skilled in the art from the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
Autonomous driving can be divided into three parts: perception, decision making, and control. Of these, perception is the most fundamental and most important. In the field of automatic driving, perception is mainly responsible for acquiring road landscape images from the outside world and then identifying the drivable area and environmental information such as signal lamps and signs from the acquired images, which serve as decision conditions. Because automatic driving requires perception on a changing road, road landscape images must be analyzed with high accuracy.
When a road landscape image is analyzed, the captured image is semantically segmented so that the different objects it contains can be distinguished and identified, for example identifying which regions are indicator lights, which are traffic signs, and which are obstacles or pedestrians.
However, when the automatic driving automobile runs in windy, sandy, rainy, or snowy weather, the captured road landscape images may contain environmental interference information (such as sand, dust, rain, snow, or abnormal illumination intensity). This interference reduces the clarity of the road landscape image, hampers the distinction and recognition of the objects in it, and results in low recognition accuracy.
In view of the above problems, embodiments of the present application provide an image segmentation model training method, an image segmentation model training device, an electronic device, and a readable storage medium to improve the accuracy of the image segmentation model in identifying each object in a road landscape image, which are described below by way of example.
The first embodiment is as follows:
To facilitate understanding of the present embodiment, the image segmentation model training method disclosed in the embodiments of the present application is first described in detail. Fig. 1 shows a flowchart of the image segmentation model training method provided in an embodiment of the present application. The overall scheme includes two stages: an image segmentation model training stage, and a stage of identifying each object contained in a road landscape image to be identified by using the trained image segmentation model. In the training stage, an initial image segmentation model is trained with a road landscape data set to obtain the trained image segmentation model, as shown in fig. 1. The training stage specifically includes the following steps:
S101: acquiring a road landscape data set for training an image segmentation model; the road landscape data set includes road landscape images without environmental interference information, road landscape images with environmental interference information, and the annotation image corresponding to each road landscape image; the annotation image is an image in which the categories of the different objects in the road landscape image are labeled.
The environmental interference information includes: one or more of rain, snow, sand dust, strong light and weak light. The road landscape image without the environmental interference information does not contain the environmental interference information, and the road landscape image with the environmental interference information contains the environmental interference information. The road landscape image can also comprise objects such as roads, vehicles, buildings, pedestrians, signs, flower beds and the like on the roads.
Each road landscape image corresponds to one annotation image, which labels the categories of the different kinds of objects in the road landscape image. In a specific embodiment, the road landscape image is annotated manually: objects of each category in the road landscape image are labeled with different colors or different annotation forms, and the annotated image is used as the annotation image of the road landscape image. The categories of the objects included in the road landscape image may include buildings, pedestrians, obstacles, roads, and the like, and these categories may be further subdivided, for example into vehicles, animals, and so on. The present embodiment does not limit the specific classification.
Specifically, when labeling objects of the various categories in the road landscape image, each label may be fitted to the shape of the corresponding object. When objects are labeled with different colors, all objects of the same category may be labeled with the same color. For example, pedestrians may be labeled in blue, with the label following the shape of each pedestrian, and all pedestrians in the road landscape image labeled in blue.
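As an illustration of this labeling scheme, the following Python sketch decodes a color-coded annotation image into an integer class mask of the kind a segmentation model is trained against. The color-to-category mapping is a hypothetical assumption; only "blue for pedestrians" comes from the example above.

```python
# Minimal sketch: decode a color annotation image into a class-id mask.
# The colors below are illustrative assumptions; only "blue -> pedestrian"
# is taken from the example in the text.
import numpy as np

CLASS_COLORS = {
    (0, 0, 255): 1,     # blue -> pedestrian (example from the text)
    (128, 64, 128): 2,  # purple -> road (assumed)
    (70, 70, 70): 3,    # gray -> building (assumed)
    (0, 0, 142): 4,     # dark blue -> vehicle (assumed)
}

def annotation_to_labels(annotation_rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 color annotation image into an H x W class-id
    mask; pixels with an unmapped color receive class id 0 (unlabeled)."""
    labels = np.zeros(annotation_rgb.shape[:2], dtype=np.int64)
    for color, class_id in CLASS_COLORS.items():
        mask = np.all(annotation_rgb == np.array(color, dtype=np.uint8), axis=-1)
        labels[mask] = class_id
    return labels
```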
S102: inputting the road landscape image into an initial image segmentation model, and performing first image characteristic processing on the road landscape image through the initial image segmentation model; the first image feature processing includes: the method comprises the steps of identifying first object features of each object contained in a road landscape image, performing network semantic segmentation on each object contained in the road landscape image according to the first object features to obtain a first segmentation image, comparing the first segmentation image with a labeled image corresponding to the first segmentation image, and deleting object features which do not exist in second object features of each object labeled in the labeled image from the first segmentation image to obtain a second segmentation image.
The first segmented image is obtained by performing network semantic segmentation on the road landscape image through the initial image segmentation model, that is, by labeling the objects of the various categories in the road landscape image through the model; specifically, the initial image segmentation model labels each object with a color fitted to the object's shape.
If the road landscape image includes environmental interference information, the first segmented image corresponding to it may also label that interference information. Therefore, in this embodiment, the annotation image corresponding to the road landscape image is compared with the first segmented image. Since only the objects in the road landscape image are labeled in the annotation image, and the environmental interference information is not, deleting from the first segmented image the object features that do not exist among the second object features of the objects labeled in the annotation image deletes exactly the object features of the environmental interference information, so that no environmental interference information exists in the resulting second segmented image.
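A minimal numpy sketch of this comparison step is given below, assuming both the first segmented image and the annotation image are integer class masks; the application does not prescribe this representation. Predicted classes absent from the annotation are treated as environmental interference and marked missing, producing the third segmented image that the later supplement step fills in.

```python
# Sketch: remove features whose class does not appear among the annotated
# second object features. Masks are assumed to be integer class-id arrays.
import numpy as np

MISSING = -1  # marker for deleted (interference) pixels

def remove_unannotated_features(first_seg: np.ndarray,
                                annotation: np.ndarray) -> np.ndarray:
    """first_seg, annotation: H x W int class masks. Returns the
    segmentation with unannotated classes marked as MISSING."""
    annotated_classes = np.unique(annotation)
    interference = ~np.isin(first_seg, annotated_classes)
    result = first_seg.copy()
    result[interference] = MISSING  # holes, to be supplemented later
    return result
```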
S103: and performing loss function calculation by using the second segmentation image and the annotation image corresponding to the second segmentation image to obtain a loss value.
Although the second segmented image contains no environmental interference information, after the road landscape image is subjected to the first image feature processing by the initial image segmentation model, the labeling of the objects may still be incorrect: for example, labels may be missing (some pedestrians are not labeled) or wrong (a pedestrian is labeled as a building). The labeling result in the second segmented image may therefore differ from the labeling on the annotation image, which is why the loss function is calculated using the second segmented image and its corresponding annotation image to obtain the loss value.
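The application does not name a particular loss function; the sketch below assumes the common choice of pixel-wise cross-entropy between the model's class scores for the second segmented image and the annotation labels (PyTorch assumed).

```python
# Hedged sketch of the loss computation in S103 (framework and loss choice
# are assumptions, not specified by the application).
import torch
import torch.nn.functional as F

def segmentation_loss(second_seg_logits: torch.Tensor,
                      annotation_labels: torch.Tensor) -> torch.Tensor:
    """second_seg_logits: N x C x H x W class scores for the second
    segmented image; annotation_labels: N x H x W ground-truth class ids."""
    return F.cross_entropy(second_seg_logits, annotation_labels)
```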
S104: and performing back propagation training on the initial image segmentation model by using the loss value to adjust the learnable parameters in the initial image segmentation model until the learnable parameters are converged, and finishing the model training to obtain the image segmentation model.
In one particular implementation, the initial image segmentation model satisfies the training cutoff condition when convergence is completed for the learnable parameters in the initial image segmentation model. Wherein the training cutoff conditions include one or more of: the recognition accuracy of the initial image segmentation model is greater than the preset recognition accuracy, the recognition speed of the initial image segmentation model is greater than the preset recognition speed, and the training times of the initial image segmentation model are greater than the preset training times.
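A sketch of the back-propagation loop with the cutoff conditions listed above might look as follows; the optimizer, thresholds, and the accuracy helper are illustrative assumptions rather than parts of the application.

```python
# Hedged training-loop sketch for S104 (PyTorch assumed; thresholds stand in
# for the "preset" values mentioned above).
import torch

@torch.no_grad()
def evaluate_accuracy(model, dataloader) -> float:
    """Hypothetical helper: per-pixel accuracy over the data set."""
    correct = total = 0
    for images, annotations in dataloader:
        preds = model(images).argmax(dim=1)
        correct += (preds == annotations).sum().item()
        total += annotations.numel()
    return correct / total

def train(model, dataloader, loss_fn, max_epochs=100, target_accuracy=0.95):
    optimizer = torch.optim.Adam(model.parameters())
    for epoch in range(max_epochs):              # cutoff: preset training times
        for images, annotations in dataloader:
            loss = loss_fn(model(images), annotations)
            optimizer.zero_grad()
            loss.backward()                      # back-propagation training
            optimizer.step()                     # adjust learnable parameters
        if evaluate_accuracy(model, dataloader) > target_accuracy:
            break                                # cutoff: recognition accuracy
    return model
```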
In one possible embodiment, the road landscape image contains overlapped objects;
when the first object feature of each object included in the road landscape image is identified in step S102, the following steps may be specifically performed:
S1021: when the overlapped objects are objects of the same kind, the overlapped objects are taken as one target object, and the first object feature of the target object is identified through the initial image segmentation model.
For example, when the overlapped objects are two pedestrians, the two pedestrians are taken as one target object, and the first object feature of the target object is identified through the initial image segmentation model.
S1022: when the overlapped objects are different types of objects, identifying each object in the overlapped objects through the initial image segmentation model respectively to obtain a first object characteristic of each object.
Illustratively, when the overlapped objects are a pedestrian and a vehicle, the pedestrian and the vehicle are respectively identified by an initial image segmentation model, and a first object feature of the pedestrian and a first object feature of the vehicle are obtained.
In another possible implementation, if the road-landscape image does not include the overlapped objects, each object in the road-landscape image is identified by the initial image segmentation model, and the first object feature of each object is obtained.
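The overlap rule of S1021/S1022 can be sketched as follows, with detected instances represented as (class id, boolean mask) pairs; this representation is an assumption made for illustration.

```python
# Sketch: merge overlapping same-kind instances into one target object;
# keep overlapping instances of different kinds separate (single pass).
import numpy as np

def merge_same_kind_overlaps(instances):
    """instances: list of (class_id, H x W bool mask) pairs."""
    merged = []
    for class_id, mask in instances:
        for i, (m_class, m_mask) in enumerate(merged):
            if m_class == class_id and np.any(mask & m_mask):
                merged[i] = (m_class, m_mask | mask)  # same kind: one target
                break
        else:
            merged.append((class_id, mask.copy()))    # different kind: keep
    return merged
```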
In one possible embodiment, when the step S102 is executed to delete an object feature that does not exist in the second object feature of each object marked in the annotation image from the first divided image to obtain the second divided image, the following steps may be specifically executed:
S1023: deleting object features which do not exist in the second object features of each object marked in the annotation image from the first segmented image to obtain a third segmented image; the object features which do not exist in the second object features are features of the environmental interference information; the third segmented image is an image with the environmental interference information removed.
Deleting the object features that do not exist among the second object features of the objects marked in the annotation image from the first segmented image yields a third segmented image in which part of the content is missing. For example, if the first segmented image contains snowflakes and there is no snowflake feature among the second object features of the objects marked in the annotation image, the snowflake object features are deleted from the first segmented image, and the content at the snowflake positions in the resulting third segmented image is missing.
S1024: for each missing region of missing pixel points left in the third segmented image after the environmental interference information is removed, supplementing the missing region according to the pixel values of the first pixel points in the target range around it on the third segmented image, so as to obtain the second segmented image.
In this embodiment, for each content-missing region in the third segmented image, the missing region is supplemented according to the pixel values of the first pixel points in the target range around it, so that the resulting second segmented image has neither environmental interference information nor missing regions.
In a possible embodiment, when the step S1024 is executed to supplement the missing region according to the pixel value of the first pixel point in the target range around the missing region on the third segmented image, the following steps may be specifically executed:
S10241: based on the central point of the missing region, calculating the difference in pixel value between the first pixel points located on the two sides of the central point on each straight line passing through the central point within the target range, to obtain a difference value corresponding to each straight line.
Fig. 2 is a schematic diagram illustrating a missing region on a third segmented image provided by an embodiment of the present application, and as shown in fig. 2, the target range may be a range on the third segmented image, which is surrounded by a distance from a center point of the missing region to the target length.
Infinitely many straight lines pass through the central point, so in the embodiment of the present application the straight lines passing through the central point within the target range may be sampled at a preset included angle, for example 10 degrees between adjacent lines.
And calculating the difference of pixel values between first pixel points positioned at two sides of the central point on each straight line, wherein the first pixel points are pixel points in a target range on the third segmentation image.
S10242: and determining a target straight line corresponding to the maximum difference from the straight lines according to the difference corresponding to each straight line, taking the direction from large to small of the pixel values of the first pixel points positioned at two sides of the central point on the target straight line as a target direction, and calculating the missing length of the missing area in the target direction.
As shown in fig. 2, in a specific embodiment, the direction indicated by the arrow is the target direction, and the missing length of the missing region in the target direction is calculated; specifically, either the number of pixel points missing in the target direction or the physical missing length of the missing region in the target direction may be used.
S10243: and calculating the ratio of the difference value corresponding to the target straight line to the missing length to obtain the change value of the pixel value of the second pixel point in the missing area in the target direction.
In this embodiment, the second pixel point is a pixel point that needs to be supplemented in the missing region.
S10244: and supplementing the image value of the second image point in the missing region according to the image value of the first pixel point positioned at two sides of the central point on the target straight line, the target direction and the change value.
Illustratively, when the missing length is the number of missing pixel points in the target direction, if the ratio of the difference value corresponding to the target straight line to the missing length is 3, then the pixel values of adjacent second pixel points along the target direction differ by 3; this is the change value of the pixel value in the target direction.
Illustratively, if the pixel values of the first pixel points located on the two sides of the central point on the target straight line are 100 and 115 respectively, and the missing length is 5, then the ratio is 3, and the pixel values of the second pixel points along the target direction are 115, 112, 109, 106, and 103 in sequence, after which the known boundary value 100 follows.
With the pixel values of the second pixel points on the target straight line being 115, 112, 109, 106, and 103 in sequence, the pixel values of the other second pixel points in the missing region, at positions parallel to the target straight line, are determined as shown in fig. 2: second pixel points in the missing region that lie on a line perpendicular to the target straight line take the same pixel value.
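The arithmetic of this example can be reproduced with the short function below; it handles only the one-dimensional fill along the target straight line, while the scanning of candidate lines at preset angles and the copying of values to pixels perpendicular to the target line are omitted.

```python
# Sketch of S10243/S10244 along the target straight line only. With the
# example values (115, 100, length 5) it yields the change value 3 and the
# filled pixel values 115, 112, 109, 106, 103.
def fill_along_target_line(high_value, low_value, missing_length):
    change = (high_value - low_value) / missing_length  # ratio = difference / length
    return [high_value - change * i for i in range(missing_length)]

print(fill_along_target_line(115, 100, 5))  # [115.0, 112.0, 109.0, 106.0, 103.0]
```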
In another possible implementation manner, when step S1024 is executed to supplement the missing region according to the pixel value of the first pixel point in the target range around the missing region on the third segmented image, the following steps may be specifically executed:
S10245: calculating the mean value of the pixel values of the target number of first pixel points adjacent to each edge position of the missing region, and supplementing the edge position with the mean value to obtain a supplemented region.
Fig. 3 is a schematic diagram illustrating another missing region on a third segmented image provided in the embodiment of the present application. As shown in fig. 3, in this embodiment, for each third pixel point on the edge of the missing region, the mean value of the pixel values of the target number of first pixel points adjacent to that third pixel point is calculated, and the edge position is supplemented with the mean value to obtain a supplemented region.
S10246: and removing the supplementary region from the missing region to obtain a target missing region, and judging whether the area of the target missing region is 0.
S10247: if the area of the target missing region is 0, it indicates that the missing region is completed.
S10248: if the area of the target missing region is not 0, taking the target missing region as a new missing region, and continuing to execute the following steps: and calculating the mean value of the pixel values of the first pixel points of the target number adjacent to each edge position on the missing region, and supplementing the edge positions by using the mean values to obtain a supplemented region.
If the area of the target missing region is not 0, it indicates that the missing region is not completely supplemented, and the target missing region needs to be continuously supplemented.
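A numpy sketch of this iterative edge-mean supplement is given below. The "target number" of adjacent first pixel points is not fixed by the application; the sketch assumes the known pixels of the 8-neighborhood.

```python
# Hedged sketch of S10245-S10248: repeatedly fill the edge of the missing
# region with the mean of adjacent known pixels until its area reaches 0.
import numpy as np

def iterative_edge_mean_fill(image: np.ndarray, missing: np.ndarray) -> np.ndarray:
    """image: H x W float array; missing: H x W bool mask of missing pixels."""
    image, missing = image.astype(float).copy(), missing.copy()
    h, w = image.shape
    while missing.any():                          # target missing area not yet 0
        edge_fills = []
        for y, x in zip(*np.nonzero(missing)):
            known = [image[ny, nx]
                     for ny in range(max(0, y - 1), min(h, y + 2))
                     for nx in range(max(0, x - 1), min(w, x + 2))
                     if not missing[ny, nx]]
            if known:                             # pixel lies on the missing edge
                edge_fills.append((y, x, sum(known) / len(known)))
        if not edge_fills:                        # nothing known to average from
            break
        for y, x, value in edge_fills:            # remove the supplemented region
            image[y, x] = value
            missing[y, x] = False
    return image
```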
In one possible embodiment, when the step S101 is executed to acquire the road landscape data set used for training the image segmentation model, the following steps may be specifically executed:
S1011: a plurality of initial road landscape images are acquired.
A plurality of initial road landscape images and the annotation image corresponding to each initial road landscape image are acquired. An initial road landscape image may be one without environmental interference information or one with environmental interference information.
The initial road landscape images may be captured by radar, captured by a camera, or downloaded from a network.
S1012: for each initial road landscape image, performing image processing on the initial road landscape image to generate a sample image corresponding to it; the image processing includes one or more of image flipping, image cropping, and image scaling.
Image flipping turns the initial road landscape image over by a preset flipping angle. Image cropping cuts the initial road landscape image into at least one sample image; the size of the cropped sample image changes, and the position of each object in the sample image changes relative to its position in the initial road landscape image. Image scaling may be random scaling, and the position of each object in the scaled sample image relative to the initial road landscape image also changes.
S1013: and taking the initial road landscape image and the sample image corresponding to the initial road landscape image as the road landscape image in the road landscape data set.
By performing image processing on the initial road landscape image, the number of samples is increased.
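A minimal sketch of this sample-expansion step is shown below using Pillow; the crop window, scale factor, and mirror direction are illustrative choices, and in practice the same transform must also be applied to the corresponding annotation image so that the labels stay aligned.

```python
# Sketch of S1012: generate sample images from one initial road landscape
# image by flipping, cropping, and scaling (parameters are assumptions).
from PIL import Image, ImageOps

def augment(initial: Image.Image) -> list:
    w, h = initial.size
    return [
        ImageOps.mirror(initial),                                # image flipping
        initial.crop((w // 4, h // 4, 3 * w // 4, 3 * h // 4)),  # image cropping
        initial.resize((w // 2, h // 2)),                        # image scaling
    ]
```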
In a possible embodiment, after the image segmentation model is obtained in step S104, the stage of identifying each object contained in a road landscape image to be identified by using the image segmentation model proceeds as follows:
S105: acquiring a road landscape image to be identified.
The image segmentation model performs image feature processing on road landscape images to be identified that are acquired in real time, identifying which parts of the image to be identified are indicator lights, which are traffic signs, and which are obstacles or pedestrians, among others.
In the embodiment of the application, the image segmentation model can be applied to an automatic driving automobile, and a road landscape image to be identified can be acquired in real time through a shooting device in the automatic driving automobile.
S106: inputting the road landscape image to be recognized into an image segmentation model, and performing second image characteristic processing on the road landscape image to be recognized through the image segmentation model; the second image feature processing includes: identifying third object characteristics of each object contained in the road landscape image to be identified, performing network semantic segmentation on each object contained in the road landscape image to be identified according to the third object characteristics to obtain a third segmentation image, comparing the types of each object obtained by segmentation contained in the third segmentation image with preset types, and deleting object characteristics corresponding to objects which do not exist in the preset types from each object contained in the third segmentation image to obtain a target segmentation image.
The third segmented image is obtained by performing network semantic segmentation on the road landscape image to be identified through the image segmentation model, that is, by labeling the objects of the various categories in the image through the model; specifically, the image segmentation model labels each object with a color fitted to the object's shape.
The preset categories may be categories other than environmental interference information, such as vehicle, building, pedestrian, obstacle, and road; they do not include environmental interference such as snowflakes, raindrops, or sand grains. The categories of the segmented objects contained in the third segmented image are compared with the preset categories, and the object features corresponding to objects whose categories do not exist among the preset categories are deleted from the objects contained in the third segmented image to obtain the target segmented image. That is, object features corresponding to objects such as snowflakes, raindrops, and sand grains are deleted, so that the resulting target segmented image contains no environmental interference information.
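The inference-stage filtering of S106 can be sketched as below, again assuming integer class masks; the preset category ids are hypothetical.

```python
# Sketch: keep only preset categories (vehicle, building, pedestrian,
# obstacle, road, ...) in the third segmented image; everything else,
# such as snowflakes, raindrops, and sand grains, is deleted.
import numpy as np

PRESET_CLASS_IDS = np.array([1, 2, 3, 4, 5])  # hypothetical preset categories

def filter_to_preset(third_seg: np.ndarray, background: int = 0) -> np.ndarray:
    target_seg = third_seg.copy()
    target_seg[~np.isin(third_seg, PRESET_CLASS_IDS)] = background
    return target_seg
```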
Example two:
based on the same technical concept, an embodiment of the present application further provides an image segmentation model training device, and fig. 4 shows a schematic structural diagram of the image segmentation model training device provided in the embodiment of the present application, and as shown in fig. 4, the device includes:
a first obtaining module 401, configured to obtain a road landscape data set used for training an image segmentation model; the road-landscape data set includes: the road landscape image without environmental interference information, the road landscape image with the environmental interference information and the label image corresponding to each road landscape image are obtained; the labeled image is an image for labeling the types of different objects in the road landscape image;
a first input module 402, configured to input the road-landscape image into an initial image segmentation model, and perform first image feature processing on the road-landscape image through the initial image segmentation model; the first image feature processing includes: identifying a first object feature of each object contained in the road landscape image, performing network semantic segmentation on each object contained in the road landscape image according to the first object feature to obtain a first segmented image, comparing the first segmented image with the labeled image corresponding to the first segmented image, and deleting object features which do not exist in a second object feature of each object labeled in the labeled image from the first segmented image to obtain a second segmented image;
a first calculating module 403, configured to perform loss function calculation using the second segmented image and the labeled image corresponding to the second segmented image to obtain a loss value;
a training module 404, configured to perform back propagation training on the initial image segmentation model by using the loss value, so as to adjust a learnable parameter in the initial image segmentation model until the learnable parameter completes convergence, and then end model training to obtain the image segmentation model.
Optionally, the road landscape image includes an overlapped object;
the first input module 402, when configured to identify a first object feature of each object included in the road-scene image, is specifically configured to:
when the overlapped objects are objects of the same kind, taking the overlapped objects as one target object, and identifying a first object feature of the target object through the initial image segmentation model;
when the overlapped objects are different types of objects, identifying each object in the overlapped objects respectively through the initial image segmentation model to obtain a first object characteristic of each object.
Optionally, when the first input module 402 is configured to delete, from the first divided image, an object feature that does not exist in the second object feature of each object marked in the marked image, so as to obtain a second divided image, the first input module is specifically configured to:
deleting object features which do not exist in the second object features of each object marked in the marked image from the first divided image to obtain a third divided image; the object features which do not exist in the second object features are features of environmental interference information; the third segmentation image is an image from which the environmental interference information is removed;
and supplementing the missing region according to the pixel value of the first pixel point in the target range around the missing region on the third segmentation image aiming at the missing region of the missing pixel point after the environmental interference information is removed in the third segmentation image, so as to obtain the second segmentation image.
Optionally, when the first input module 402 is configured to supplement the missing region according to the pixel value of the first pixel point in the target range around the missing region on the third segmented image, specifically configured to:
calculating the difference of pixel values between the first pixel points on two sides of the central point on a straight line passing through the central point in the target range based on the central point of the missing region to obtain a difference value corresponding to each straight line;
determining a target straight line corresponding to the largest difference value from the straight lines according to the difference value corresponding to each straight line, taking the direction from large to small of the pixel values of the first pixel points positioned on the two sides of the central point on the target straight line as a target direction, and calculating the missing length of the missing area in the target direction;
calculating the ratio of the difference value corresponding to the target straight line to the missing length to obtain the change value of the pixel value of a second pixel point in the missing area in the target direction;
and supplementing the pixel values of the second pixel points in the missing region according to the pixel values of the first pixel points located on the two sides of the central point on the target straight line, the target direction, and the change value.
Optionally, when the first input module 402 is configured to supplement the missing region according to the pixel values of the first pixel points within the target range around the missing region on the third segmented image, it may alternatively be specifically configured to:
calculating, for each edge position of the missing region, the average value of the pixel values of the adjacent target number of first pixel points, and supplementing the edge position with the average value, to obtain a supplemented region;
removing the supplemented region from the missing region to obtain a target missing region, and judging whether the area of the target missing region is 0;
if the area of the target missing region is 0, this indicates that the missing region has been completely supplemented;
if the area of the target missing region is not 0, taking the target missing region as a new missing region and returning to the step of calculating, for each edge position of the missing region, the average value of the pixel values of the adjacent target number of first pixel points and supplementing the edge position with the average value to obtain a supplemented region.
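This alternative can be sketched as an inward-filling loop: each pass fills every hole pixel adjoining enough valid neighbours with their average, then shrinks the missing region accordingly until its area reaches 0. The 8-neighbourhood window and the target_number default are our assumptions.

    import numpy as np

    def fill_by_edge_average(img, hole, target_number=3):
        out = img.astype(float).copy()
        missing = hole.copy()
        while missing.any():                      # stop when the area reaches 0
            to_fill = []
            for y, x in np.argwhere(missing):
                ys = slice(max(y - 1, 0), y + 2)  # 8-neighbourhood window
                xs = slice(max(x - 1, 0), x + 2)
                valid = ~missing[ys, xs]          # neighbours already holding values
                if valid.sum() >= target_number:  # an edge position of the region
                    to_fill.append((y, x, out[ys, xs][valid].mean()))
            if not to_fill:                       # nothing fillable: avoid an endless loop
                break
            for y, x, v in to_fill:               # apply the supplemented region, then
                out[y, x] = v                     # remove it from the missing region
                missing[y, x] = False             # (this yields the new missing region)
        return out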
Optionally, when the first obtaining module 401 is configured to obtain a road landscape data set used for training an image segmentation model, it is specifically configured to:
acquiring a plurality of initial road landscape images;
for each initial road landscape image, carrying out image processing on the initial road landscape image to generate a sample image corresponding to the initial road landscape image; the image processing includes: one or more of image flipping, image cropping, and image scaling;
and taking the initial road landscape images and the sample images corresponding to them as road landscape images in the road landscape data set.
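A minimal sketch of this augmentation step, using torchvision transforms on PIL images; the flip probability, crop size, and helper name are illustrative values, not ones specified by the application. Note that for segmentation training the identical geometric transform would also have to be applied to the corresponding labeled image so that the masks stay aligned.

    from torchvision import transforms

    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(p=0.5),  # image flipping
        transforms.RandomResizedCrop(size=512),  # image cropping + image scaling
    ])

    def expand_dataset(initial_images):
        """Return each initial road landscape image plus one generated sample image."""
        samples = []
        for img in initial_images:               # `img` is a PIL image
            samples.append(img)                  # keep the initial image
            samples.append(augment(img))         # add its augmented sample image
        return samples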
Optionally, the apparatus further includes:
a second obtaining module, configured to obtain a road landscape image to be identified after the training module 404 obtains the image segmentation model;
and a second input module, configured to input the road landscape image to be identified into the image segmentation model and perform second image feature processing on it through the image segmentation model; the second image feature processing includes: identifying the third object feature of each object contained in the road landscape image to be identified; performing network semantic segmentation on each object contained in the road landscape image to be identified according to the third object features, to obtain a third segmentation image; comparing the type of each segmented object contained in the third segmentation image with preset types; and deleting, from the objects contained in the third segmentation image, the object features corresponding to objects whose types do not exist among the preset types, to obtain a target segmentation image.
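A minimal sketch of this inference-time filtering, assuming the trained model returns a per-pixel class map and that preset_types is the set of class ids of interest; the background id 0 used for deleted objects is our assumption.

    import numpy as np

    def segment_and_filter(model, image, preset_types):
        third_seg = model(image)                       # per-pixel class ids
        keep = np.isin(third_seg, list(preset_types))  # types among the preset types
        return np.where(keep, third_seg, 0)            # delete the rest -> target image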
For the specific implementation steps and principles, reference is made to the description of the first embodiment, which is not repeated herein.
Embodiment three:
Based on the same technical concept, an embodiment of the present application further provides an electronic device. Fig. 5 shows a schematic structural diagram of the electronic device provided in the embodiment of the present application. As shown in fig. 5, the electronic device 500 includes: a processor 501, a memory 502 and a bus 503, the memory storing machine-readable instructions executable by the processor; when the electronic device is running, the processor 501 and the memory 502 communicate through the bus 503, and the processor 501 executes the machine-readable instructions to perform the steps of the method described in the first embodiment.
For the specific implementation steps and principles, reference is made to the description of the first embodiment, which is not repeated herein.
Embodiment four:
Based on the same technical concept, a fourth embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the method steps of the first embodiment.
For the specific implementation steps and principles, reference is made to the description of the first embodiment, which is not repeated herein.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative: the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application, or the portion thereof that substantially contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present application, used to illustrate the technical solutions of the present application rather than to limit them, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the technical field can still modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent substitutions for some of their technical features within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application and shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An image segmentation model training method is characterized by comprising the following steps:
acquiring a road landscape data set for training an image segmentation model; the road landscape data set includes: road landscape images without environmental interference information, road landscape images with environmental interference information, and a labeled image corresponding to each road landscape image; the labeled image is an image in which the types of the different objects in the road landscape image are labeled;
inputting the road landscape image into an initial image segmentation model, and performing first image feature processing on the road landscape image through the initial image segmentation model; the first image feature processing includes: identifying a first object feature of each object contained in the road landscape image, performing network semantic segmentation on each object contained in the road landscape image according to the first object feature to obtain a first segmented image, comparing the first segmented image with the labeled image corresponding to the first segmented image, and deleting, from the first segmented image, object features which do not exist among the second object features of each object labeled in the labeled image, to obtain a second segmented image;
performing loss function calculation by using the second segmented image and the labeled image corresponding to the second segmented image, to obtain a loss value;
and performing back propagation training on the initial image segmentation model by using the loss value so as to adjust the learnable parameters in the initial image segmentation model, ending model training once the learnable parameters have converged, to obtain the image segmentation model.
2. The image segmentation model training method according to claim 1, wherein the road landscape image includes overlapped objects;
the identifying a first object feature of each object included in the road-scene image includes:
when the overlapped objects are objects of the same kind, taking the overlapped objects as a single target object, and identifying the first object feature of the target object through the initial image segmentation model;
when the overlapped objects are objects of different kinds, identifying each object among the overlapped objects separately through the initial image segmentation model, to obtain the first object feature of each object.
3. The image segmentation model training method according to claim 1, wherein the deleting, from the first segmented image, object features which do not exist among the second object features of each object labeled in the labeled image, to obtain a second segmented image, comprises:
deleting, from the first segmented image, object features that do not exist among the second object features of each object labeled in the labeled image, to obtain a third segmented image; the object features absent from the second object features are features of environmental interference information, so the third segmented image is an image from which the environmental interference information has been removed;
and, for each missing region of the third segmented image in which pixel points are absent after the environmental interference information is removed, supplementing the missing region according to the pixel values of the first pixel points within a target range around the missing region, to obtain the second segmented image.
4. The image segmentation model training method according to claim 3, wherein the supplementing the missing region according to the pixel values of the first pixel points within the target range around the missing region on the third segmented image comprises:
calculating, for each straight line passing through the central point of the missing region, the difference between the pixel values of the first pixel points located on the two sides of the central point within the target range, to obtain a difference value corresponding to each straight line;
determining, according to the difference value corresponding to each straight line, the target straight line corresponding to the largest difference value, taking as the target direction the direction along the target straight line in which the pixel values of the first pixel points on the two sides of the central point go from large to small, and calculating the missing length of the missing region in the target direction;
calculating the ratio of the difference value corresponding to the target straight line to the missing length, to obtain the change value of the pixel values of the second pixel points in the missing region along the target direction;
and supplementing the pixel values of the second pixel points in the missing region according to the pixel values of the first pixel points located on the two sides of the central point on the target straight line, the target direction, and the change value.
5. The image segmentation model training method according to claim 3, wherein the supplementing the missing region according to the pixel values of the first pixel points within the target range around the missing region on the third segmented image comprises:
calculating, for each edge position of the missing region, the average value of the pixel values of the adjacent target number of first pixel points, and supplementing the edge position with the average value, to obtain a supplemented region;
removing the supplemented region from the missing region to obtain a target missing region, and judging whether the area of the target missing region is 0;
if the area of the target missing region is 0, this indicates that the missing region has been completely supplemented;
if the area of the target missing region is not 0, taking the target missing region as a new missing region and returning to the step of calculating, for each edge position of the missing region, the average value of the pixel values of the adjacent target number of first pixel points and supplementing the edge position with the average value to obtain a supplemented region.
6. The image segmentation model training method according to claim 1, wherein the acquiring a road landscape data set for training an image segmentation model comprises:
acquiring a plurality of initial road landscape images;
for each initial road landscape image, carrying out image processing on the initial road landscape image to generate a sample image corresponding to the initial road landscape image; the image processing includes: one or more of image flipping, image cropping, and image scaling;
and taking the initial road landscape images and the sample images corresponding to them as road landscape images in the road landscape data set.
7. The method for training the image segmentation model according to claim 1, wherein after obtaining the image segmentation model, the method further comprises:
acquiring a road landscape image to be identified;
inputting the road landscape image to be identified into the image segmentation model, and performing second image feature processing on the road landscape image to be identified through the image segmentation model; the second image feature processing includes: identifying the third object feature of each object contained in the road landscape image to be identified, performing network semantic segmentation on each object contained in the road landscape image to be identified according to the third object features to obtain a third segmentation image, comparing the type of each segmented object contained in the third segmentation image with preset types, and deleting, from the objects contained in the third segmentation image, the object features corresponding to objects whose types do not exist among the preset types, to obtain a target segmentation image.
8. An image segmentation model training device, comprising:
the first obtaining module is configured to acquire a road landscape data set for training an image segmentation model; the road landscape data set includes: road landscape images without environmental interference information, road landscape images with environmental interference information, and a labeled image corresponding to each road landscape image; the labeled image is an image in which the types of the different objects in the road landscape image are labeled;
the first input module is configured to input the road landscape image into an initial image segmentation model and perform first image feature processing on the road landscape image through the initial image segmentation model; the first image feature processing includes: identifying a first object feature of each object contained in the road landscape image, performing network semantic segmentation on each object contained in the road landscape image according to the first object feature to obtain a first segmented image, comparing the first segmented image with the labeled image corresponding to the first segmented image, and deleting, from the first segmented image, object features which do not exist among the second object features of each object labeled in the labeled image, to obtain a second segmented image;
the first calculation module is configured to perform loss function calculation by using the second segmented image and the labeled image corresponding to the second segmented image, to obtain a loss value;
and the training module is configured to perform back propagation training on the initial image segmentation model by using the loss value so as to adjust the learnable parameters in the initial image segmentation model, ending model training once the learnable parameters have converged, to obtain the image segmentation model.
9. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the image segmentation model training method according to any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, performs the steps of the image segmentation model training method according to any one of claims 1 to 7.
CN202210015383.4A 2022-01-07 2022-01-07 Image segmentation model training method and device, electronic equipment and readable storage medium Active CN114359233B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210015383.4A CN114359233B (en) 2022-01-07 2022-01-07 Image segmentation model training method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210015383.4A CN114359233B (en) 2022-01-07 2022-01-07 Image segmentation model training method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN114359233A (en) 2022-04-15
CN114359233B CN114359233B (en) 2024-04-02

Family

ID=81107419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210015383.4A Active CN114359233B (en) 2022-01-07 2022-01-07 Image segmentation model training method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114359233B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117689660A (en) * 2024-02-02 2024-03-12 杭州百子尖科技股份有限公司 Vacuum cup temperature quality inspection method based on machine vision
CN117689660B (en) * 2024-02-02 2024-05-14 杭州百子尖科技股份有限公司 Vacuum cup temperature quality inspection method based on machine vision

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084095A (en) * 2019-03-12 2019-08-02 浙江大华技术股份有限公司 Method for detecting lane lines, lane detection device and computer storage medium
CN110189341A (en) * 2019-06-05 2019-08-30 北京青燕祥云科技有限公司 A kind of method, the method and device of image segmentation of Image Segmentation Model training
CN110246142A (en) * 2019-06-14 2019-09-17 深圳前海达闼云端智能科技有限公司 A kind of method, terminal and readable storage medium storing program for executing detecting barrier
WO2021189847A1 (en) * 2020-09-03 2021-09-30 平安科技(深圳)有限公司 Training method, apparatus and device based on image classification model, and storage medium
EP3910590A2 (en) * 2021-03-31 2021-11-17 Beijing Baidu Netcom Science Technology Co., Ltd. Method and apparatus of processing image, electronic device, and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Feng Yushan; Wang Zilei: "Fine-grained image classification via top-down attention map segmentation", Journal of Image and Graphics (中国图象图形学报), no. 09, 16 September 2016 (2016-09-16) *
Qing Chen; Yu Jing; Xiao Chuangbai; Duan Juan: "Research progress on image semantic segmentation using deep convolutional neural networks", Journal of Image and Graphics (中国图象图形学报), no. 06, 16 June 2020 (2020-06-16) *

Also Published As

Publication number Publication date
CN114359233B (en) 2024-04-02

Similar Documents

Publication Publication Date Title
CN110069986B (en) Traffic signal lamp identification method and system based on hybrid model
CN109508580B (en) Traffic signal lamp identification method and device
US10074020B2 (en) Vehicular lane line data processing method, apparatus, storage medium, and device
US8611585B2 (en) Clear path detection using patch approach
US8452053B2 (en) Pixel-based texture-rich clear path detection
CN110619279B (en) Road traffic sign instance segmentation method based on tracking
CN112991791B (en) Traffic information identification and intelligent driving method, device, equipment and storage medium
US20090295917A1 (en) Pixel-based texture-less clear path detection
US20100098297A1 (en) Clear path detection using segmentation-based method
CN111580131B (en) Method for identifying vehicles on expressway by three-dimensional laser radar intelligent vehicle
CN110188482B (en) Test scene creating method and device based on intelligent driving
JP2002083297A (en) Object recognition method and object recognition device
CN114413881A (en) Method and device for constructing high-precision vector map and storage medium
CN111382625A (en) Road sign identification method and device and electronic equipment
CN112990293A (en) Point cloud marking method and device and electronic equipment
CN114841910A (en) Vehicle-mounted lens shielding identification method and device
CN117576652B (en) Road object identification method and device, storage medium and electronic equipment
JP5327241B2 (en) Object identification device
CN114359233B (en) Image segmentation model training method and device, electronic equipment and readable storage medium
CN111126154A (en) Method and device for identifying road surface element, unmanned equipment and storage medium
CN114693722B (en) Vehicle driving behavior detection method, detection device and detection equipment
Ab Ghani et al. Lane Detection Using Deep Learning for Rainy Conditions
CN114972731A (en) Traffic light detection and identification method and device, moving tool and storage medium
CN114359859A (en) Method and device for processing target object with shielding and storage medium
CN112418183A (en) Parking lot element extraction method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant