CN114359233B - Image segmentation model training method and device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN114359233B
CN114359233B
Authority
CN
China
Prior art keywords
image
road landscape
segmentation model
segmented
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210015383.4A
Other languages
Chinese (zh)
Other versions
CN114359233A (en)
Inventor
Xu Chunlei (许春磊)
Wang Chao (王超)
Ma Weishi (马维士)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Huayuan Information Technology Co Ltd
Original Assignee
Beijing Huayuan Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Huayuan Information Technology Co Ltd filed Critical Beijing Huayuan Information Technology Co Ltd
Priority to CN202210015383.4A priority Critical patent/CN114359233B/en
Publication of CN114359233A publication Critical patent/CN114359233A/en
Application granted granted Critical
Publication of CN114359233B publication Critical patent/CN114359233B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application provides an image segmentation model training method and device, an electronic device, and a readable storage medium. The method comprises the following steps: acquiring a road landscape data set for training an image segmentation model; inputting a road landscape image into an initial image segmentation model, performing network semantic segmentation on the road landscape image according to the first object features identified for each object contained in the image to obtain a first segmented image, and deleting, from the first segmented image, object features that are not present among the second object features of the objects labeled in the labeling image, to obtain a second segmented image; performing a loss function calculation using the second segmented image and its corresponding labeling image to obtain a loss value; and performing back-propagation training on the initial image segmentation model using the loss value to adjust the learnable parameters of the model. By this method, the recognition accuracy of the image segmentation model is improved.

Description

Image segmentation model training method and device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image segmentation model training method and apparatus, an electronic device, and a readable storage medium.
Background
While an autonomous vehicle is driving, the captured road landscape images must undergo semantic segmentation so that the different objects they contain can be distinguished and identified, for example identifying, in a road landscape image, which regions are indicator lights, which are traffic signs, and which are obstacles or pedestrians.
However, when an autonomous vehicle drives in dusty, rainy, or snowy weather, the captured road landscape images may contain environmental interference information (such as sand, dust, rain, snow, or extreme illumination). This interference degrades the clarity of the road landscape image, impairs the distinction and identification of the objects it contains, and lowers recognition accuracy.
Disclosure of Invention
In view of the foregoing, an object of the present application is to provide an image segmentation model training method, an image segmentation model training device, an electronic device and a readable storage medium, so as to improve accuracy of identifying objects in a road landscape image by using an image segmentation model.
In a first aspect, an embodiment of the present application provides an image segmentation model training method, including:
acquiring a road landscape data set for training an image segmentation model; the road landscape data set comprises: the road landscape image without environment interference information, the road landscape image with environment interference information and the annotation image corresponding to each road landscape image; the labeling image is an image for labeling the categories of different types of objects in the road landscape image;
inputting the road landscape image into an initial image segmentation model, and performing first image feature processing on the road landscape image through the initial image segmentation model; the first image feature processing includes: identifying first object features of each object contained in the road landscape image, performing network semantic segmentation on each object contained in the road landscape image according to the first object features to obtain a first segmented image, comparing the first segmented image with the labeling image corresponding to the first segmented image, and deleting object features which are not present in the second object features of each object labeled in the labeling image from the first segmented image to obtain a second segmented image;
Performing loss function calculation by using the second segmented image and the labeling image corresponding to the second segmented image to obtain a loss value;
and performing back-propagation training on the initial image segmentation model using the loss value so as to adjust the learnable parameters in the initial image segmentation model, and ending model training after the learnable parameters converge, to obtain the image segmentation model.
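The training loop of the first aspect (segment, delete features absent from the labeling image, compute a loss, back-propagate) can be sketched with a toy per-pixel linear classifier. The model, the feature matrix, and the realization of "deleting absent object features" as class masking are all illustrative assumptions, not the patent's actual network:

```python
import numpy as np

def train_step(W, feats, label, lr=0.1):
    """One hypothetical training step: forward pass, deletion of classes
    absent from the labeling image, cross-entropy loss, back-propagation.

    feats : (N, F) per-pixel features of the road landscape image
    label : (N,)   per-pixel class ids from the labeling image
    W     : (F, C) learnable parameters of a toy linear segmentation head
    """
    logits = feats @ W                          # "first segmented image" (per-pixel scores)
    mask = np.full(W.shape[1], -np.inf)
    mask[np.unique(label)] = 0.0                # classes actually present in the labeling image
    logits = logits + mask                      # delete features absent from the labeling image
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    loss = -np.log(p[np.arange(len(label)), label] + 1e-12).mean()
    grad = p.copy()                             # back-propagation for the linear head
    grad[np.arange(len(label)), label] -= 1.0
    W -= lr * (feats.T @ grad) / len(label)     # adjust the learnable parameters in place
    return loss
```

Repeated calls drive the loss down until the parameters converge, at which point training ends.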
With reference to the first aspect, the embodiments of the present application provide a first possible implementation manner of the first aspect, where the road landscape image includes an overlapped object;
the identifying a first object feature of each object contained in the road-scene image includes:
when the overlapped objects are objects of the same kind, taking the overlapped objects as one target object, and identifying the first object features of the target object through the initial image segmentation model;
when the overlapped objects are different kinds of objects, respectively identifying each object in the overlapped objects through the initial image segmentation model to obtain first object characteristics of each object.
With reference to the first aspect, an embodiment of the present application provides a second possible implementation manner of the first aspect, where deleting, from the first segmented image, object features that are not present in the second object features of each object labeled in the labeling image, to obtain a second segmented image includes:
deleting, from the first segmented image, object features that are not present in the second object features of each object labeled in the labeling image, to obtain a third segmented image; the object features not present in the second object features are features of environmental interference information, so the third segmented image is an image from which the environmental interference information has been removed;
and supplementing the missing region (the pixels left vacant in the third segmented image after the environmental interference information is removed) according to the pixel values of first pixel points within a target range around the missing region on the third segmented image, to obtain the second segmented image.
With reference to the second possible implementation manner of the first aspect, the embodiment of the present application provides a third possible implementation manner of the first aspect, wherein the supplementing the missing area according to the pixel values of the first pixel points in the target range around the missing area on the third segmented image includes:
for each straight line that passes through the center point of the missing region, calculating the difference between the pixel values of the first pixel points located within the target range on the two sides of the center point, to obtain a difference value corresponding to each straight line;
determining, from among the straight lines, the target straight line corresponding to the largest difference value; taking, as a target direction, the direction along the target straight line running from the larger to the smaller of the pixel values of the first pixel points on the two sides of the center point; and calculating the missing length of the missing region in the target direction;
calculating the ratio of the difference value corresponding to the target straight line to the missing length to obtain a change value of the pixel value of a second pixel point in the missing area in the target direction;
and supplementing the pixel values of the second pixel points in the missing region according to the pixel values of the first pixel points located on the two sides of the center point on the target straight line, the target direction, and the change value.
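A minimal sketch of this directional fill, under simplifying assumptions: only the horizontal and vertical lines through the region's center are compared (the patent considers every line through the center point), the missing region is treated as rectangular, and all names are hypothetical:

```python
import numpy as np

def fill_region_by_gradient(img, mask):
    """Fill a missing region along the line with the largest pixel-value
    difference, interpolating linearly by the per-pixel change value.

    img  : 2-D float array (the third segmented image)
    mask : 2-D bool array, True where pixels are missing
    """
    ys, xs = np.nonzero(mask)
    top, bot = ys.min(), ys.max()
    left, right = xs.min(), xs.max()
    cy, cx = (top + bot) // 2, (left + right) // 2        # center point
    # first pixel points on both sides of the center, per candidate line
    h = (img[cy, left - 1], img[cy, right + 1])           # horizontal line
    v = (img[top - 1, cx], img[bot + 1, cx])              # vertical line
    if abs(h[0] - h[1]) >= abs(v[0] - v[1]):              # target straight line
        span = right - left + 2                           # distance between boundary pixels
        step = (h[1] - h[0]) / span                       # change value per pixel
        for y, x in zip(ys, xs):
            img[y, x] = h[0] + step * (x - (left - 1))
    else:
        span = bot - top + 2
        step = (v[1] - v[0]) / span
        for y, x in zip(ys, xs):
            img[y, x] = v[0] + step * (y - (top - 1))
    return img
```

On an image that varies smoothly in one direction, this reconstructs the gradient across the hole rather than flattening it.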
With reference to the second possible implementation manner of the first aspect, the embodiment of the present application provides a fourth possible implementation manner of the first aspect, wherein the supplementing the missing area according to the pixel values of the first pixel points in the target range around the area on the third segmented image includes:
for each edge position on the missing region, calculating the average of the pixel values of a target number of first pixel points adjacent to that edge position, and supplementing the edge position with the average to obtain a supplementary region;
Removing the supplementary region from the missing region to obtain a target missing region, and judging whether the area of the target missing region is 0;
if the area of the target missing region is 0, the missing region is completely supplemented;
if the area of the target missing region is not 0, taking the target missing region as a new missing region and repeating the step of calculating, for each edge position on the missing region, the average of the pixel values of the target number of first pixel points adjacent to that edge position and supplementing the edge position with the average to obtain a supplementary region.
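The iterative edge-averaging procedure above can be sketched as follows. This is a hypothetical minimal implementation that uses the known 4-neighbours of each edge position as the "target number of adjacent first pixel points" and loops until the area of the target missing region reaches 0:

```python
import numpy as np

def fill_region_by_edge_mean(img, mask):
    """Repeatedly fill the edge of the missing region with the mean of its
    already-known neighbours, shrinking the region until its area is 0.
    Assumes the missing region does not cover the whole image."""
    mask = mask.copy()
    while mask.any():                                # area of target missing region > 0
        filled = []
        for y, x in zip(*np.nonzero(mask)):
            # known (non-missing) 4-neighbours of this position
            vals = [img[yy, xx]
                    for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= yy < img.shape[0] and 0 <= xx < img.shape[1]
                    and not mask[yy, xx]]
            if vals:                                 # position lies on the region's edge
                img[y, x] = sum(vals) / len(vals)    # average of adjacent first pixel points
                filled.append((y, x))
        for y, x in filled:                          # remove the supplementary region
            mask[y, x] = False
    return img
```

Each pass fills one ring of the hole, so interior pixels are reached after a few iterations.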
With reference to the first aspect, the embodiments of the present application provide a fifth possible implementation manner of the first aspect, wherein obtaining a road landscape data set for training an image segmentation model includes:
acquiring a plurality of initial road landscape images;
performing image processing on the initial road landscape image aiming at each initial road landscape image to generate a sample image corresponding to the initial road landscape image; the image processing includes: one or more of image flipping, image cropping, image scaling;
And taking the sample image corresponding to the initial road landscape image as the road landscape image in the road landscape data set.
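The augmentation above (flipping, cropping, scaling) might be sketched as below. The crop fraction, flip probability, and nearest-neighbour rescale are illustrative choices, and a real pipeline would apply identical transforms to the labeling image so labels stay aligned:

```python
import numpy as np

def augment(img, rng):
    """Generate one sample image from an initial road landscape image by
    flipping, cropping, and scaling (all parameters hypothetical)."""
    out = img[:, ::-1] if rng.random() < 0.5 else img   # image flipping (horizontal)
    h, w = out.shape[:2]
    y0 = int(rng.integers(0, h // 4 + 1))               # image cropping: keep a 3/4 window
    x0 = int(rng.integers(0, w // 4 + 1))
    out = out[y0:y0 + 3 * h // 4, x0:x0 + 3 * w // 4]
    iy = np.linspace(0, out.shape[0] - 1, h).round().astype(int)
    ix = np.linspace(0, out.shape[1] - 1, w).round().astype(int)
    return out[np.ix_(iy, ix)]                          # image scaling back to h x w
```

Each call yields a differently transformed sample of the same size as the input, suitable for adding to the road landscape data set.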
With reference to the first aspect, an embodiment of the present application provides a sixth possible implementation manner of the first aspect, where after the obtaining the image segmentation model, the method further includes:
acquiring a road landscape image to be identified;
inputting the road landscape image to be identified into the image segmentation model, and performing second image feature processing on it through the image segmentation model; the second image feature processing includes: identifying third object features of each object contained in the road landscape image to be identified; performing network semantic segmentation on each object contained in the image according to the third object features to obtain a third segmented image; comparing the categories of the segmented objects contained in the third segmented image with preset categories; and deleting, from the objects contained in the third segmented image, the object features corresponding to objects not belonging to the preset categories, to obtain a target segmented image.
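This deployment-time filtering step can be sketched as below, with the segmentation result represented as a per-pixel class-id map. The preset category ids and the use of a background id to "delete" an object are assumptions for illustration:

```python
import numpy as np

# hypothetical preset categories: 0 = road, 1 = vehicle, 2 = pedestrian, 3 = sign
PRESET_CATEGORIES = {0, 1, 2, 3}
BACKGROUND = 0

def filter_to_presets(third_seg, presets=PRESET_CATEGORIES):
    """Delete segmented objects whose category is not among the preset
    categories, yielding the target segmented image."""
    target = third_seg.copy()
    target[~np.isin(third_seg, list(presets))] = BACKGROUND
    return target
```

Anything the model segments outside the preset categories (for example, residual interference) is suppressed before the result is handed to downstream decision-making.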
In a second aspect, an embodiment of the present application further provides an image segmentation model training apparatus, including:
the first acquisition module is used for acquiring a road landscape data set for training an image segmentation model; the road landscape data set comprises: the road landscape image without environment interference information, the road landscape image with environment interference information and the annotation image corresponding to each road landscape image; the labeling image is an image for labeling the categories of different types of objects in the road landscape image;
the first input module is used for inputting the road landscape image into an initial image segmentation model, and performing first image feature processing on the road landscape image through the initial image segmentation model; the first image feature processing includes: identifying first object features of each object contained in the road landscape image, performing network semantic segmentation on each object contained in the road landscape image according to the first object features to obtain a first segmented image, comparing the first segmented image with the labeling image corresponding to the first segmented image, and deleting object features which are not present in the second object features of each object labeled in the labeling image from the first segmented image to obtain a second segmented image;
The calculation module is used for carrying out loss function calculation by using the second segmentation image and the labeling image corresponding to the second segmentation image to obtain a loss value;
and the training module is used for carrying out back propagation training on the initial image segmentation model by utilizing the loss value so as to adjust the learnable parameters in the initial image segmentation model, and ending model training after the learnable parameters are converged, so as to obtain the image segmentation model.
With reference to the second aspect, embodiments of the present application provide a first possible implementation manner of the second aspect, where the road landscape image includes an overlapped object;
the first input module is specifically configured to, when used for identifying a first object feature of each object included in the road landscape image:
when the overlapped objects are objects of the same kind, taking the overlapped objects as one target object, and identifying the first object features of the target object through the initial image segmentation model;
when the overlapped objects are different kinds of objects, respectively identifying each object in the overlapped objects through the initial image segmentation model to obtain first object characteristics of each object.
With reference to the second aspect, an embodiment of the present application provides a second possible implementation manner of the second aspect, where the first input module, when configured to delete from the first segmented image the object features that are not present in the second object features of each object labeled in the labeling image to obtain a second segmented image, is specifically configured to:
delete, from the first segmented image, object features that are not present in the second object features of each object labeled in the labeling image, to obtain a third segmented image; the object features not present in the second object features are features of environmental interference information, so the third segmented image is an image from which the environmental interference information has been removed;
and supplement the missing region (the pixels left vacant in the third segmented image after the environmental interference information is removed) according to the pixel values of first pixel points within a target range around the missing region on the third segmented image, to obtain the second segmented image.
With reference to the second possible implementation manner of the second aspect, the embodiment of the present application provides a third possible implementation manner of the second aspect, where the first input module is configured to, when performing the supplementing of the missing area according to a pixel value of a first pixel point in a target range around the missing area on the third segmented image, specifically:
for each straight line that passes through the center point of the missing region, calculating the difference between the pixel values of the first pixel points located within the target range on the two sides of the center point, to obtain a difference value corresponding to each straight line;
determining, from among the straight lines, the target straight line corresponding to the largest difference value; taking, as a target direction, the direction along the target straight line running from the larger to the smaller of the pixel values of the first pixel points on the two sides of the center point; and calculating the missing length of the missing region in the target direction;
calculating the ratio of the difference value corresponding to the target straight line to the missing length to obtain a change value of the pixel value of a second pixel point in the missing area in the target direction;
and supplementing the pixel values of the second pixel points in the missing region according to the pixel values of the first pixel points located on the two sides of the center point on the target straight line, the target direction, and the change value.
With reference to the second possible implementation manner of the second aspect, the embodiment of the present application provides a fourth possible implementation manner of the second aspect, where the first input module is configured to, when performing supplementing the missing region according to a pixel value of a first pixel point in a target range around the missing region on the third segmented image, specifically:
for each edge position on the missing region, calculating the average of the pixel values of a target number of first pixel points adjacent to that edge position, and supplementing the edge position with the average to obtain a supplementary region;
removing the supplementary region from the missing region to obtain a target missing region, and judging whether the area of the target missing region is 0;
if the area of the target missing region is 0, the missing region is completely supplemented;
if the area of the target missing region is not 0, taking the target missing region as a new missing region and repeating the step of calculating, for each edge position on the missing region, the average of the pixel values of the target number of first pixel points adjacent to that edge position and supplementing the edge position with the average to obtain a supplementary region.
With reference to the second aspect, embodiments of the present application provide a fifth possible implementation manner of the second aspect, where the first obtaining module is specifically configured to, when used to obtain a road landscape data set for training an image segmentation model:
acquiring a plurality of initial road landscape images;
Performing image processing on the initial road landscape image aiming at each initial road landscape image to generate a sample image corresponding to the initial road landscape image; the image processing includes: one or more of image flipping, image cropping, image scaling;
and taking the sample image corresponding to the initial road landscape image as the road landscape image in the road landscape data set.
With reference to the second aspect, embodiments of the present application provide a sixth possible implementation manner of the second aspect, where the apparatus further includes:
the second acquisition module is used for acquiring a road landscape image to be identified after the training module obtains the image segmentation model;
the second input module is used for inputting the road landscape image to be identified into the image segmentation model and performing second image feature processing on it through the image segmentation model; the second image feature processing includes: identifying third object features of each object contained in the road landscape image to be identified; performing network semantic segmentation on each object contained in the image according to the third object features to obtain a third segmented image; comparing the categories of the segmented objects contained in the third segmented image with preset categories; and deleting, from the objects contained in the third segmented image, the object features corresponding to objects not belonging to the preset categories, to obtain a target segmented image.
In a third aspect, embodiments of the present application further provide an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of any one of the possible implementations of the first aspect.
In a fourth aspect, the present embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of any of the possible implementations of the first aspect described above.
According to the image segmentation model training method and device, the electronic device, and the readable storage medium provided herein, road landscape images containing environmental interference information, together with the labeling image corresponding to each road landscape image, are used to train an image segmentation model. In the model, network semantic segmentation is performed on each object contained in the road landscape image according to the identified first object features of each object, yielding a first segmented image; the first segmented image is compared with its corresponding labeling image, and object features that are not present in the second object features of the objects labeled in the labeling image are deleted from the first segmented image, yielding a second segmented image. The image segmentation model is then trained according to the loss value between the second segmented image and the labeling image.
In this scheme, the labeling image labels all real objects in the road landscape image but does not label the environmental interference information present in the image. Deleting from the first segmented image the object features that are not present in the second object features of the objects labeled in the labeling image therefore deletes exactly the object features of the environmental interference information. As a result, the image segmentation model learns to remove environmental interference information from road landscape images when performing network semantic segmentation, which reduces the influence of such interference on distinguishing and identifying the objects in the image and improves the accuracy with which the model identifies those objects.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered limiting the scope, and that other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 shows a flowchart of an image segmentation model training method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a missing region on a third segmented image according to an embodiment of the present disclosure;
FIG. 3 is a schematic view of another missing region on a third segmented image according to an embodiment of the present disclosure;
fig. 4 shows a schematic structural diagram of an image segmentation model training device according to an embodiment of the present application;
fig. 5 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
Autonomous driving can be divided into three parts: perception, decision-making, and control. Of these, perception is the most fundamental and important. Perception is responsible for acquiring road landscape images of the surroundings and then identifying from them the drivable area and environmental information such as signal lights and indicator signs, which serve as inputs to decision-making. Because an autonomous vehicle must perceive constantly changing roads, road landscape images need to be analyzed with great precision.
Analyzing a road landscape image specifically means performing semantic segmentation on the captured image so that the different objects it contains can be distinguished and identified, for example identifying which regions are indicator lights, which are traffic signs, and which are obstacles or pedestrians.
However, when an autonomous vehicle drives in dusty, rainy, or snowy weather, the captured road landscape images may contain environmental interference information (such as sand, dust, rain, snow, or extreme illumination). This interference degrades the clarity of the road landscape image, impairs the distinction and identification of the objects it contains, and lowers recognition accuracy.
In view of the foregoing, embodiments of the present application provide an image segmentation model training method, apparatus, electronic device, and readable storage medium, so as to improve accuracy of identifying objects in a road landscape image by using an image segmentation model, which is described below by way of embodiments.
Embodiment one:
To facilitate understanding of the present embodiment, the image segmentation model training method disclosed in the embodiments of the present application is first described in detail. Fig. 1 shows a flowchart of this method. The embodiment of the present application comprises two stages: a stage of training the image segmentation model, and a stage of using the trained model to identify each object contained in a road landscape image to be identified. In the training stage, the initial image segmentation model is trained with the road landscape data set to obtain the trained image segmentation model, as shown in Fig. 1. The training stage is described in detail below and specifically includes the following steps:
s101: acquiring a road landscape data set for training an image segmentation model; the road landscape data set includes: road landscape images without environment interference information, road landscape images with environment interference information and annotation images corresponding to the road landscape images; the labeling image is an image for labeling the categories of different kinds of objects in the road landscape image.
The environmental interference information includes: one or more of rain, snow, sand, strong light, weak light. The road landscape image without the environment interference information does not contain the environment interference information, and the road landscape image with the environment interference information contains the environment interference information. The road landscape image can also comprise objects such as roads, vehicles, buildings, pedestrians, signs, flower beds and the like on the roads.
Each road landscape image corresponds to one labeling image, which labels the categories of the different kinds of objects in the road landscape image. In a specific embodiment, the road landscape image is labeled manually: the objects of each category are labeled with different colors or different labeling forms, and the labeled result is used as the labeling image of that road landscape image. The categories of objects contained in a road landscape image may include buildings, pedestrians, obstacles, roads, and the like, and a category such as obstacles may be further subdivided, for example into vehicles and animals. This embodiment places no limitation on the specific classification.
Specifically, when labeling the objects of each category in the road landscape image, the labels may be drawn to fit the shape of each object. When different colors are used, all objects of the same category are labeled with the same color. For example, when labeling pedestrians, each pedestrian may be labeled in blue along the pedestrian's contour, so that all pedestrians in the road landscape image are labeled in blue.
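As an illustrative sketch of this color-per-category labeling (the class IDs and colors below are assumptions for illustration, not specified by the embodiment), a labeling image can be produced from a per-pixel class-ID map as follows:

```python
import numpy as np

# Hypothetical class-ID -> RGB mapping; the embodiment only requires that
# all objects of one category share a single color (e.g. pedestrians: blue).
PALETTE = {
    0: (0, 0, 0),      # background
    1: (0, 0, 255),    # pedestrian (blue)
    2: (255, 0, 0),    # vehicle
    3: (0, 255, 0),    # building
}

def colorize_annotation(class_map):
    """Turn an HxW class-ID map into an HxWx3 color labeling image."""
    h, w = class_map.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    for cls, color in PALETTE.items():
        out[class_map == cls] = color   # paint every pixel of this category
    return out

class_map = np.zeros((4, 4), dtype=np.int64)
class_map[1:3, 1:3] = 1                 # a pedestrian-shaped region
label_img = colorize_annotation(class_map)
```

Because the color follows the object's pixel mask, the label automatically fits the shape of each object, as the embodiment describes.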
S102: inputting the road landscape image into an initial image segmentation model, and performing first image feature processing on the road landscape image through the initial image segmentation model; the first image feature processing includes: identifying first object features of each object contained in the road landscape image, performing network semantic segmentation on each object contained in the road landscape image according to the first object features to obtain a first segmented image, comparing the first segmented image with a labeling image corresponding to the first segmented image, and deleting object features which are not present in second object features of each object labeled in the labeling image from the first segmented image to obtain a second segmented image.
The first segmented image is an image obtained by performing network semantic segmentation on the road landscape image through the initial image segmentation model, i.e., an image in which the objects of each category in the road landscape image are labeled by the initial image segmentation model; specifically, the initial image segmentation model labels each object with a different color along the object's shape.
If the road landscape image contains environmental interference information, the first segmented image corresponding to that road landscape image may also label the environmental interference information. The labeling image, by contrast, labels all objects in the road landscape image but does not label the environmental interference information. Therefore, this embodiment compares the labeling image corresponding to the road landscape image with the first segmented image, and deletes from the first segmented image the object features that are not present among the second object features of the objects labeled in the labeling image, namely the object features of the environmental interference information, so that no environmental interference information is present in the resulting second segmented image.
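A minimal sketch of this deletion step, under the assumption (for illustration only) that both the first segmented image and the labeling image are stored as per-pixel class-ID maps: any class the model predicts that the labeling image never contains, such as an interference class, is reset to background:

```python
import numpy as np

def remove_unannotated_classes(first_seg, annotation, background=0):
    """Delete from first_seg every class that the annotation never labels.

    Classes present only in the model output (e.g. environmental
    interference such as snowflakes) are replaced with background,
    yielding the second segmented image.
    """
    annotated = set(np.unique(annotation).tolist())
    second_seg = first_seg.copy()
    for cls in np.unique(first_seg):
        if cls not in annotated:
            second_seg[first_seg == cls] = background
    return second_seg

first = np.array([[1, 1, 9],    # class 9: interference, never annotated
                  [2, 9, 2]])
anno = np.array([[1, 1, 0],
                 [2, 0, 2]])
second = remove_unannotated_classes(first, anno)
```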
S103: performing loss function calculation by using the second segmented image and the labeling image corresponding to the second segmented image to obtain a loss value.
Although the second segmented image contains no environmental interference information, after the first image feature processing is performed on the road landscape image through the initial image segmentation model, the objects labeled in it may still be incorrect; for example, there may be missed labels (e.g., a pedestrian is not labeled at all) or wrong labels (e.g., a pedestrian is labeled as a building). The labeling result in the resulting second segmented image may therefore differ from the labeling result on the labeling image. For this reason, the loss value is obtained by performing loss function calculation using the second segmented image and the labeling image corresponding to the second segmented image.
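The embodiment does not prescribe a particular loss function; per-pixel cross-entropy is a common choice for semantic segmentation and is sketched below in plain numpy (an assumption for illustration, not the loss mandated by the patent):

```python
import numpy as np

def pixel_cross_entropy(logits, target):
    """logits: HxWxC class scores; target: HxW ground-truth class IDs."""
    # numerically stable softmax over the class axis
    shifted = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    h, w = target.shape
    # probability the model assigns to the correct class at each pixel
    picked = probs[np.arange(h)[:, None], np.arange(w)[None, :], target]
    return float(-np.log(picked + 1e-12).mean())

logits = np.zeros((2, 2, 3))
logits[..., 0] = 5.0             # model is confident every pixel is class 0
target = np.zeros((2, 2), dtype=np.int64)
loss = pixel_cross_entropy(logits, target)
```

The loss is small when the second segmented image matches the labeling image and grows with missed or wrong labels.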
S104: and carrying out back propagation training on the initial image segmentation model by using the loss value so as to adjust the learnable parameters in the initial image segmentation model, and ending model training after the learnable parameters are converged, thereby obtaining the image segmentation model.
In one specific implementation, the initial image segmentation model satisfies the training cutoff condition when convergence of the learnable parameters in the initial image segmentation model is completed. The training cutoff condition includes one or more of the following: the recognition accuracy of the initial image segmentation model is greater than a preset recognition accuracy, the recognition speed of the initial image segmentation model is greater than a preset recognition speed, and the number of training iterations of the initial image segmentation model is greater than a preset number.
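The back-propagation loop with these cutoff conditions can be sketched schematically; the single scalar "model" below is a toy stand-in used only to show the convergence / training-count structure, not a real segmentation network:

```python
def train(lr=0.1, max_epochs=1000, tol=1e-6):
    """Gradient-descent loop with the cutoff conditions of the embodiment:
    stop when the learnable parameter converges, or when the training
    count exceeds the preset number of iterations."""
    w = 0.0                       # learnable parameter (toy stand-in)
    target = 3.0
    for epoch in range(1, max_epochs + 1):
        loss = (w - target) ** 2          # loss value (cf. step S103)
        grad = 2 * (w - target)           # back-propagated gradient (cf. S104)
        new_w = w - lr * grad
        if abs(new_w - w) < tol:          # learnable parameter has converged
            return new_w, epoch
        w = new_w
    return w, max_epochs                  # training-count cutoff reached

w, epochs = train()
```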
In one possible embodiment, the road landscape image includes overlapping objects;
in performing step S102 to identify the first object features of each object contained in the road landscape image, the following may specifically be performed:
S1021: when the overlapped objects are objects of the same kind, the overlapped objects are taken as one target object, and the first object features of the target object are identified through the initial image segmentation model.
For example, when the overlapped objects are two pedestrians, the two pedestrians are taken as one target object, and the first object features of the target object are identified through the initial image segmentation model.
S1022: when the overlapped objects are different kinds of objects, respectively identifying each object in the overlapped objects through an initial image segmentation model to obtain a first object characteristic of each object.
For example, when the overlapped object is a pedestrian and a vehicle, the pedestrian and the vehicle are respectively identified through an initial image segmentation model, so as to obtain a first object feature of the pedestrian and a first object feature of the vehicle.
In another possible embodiment, if the road landscape image does not include overlapped objects, each object in the road landscape image is respectively identified through the initial image segmentation model, so as to obtain a first object feature of each object.
In one possible implementation, when step S102 is performed to delete, from the first segmented image, the object features that are not present among the second object features of the objects labeled in the labeling image so as to obtain the second segmented image, the following steps may specifically be performed:
S1023: deleting, from the first segmented image, the object features that are not present among the second object features of the objects labeled in the labeling image, to obtain a third segmented image; the object features not present among the second object features are features of the environmental interference information; the third segmented image is an image from which the environmental interference information has been removed.
After the object features that are not present among the second object features of the objects labeled in the labeling image are deleted from the first segmented image, the resulting third segmented image is an image with part of its content missing. In an exemplary embodiment, when the first segmented image contains snowflakes and the second object features of the objects labeled in the labeling image do not include any object feature of snowflakes, the object features of the snowflakes are deleted from the first segmented image, so that in the resulting third segmented image the content at the positions originally occupied by the snowflakes is missing.
S1024: supplementing the missing region, in which pixel points are missing after the environmental interference information is removed from the third segmented image, according to the pixel values of the first pixel points within a target range around the missing region on the third segmented image, to obtain the second segmented image.
In this embodiment, for the content-missing region in the third segmented image, the missing region is supplemented according to the pixel values of the first pixel points within the target range around the missing region, so as to obtain the second segmented image, in which neither environmental interference information nor a missing region exists.
In one possible implementation, when step S1024 is performed to supplement the missing region according to the pixel values of the first pixel points within the target range around the missing region on the third segmented image, the following steps may specifically be performed:
S10241: based on the center point of the missing region, calculating, for each straight line passing through the center point within the target range, the difference between the pixel values of the first pixel points located on the two sides of the center point on that straight line, to obtain a difference value corresponding to each straight line.
Fig. 2 is a schematic diagram of a missing region on a third segmented image according to an embodiment of the present application. As shown in fig. 2, the target range may be the range on the third segmented image within a target length of the center point of the missing region.
In this embodiment of the present application, the straight lines passing through the center point may be determined according to a preset included angle; for example, the preset included angle between adjacent straight lines is 10 degrees.
For each straight line, the difference between the pixel values of the first pixel points located on the two sides of the center point on that line is calculated; the first pixel points are pixel points within the target range on the third segmented image.
S10242: determining, according to the difference value corresponding to each straight line, the target straight line corresponding to the maximum difference value; taking, as the target direction, the direction from the larger to the smaller pixel value of the first pixel points located on the two sides of the center point on the target straight line; and calculating the missing length of the missing region in the target direction.
As shown in fig. 2, in a specific embodiment, the direction indicated by the arrow is the target direction, and the missing length of the missing region in the target direction is calculated; specifically, this may be the number of pixels missing in the target direction, or the physical missing length of the missing region in the target direction.
S10243: calculating the ratio of the difference value corresponding to the target straight line to the missing length, to obtain the change value of the pixel values of the second pixel points in the missing region along the target direction.
In this embodiment, the second pixel points are the pixel points that need to be supplemented in the missing region.
S10244: supplementing the pixel values of the second pixel points in the missing region according to the pixel values of the first pixel points located on the two sides of the center point on the target straight line, the target direction, and the change value.
For example, when the missing length is the number of pixels missing in the target direction, if the ratio of the difference value corresponding to the target straight line to the missing length is calculated to be 3, the change value of the pixel values of the second pixel points along the target direction is 3, indicating that the pixel values of two adjacent second pixel points in the target direction differ by 3.
For example, if the pixel values of the first pixel points located on the two sides of the center point on the target straight line are 100 and 115 respectively, the missing length is 5, and the ratio is 3, then, proceeding along the target direction, the pixel values of the successive second pixel points are 115, 112, 109, 106, 103.
The pixel values of the second pixel points on the target straight line are thus 115, 112, 109, 106, 103 in sequence. When determining the pixel values of the other second pixel points in the missing region, which lie on lines parallel to the target straight line, reference may be made to the pixel values of the second pixel points on the target straight line that are aligned with them in the direction perpendicular to the target straight line, as shown in fig. 2.
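Steps S10241 to S10244 along one target straight line can be sketched as follows; the fill reproduces the worked example above (boundary first-pixel values 100 and 115, missing length 5, change value 3):

```python
def fill_along_line(left, right, missing_len):
    """Supplement the second pixel points on the target straight line:
    change value = difference / missing length (S10243), stepping from the
    larger first-pixel value towards the smaller one (the target direction)."""
    diff = abs(right - left)       # difference value for this line (S10241)
    step = diff / missing_len      # change value (S10243)
    start = max(left, right)       # target direction runs from large to small
    return [start - step * i for i in range(missing_len)]

# Boundary first-pixel values 100 and 115, missing length 5 => change value 3:
second_pixels = fill_along_line(100, 115, 5)
```

Pixels on lines parallel to the target straight line would reuse these values, following the embodiment's description.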
In another possible embodiment, when performing step S1024 to supplement the missing area according to the pixel values of the first pixel points in the target range around the missing area on the third segmented image, the following steps may be specifically performed:
S10245: for each edge position on the missing region, calculating the mean of the pixel values of the target number of first pixel points adjacent to that edge position, and supplementing the edge position with the mean, to obtain a supplementary region.
Fig. 3 is a schematic diagram of another missing region on the third segmented image provided in the embodiment of the present application. As shown in fig. 3, in this embodiment, for each third pixel point on the edge of the missing region, the mean of the pixel values of the target number of first pixel points adjacent to that third pixel point is calculated, and the edge position is supplemented with the mean, so as to obtain the supplementary region.
S10246: removing the supplementary region from the missing region to obtain a target missing region, and judging whether the area of the target missing region is 0.
S10247: if the area of the target missing region is 0, the supplementation of the missing region is complete.
S10248: if the area of the target missing region is not 0, taking the target missing region as a new missing region and continuing to execute the step of: calculating, for each edge position on the missing region, the mean of the pixel values of the target number of first pixel points adjacent to that edge position, and supplementing the edge position with the mean to obtain a supplementary region.
If the area of the target missing region is not 0, the missing region has not been fully supplemented, and the target missing region needs to be supplemented further.
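Steps S10245 to S10248 can be sketched as an iterative fill; here the "target number" of adjacent first pixel points is taken to be the known 4-neighbours of each edge pixel, which is an assumption made for illustration:

```python
import numpy as np

def fill_by_edge_mean(img, missing):
    """Iteratively supplement a missing region (S10245-S10248): each pass
    fills every missing pixel that touches at least one known pixel with
    the mean of its known 4-neighbours, until the target missing area is 0."""
    img = img.astype(float).copy()
    missing = missing.copy()
    h, w = img.shape
    while missing.any():                  # area of target missing region != 0
        to_fill = {}
        for y in range(h):
            for x in range(w):
                if not missing[y, x]:
                    continue
                vals = [img[ny, nx]
                        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                        if 0 <= ny < h and 0 <= nx < w and not missing[ny, nx]]
                if vals:                  # edge position of the missing region
                    to_fill[(y, x)] = sum(vals) / len(vals)
        for (y, x), v in to_fill.items():   # this pass's supplementary region
            img[y, x] = v
            missing[y, x] = False
    return img

img = np.full((4, 4), 10.0)
missing = np.zeros((4, 4), dtype=bool)
missing[1:3, 1:3] = True                  # a 2x2 hole left by interference
img[missing] = 0.0
filled = fill_by_edge_mean(img, missing)
```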
In one possible implementation, when step S101 is performed to acquire the road landscape data set for training the image segmentation model, the following steps may be specifically performed:
S1011: acquiring a plurality of initial road landscape images.
A plurality of initial road landscape images and the labeling image corresponding to each initial road landscape image are acquired. An initial road landscape image may be an initial road landscape image without environmental interference information or an initial road landscape image with environmental interference information.
The initial road landscape image may be captured by radar, captured by a camera, or downloaded from a network.
S1012: performing image processing on each initial road landscape image to generate a sample image corresponding to the initial road landscape image; the image processing includes: one or more of image flipping, image cropping, image scaling.
Image flipping may flip the initial road landscape image by a preset flip angle. Image cropping may crop the initial road landscape image into at least one sample image; the size of the cropped sample image changes, and the positions of the objects in the sample image change relative to their positions in the initial road landscape image. Image scaling may be random scaling, and likewise changes the positions of the objects in the scaled sample image relative to their positions in the initial road landscape image.
S1013: and taking the sample image corresponding to the initial road landscape image as a road landscape image in the road landscape data set.
The number of samples is increased by performing image processing on the initial road-scene image.
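A minimal sketch of the sample-generation step (S1012-S1013); nearest-neighbour scaling is an assumed choice here, since the embodiment leaves the scaling method open:

```python
import numpy as np

def augment(image, flip=True, crop=None, scale=None):
    """Generate a sample image from an initial road landscape image by
    one or more of: image flipping, image cropping, image scaling."""
    out = image
    if flip:
        out = out[:, ::-1]               # horizontal flip
    if crop is not None:                 # crop = (y0, y1, x0, x1)
        y0, y1, x0, x1 = crop
        out = out[y0:y1, x0:x1]
    if scale is not None:                # nearest-neighbour scaling
        h, w = out.shape[:2]
        ys = (np.arange(int(h * scale)) / scale).astype(int)
        xs = (np.arange(int(w * scale)) / scale).astype(int)
        out = out[ys][:, xs]
    return out

img = np.arange(16).reshape(4, 4)
sample = augment(img, flip=True, crop=(0, 2, 0, 2), scale=2.0)
```

In practice the same geometric transform would also have to be applied to the corresponding labeling image so that the labels stay aligned with the sample image.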
In one possible embodiment, after the image segmentation model is obtained in step S104, a detailed description is next given of a stage of identifying each object included in the road landscape image to be identified using the image segmentation model, specifically including the following steps:
S105: acquiring a road landscape image to be identified.
The image segmentation model performs image feature processing on the road landscape image to be identified, acquired in real time, and identifies from it which object is an indicator light, which is a traffic sign, which is an obstacle or a pedestrian, and so on.
In the embodiment of the application, the image segmentation model can be applied to an autonomous vehicle, and the road landscape image to be identified can be acquired in real time through a camera device in the autonomous vehicle.
S106: inputting the road landscape image to be identified into an image segmentation model, and performing second image feature processing on the road landscape image to be identified through the image segmentation model; the second image feature processing includes: and identifying third object features of each object contained in the road landscape image to be identified, performing network semantic segmentation on each object contained in the road landscape image to be identified according to the third object features to obtain a third segmented image, comparing the types of the segmented objects contained in the third segmented image with preset types, and deleting the object features corresponding to the objects which are not in the preset types from the objects contained in the third segmented image to obtain a target segmented image.
The third segmented image is an image obtained by performing network semantic segmentation on the road landscape image to be identified through the image segmentation model, i.e., an image in which the objects of each category in the road landscape image to be identified are labeled by the image segmentation model; specifically, the image segmentation model labels each object with a different color along the object's shape.
The preset categories may be categories other than environmental interference information, such as vehicles, buildings, pedestrians, obstacles, and roads; they do not include environmental interference such as snowflakes, raindrops, or sand grains. The categories of the segmented objects contained in the third segmented image are compared with the preset categories, and the object features corresponding to objects not belonging to the preset categories, i.e., objects such as snowflakes, raindrops, and sand grains, are deleted from the objects contained in the third segmented image to obtain the target segmented image, so that no environmental interference information exists in the target segmented image.
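The category filtering in step S106 can be sketched as follows; the class IDs chosen for the preset categories and the interference class are hypothetical:

```python
import numpy as np

# Hypothetical class IDs; the preset categories exclude interference
# classes such as snowflakes, raindrops and sand grains.
PRESET = {0, 1, 2, 3}          # background, vehicle, pedestrian, road
SNOW = 7                       # an interference class

def filter_to_preset(third_seg, background=0):
    """Delete objects whose class is not among the preset categories,
    yielding the target segmented image (cf. step S106)."""
    target = third_seg.copy()
    mask = ~np.isin(third_seg, list(PRESET))   # pixels of non-preset classes
    target[mask] = background
    return target

third = np.array([[1, SNOW],
                  [2, 3]])
target_seg = filter_to_preset(third)
```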
Embodiment two:
based on the same technical concept, the embodiment of the present application further provides an image segmentation model training device, and fig. 4 shows a schematic structural diagram of the image segmentation model training device provided by the embodiment of the present application, as shown in fig. 4, where the device includes:
A first acquisition module 401 for acquiring a road landscape data set for training an image segmentation model; the road landscape data set comprises: the road landscape image without environment interference information, the road landscape image with environment interference information and the annotation image corresponding to each road landscape image; the labeling image is an image for labeling the categories of different types of objects in the road landscape image;
a first input module 402, configured to input the road landscape image into an initial image segmentation model, and perform a first image feature processing on the road landscape image through the initial image segmentation model; the first image feature processing includes: identifying first object features of each object contained in the road landscape image, performing network semantic segmentation on each object contained in the road landscape image according to the first object features to obtain a first segmented image, comparing the first segmented image with the labeling image corresponding to the first segmented image, and deleting object features which are not present in the second object features of each object labeled in the labeling image from the first segmented image to obtain a second segmented image;
A first calculation module 403, configured to perform a loss function calculation using the second segmentation image and the labeling image corresponding to the second segmentation image, to obtain a loss value;
and the training module 404 is configured to perform back propagation training on the initial image segmentation model by using the loss value, so as to adjust the learnable parameters in the initial image segmentation model, and end model training after convergence of the learnable parameters is completed, so as to obtain the image segmentation model.
Optionally, the road landscape image comprises overlapped objects;
the first input module 402, when configured to identify a first object feature of each object included in the road-scene image, is specifically configured to:
when the overlapped objects are objects of the same kind, taking the overlapped objects as one target object, and identifying the first object features of the target object through the initial image segmentation model;
when the overlapped objects are different kinds of objects, respectively identifying each object in the overlapped objects through the initial image segmentation model to obtain first object characteristics of each object.
Optionally, when configured to delete, from the first segmented image, the object features that are not present among the second object features of the objects labeled in the labeling image so as to obtain the second segmented image, the first input module 402 is specifically configured to:
delete, from the first segmented image, the object features that are not present among the second object features of the objects labeled in the labeling image, to obtain a third segmented image; the object features not present among the second object features are features of environmental interference information; the third segmented image is an image from which the environmental interference information has been removed;
and supplementing a missing region of the missing pixel points after the environmental interference information is removed in the third segmented image according to the pixel values of the first pixel points in the target range around the missing region on the third segmented image, so as to obtain the second segmented image.
Optionally, the first input module 402 is configured to, when configured to supplement the missing area according to the pixel values of the first pixel points in the target range around the missing area on the third segmented image, specifically:
calculating the difference of pixel values between the first pixel points positioned at two sides of the center point on the straight line passing through the center point in the target range based on the center point of the missing region, and obtaining a difference value corresponding to each straight line;
determining a target straight line corresponding to the maximum difference value from the straight lines according to the difference value corresponding to each straight line, taking the direction from large to small of the pixel value of the first pixel points positioned on the two sides of the center point on the target straight line as a target direction, and calculating the missing length of the missing region in the target direction;
Calculating the ratio of the difference value corresponding to the target straight line to the missing length to obtain a change value of the pixel value of a second pixel point in the missing area in the target direction;
and supplementing the pixel values of the second pixel points in the missing region according to the pixel values of the first pixel points located on the two sides of the center point on the target straight line, the target direction, and the change value.
Optionally, the first input module 402 is configured to, when configured to supplement the missing area according to the pixel values of the first pixel points in the target range around the missing area on the third segmented image, specifically:
calculating the average value of the pixel values of the first pixel points of the target quantity adjacent to each edge position on the missing region, and supplementing the edge position by using the average value to obtain a supplementing region;
removing the supplementary region from the missing region to obtain a target missing region, and judging whether the area of the target missing region is 0;
if the area of the target missing region is 0, the missing region is completely supplemented;
If the area of the target missing region is not 0, taking the target missing region as a new missing region, and continuing to execute the steps: and calculating the average value of the pixel values of the first pixel points of the target quantity adjacent to the edge position aiming at each edge position on the missing region, and supplementing the edge position by using the average value to obtain a supplementing region.
Optionally, the first obtaining module 401, when used for obtaining a road landscape data set for training an image segmentation model, is specifically configured to:
acquiring a plurality of initial road landscape images;
performing image processing on the initial road landscape image aiming at each initial road landscape image to generate a sample image corresponding to the initial road landscape image; the image processing includes: one or more of image flipping, image cropping, image scaling;
and taking the sample image corresponding to the initial road landscape image as the road landscape image in the road landscape data set.
Optionally, the method further comprises:
the second obtaining module is configured to obtain a road landscape image to be identified after the training module 404 obtains the image segmentation model;
The second input module is used for inputting the road landscape image to be identified into the image segmentation model, and performing second image feature processing on the road landscape image to be identified through the image segmentation model; the second image feature processing includes: and identifying third object features of each object contained in the road landscape image to be identified, carrying out network semantic segmentation on each object contained in the road landscape image to be identified according to the third object features to obtain a third segmented image, comparing the types of the segmented objects contained in the third segmented image with preset types, and deleting object features corresponding to objects which are not contained in the preset types from the objects contained in the third segmented image to obtain a target segmented image.
Reference is made to the description of the first embodiment for specific implementation of method steps and principles, and detailed descriptions thereof are omitted.
Embodiment III:
based on the same technical concept, the embodiment of the present application further provides an electronic device, and fig. 5 shows a schematic structural diagram of the electronic device provided in the embodiment of the present application, as shown in fig. 5, the electronic device 500 includes: the processor 501, the memory 502 and the bus 503, the memory stores machine readable instructions executable by the processor, and when the electronic device is running, the processor 501 communicates with the memory 502 through the bus 503, and the processor 501 executes the machine readable instructions to perform the method steps described in the first embodiment.
Reference is made to the description of the first embodiment for specific implementation of method steps and principles, and detailed descriptions thereof are omitted.
Embodiment four:
based on the same technical idea, a fourth embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, which when executed by a processor performs the method steps described in the first embodiment.
Reference is made to the description of the first embodiment for specific implementation of method steps and principles, and detailed descriptions thereof are omitted.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that: the foregoing examples are merely specific embodiments of the present application, and are not intended to limit the scope of the present application, but the present application is not limited thereto, and those skilled in the art will appreciate that while the foregoing examples are described in detail, the present application is not limited thereto. Any person skilled in the art may modify or easily conceive of the technical solution described in the foregoing embodiments, or make equivalent substitutions for some of the technical features within the technical scope of the disclosure of the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An image segmentation model training method, comprising the steps of:
acquiring a road landscape data set for training an image segmentation model; the road landscape data set comprises: road landscape images without environmental interference information, road landscape images with environmental interference information, and an annotation image corresponding to each road landscape image; the annotation image is an image in which the categories of the different types of objects in the road landscape image are labeled;
inputting the road landscape image into an initial image segmentation model, and performing first image feature processing on the road landscape image through the initial image segmentation model; the first image feature processing includes: identifying first object features of each object contained in the road landscape image, performing network semantic segmentation on each object contained in the road landscape image according to the first object features to obtain a first segmented image, comparing the first segmented image with the annotation image corresponding to the first segmented image, and deleting, from the first segmented image, object features that are not present among the second object features of each object labeled in the annotation image, to obtain a second segmented image;
performing loss function calculation by using the second segmented image and the annotation image corresponding to the second segmented image to obtain a loss value;
and performing back-propagation training on the initial image segmentation model by using the loss value to adjust the learnable parameters in the initial image segmentation model, and ending model training after the learnable parameters converge, to obtain the image segmentation model.
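The core of the claimed training step is computing a loss between the second segmented image and its annotation image, then back-propagating it. As a minimal illustrative sketch only (the per-pixel cross-entropy loss, array shapes, and function name are assumptions, not the patented implementation):

```python
import numpy as np

def pixel_cross_entropy(pred_probs, label_map):
    """Mean per-pixel cross-entropy between predicted class
    probabilities of shape (H, W, C) and an integer label map (H, W)."""
    h, w, c = pred_probs.shape
    # Pick the predicted probability of the annotated class at each pixel.
    picked = pred_probs[np.arange(h)[:, None], np.arange(w)[None, :], label_map]
    # Clip to avoid log(0), then average the negative log-likelihood.
    return float(-np.log(np.clip(picked, 1e-12, 1.0)).mean())

# Toy 2x2 example with 2 classes: the model agrees with the
# annotation everywhere, with moderate confidence.
probs = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.7, 0.3], [0.1, 0.9]]])
labels = np.array([[0, 1],
                   [0, 1]])
loss = pixel_cross_entropy(probs, labels)
```

In practice this scalar would be produced inside an autograd framework so that back-propagation can adjust the network's learnable parameters until they converge.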
2. The image segmentation model training method according to claim 1, wherein the road landscape image contains overlapped objects;
identifying the first object features of each object contained in the road landscape image comprises:
when the overlapped objects are objects of the same kind, taking the overlapped objects as one target object, and identifying the first object features of the target object through the initial image segmentation model;
when the overlapped objects are objects of different kinds, respectively identifying each object among the overlapped objects through the initial image segmentation model to obtain the first object features of each object.
3. The image segmentation model training method according to claim 1, wherein deleting, from the first segmented image, the object features that are not present among the second object features of each object labeled in the annotation image, to obtain the second segmented image, comprises:
deleting, from the first segmented image, the object features that are not present among the second object features of each object labeled in the annotation image, to obtain a third segmented image; the object features that are not present among the second object features are features of environmental interference information, and the third segmented image is an image from which the environmental interference information has been removed;
and supplementing, in the third segmented image, a missing region of missing pixel points left after the environmental interference information is removed, according to the pixel values of first pixel points within a target range around the missing region on the third segmented image, to obtain the second segmented image.
4. The image segmentation model training method according to claim 3, wherein supplementing the missing region according to the pixel values of the first pixel points within the target range around the missing region on the third segmented image comprises:
taking the center point of the missing region as a base, calculating, for each straight line passing through the center point, the difference between the pixel values of the first pixel points located on the two sides of the center point within the target range, to obtain a difference value corresponding to each straight line;
determining, according to the difference value corresponding to each straight line, a target straight line corresponding to the maximum difference value, taking the direction from the larger to the smaller of the pixel values of the first pixel points located on the two sides of the center point on the target straight line as a target direction, and calculating the missing length of the missing region in the target direction;
calculating the ratio of the difference value corresponding to the target straight line to the missing length, to obtain a change value of the pixel values of second pixel points in the missing region along the target direction;
and supplementing the pixel values of the second pixel points in the missing region according to the pixel values of the first pixel points located on the two sides of the center point on the target straight line, the target direction, and the change value.
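Read literally, this claim fills the missing region by stepping from the brighter boundary toward the darker one along the target straight line, with a per-step change value equal to the boundary difference divided by the missing length. A one-dimensional NumPy sketch under that reading (the function name and the single-contiguous-run assumption are illustrative, not taken from the patent):

```python
import numpy as np

def fill_along_line(line):
    """Fill one contiguous run of missing values (NaN) in a 1-D pixel
    line, stepping from the brighter boundary toward the darker one
    with change = (boundary difference) / (missing length)."""
    line = line.astype(float).copy()
    idx = np.flatnonzero(np.isnan(line))
    start, end = idx[0], idx[-1]               # assumes one contiguous run
    left, right = line[start - 1], line[end + 1]
    n = end - start + 1                        # missing length
    hi, lo = max(left, right), min(left, right)
    change = (hi - lo) / n                     # claimed change value
    vals = hi - change * np.arange(1, n + 1)   # walk down from the brighter side
    if left < right:                           # brighter boundary on the right
        vals = vals[::-1]
    line[start:end + 1] = vals
    return line

row = np.array([10.0, np.nan, np.nan, np.nan, 2.0])
filled = fill_along_line(row)
```

The filled run ramps linearly from the brighter boundary value down to the darker one, which is one plausible reading of "supplementing according to the pixel values on the two sides, the target direction, and the change value".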
5. The image segmentation model training method according to claim 3, wherein supplementing the missing region according to the pixel values of the first pixel points within the target range around the missing region on the third segmented image comprises:
calculating, for each edge position on the missing region, the average value of the pixel values of a target number of first pixel points adjacent to the edge position, and supplementing the edge position by using the average value, to obtain a supplemented region;
removing the supplemented region from the missing region to obtain a target missing region, and judging whether the area of the target missing region is 0;
if the area of the target missing region is 0, the missing region has been completely supplemented;
if the area of the target missing region is not 0, taking the target missing region as a new missing region and continuing to execute the step of: calculating, for each edge position on the missing region, the average value of the pixel values of the target number of first pixel points adjacent to the edge position, and supplementing the edge position by using the average value, to obtain a supplemented region.
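The iterative procedure of this claim (average the known pixels adjacent to each edge position, shrink the target missing region, repeat until its area is 0) can be sketched as follows; NaN marks missing pixels, and 4-neighbour averaging is an assumed choice, since the claim does not fix the neighbourhood:

```python
import numpy as np

def fill_by_edge_average(img):
    """Iteratively fill NaN pixels: each round, every missing pixel with
    at least one known 4-neighbour (an edge position) is set to the mean
    of its known neighbours; repeat until no missing pixels remain."""
    img = img.astype(float).copy()
    h, w = img.shape
    while np.isnan(img).any():                 # target missing region area > 0
        updates = {}
        for y in range(h):
            for x in range(w):
                if not np.isnan(img[y, x]):
                    continue
                neigh = [img[y + dy, x + dx]
                         for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                         if 0 <= y + dy < h and 0 <= x + dx < w
                         and not np.isnan(img[y + dy, x + dx])]
                if neigh:                      # edge position of the missing region
                    updates[(y, x)] = sum(neigh) / len(neigh)
        for (y, x), v in updates.items():      # apply one supplementing round
            img[y, x] = v
    return img

patch = np.array([[1.0, 1.0, 1.0],
                  [1.0, np.nan, 3.0],
                  [3.0, 3.0, 3.0]])
filled = fill_by_edge_average(patch)
```

The sketch assumes the image contains at least one known pixel, so each round fills the current edge and the missing area shrinks to 0.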
6. The image segmentation model training method according to claim 1, wherein acquiring the road landscape data set for training the image segmentation model comprises:
acquiring a plurality of initial road landscape images;
performing, for each initial road landscape image, image processing on the initial road landscape image to generate a sample image corresponding to the initial road landscape image; the image processing includes: one or more of image flipping, image cropping, and image scaling;
and taking the sample images corresponding to the initial road landscape images as the road landscape images in the road landscape data set.
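The image processing named in this claim (flipping, cropping, scaling) is standard data augmentation. A small NumPy sketch, in which the horizontal flip, centre crop, and 2x nearest-neighbour scaling are illustrative choices rather than the patented parameters:

```python
import numpy as np

def augment(img):
    """Generate sample images from one initial image via the image
    processing named in the claim: flipping, cropping, and scaling."""
    flipped = img[:, ::-1]                                   # horizontal flip
    h, w = img.shape[:2]
    cropped = img[h // 4: 3 * h // 4, w // 4: 3 * w // 4]    # centre crop
    scaled = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1) # 2x nearest-neighbour upscale
    return flipped, cropped, scaled

img = np.arange(16.0).reshape(4, 4)
flipped, cropped, scaled = augment(img)
```

Each transformed array would be paired with a correspondingly transformed annotation image before being added to the road landscape data set.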
7. The image segmentation model training method according to claim 1, further comprising, after obtaining the image segmentation model:
acquiring a road landscape image to be identified;
inputting the road landscape image to be identified into the image segmentation model, and performing second image feature processing on the road landscape image to be identified through the image segmentation model; the second image feature processing includes: identifying third object features of each object contained in the road landscape image to be identified, performing network semantic segmentation on each object contained in the road landscape image to be identified according to the third object features to obtain a third segmented image, comparing the categories of the segmented objects contained in the third segmented image with preset categories, and deleting, from the objects contained in the third segmented image, the object features corresponding to objects whose categories are not among the preset categories, to obtain a target segmented image.
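At inference time, this claim compares the categories of the segmented objects against preset categories and removes objects outside them (e.g. transient environmental interference). A plain-Python sketch in which the dictionary format and category names are hypothetical:

```python
def filter_segments(segments, preset_categories):
    """Keep only segmented objects whose category is among the preset
    categories, as in the claimed inference-time comparison."""
    return {obj_id: cat for obj_id, cat in segments.items()
            if cat in preset_categories}

# Hypothetical segmentation output: object id -> predicted category.
segments = {1: "road", 2: "lane_line", 3: "raindrop", 4: "vehicle"}
preset = {"road", "lane_line", "vehicle", "pedestrian"}
kept = filter_segments(segments, preset)
```

Here the "raindrop" object falls outside the preset categories and is dropped, yielding the target segmented image.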
8. An image segmentation model training device, comprising:
a first acquisition module configured to acquire a road landscape data set for training an image segmentation model; the road landscape data set comprises: road landscape images without environmental interference information, road landscape images with environmental interference information, and an annotation image corresponding to each road landscape image; the annotation image is an image in which the categories of the different types of objects in the road landscape image are labeled;
a first input module configured to input the road landscape image into an initial image segmentation model, and perform first image feature processing on the road landscape image through the initial image segmentation model; the first image feature processing includes: identifying first object features of each object contained in the road landscape image, performing network semantic segmentation on each object contained in the road landscape image according to the first object features to obtain a first segmented image, comparing the first segmented image with the annotation image corresponding to the first segmented image, and deleting, from the first segmented image, object features that are not present among the second object features of each object labeled in the annotation image, to obtain a second segmented image;
a first calculation module configured to perform loss function calculation by using the second segmented image and the annotation image corresponding to the second segmented image to obtain a loss value;
and a training module configured to perform back-propagation training on the initial image segmentation model by using the loss value to adjust the learnable parameters in the initial image segmentation model, and end model training after the learnable parameters converge, to obtain the image segmentation model.
9. An electronic device, comprising: a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device is running, the processor and the memory communicate with each other via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the image segmentation model training method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, performs the steps of the image segmentation model training method according to any one of claims 1 to 7.
CN202210015383.4A 2022-01-07 2022-01-07 Image segmentation model training method and device, electronic equipment and readable storage medium Active CN114359233B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210015383.4A CN114359233B (en) 2022-01-07 2022-01-07 Image segmentation model training method and device, electronic equipment and readable storage medium


Publications (2)

Publication Number Publication Date
CN114359233A CN114359233A (en) 2022-04-15
CN114359233B true CN114359233B (en) 2024-04-02

Family

ID=81107419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210015383.4A Active CN114359233B (en) 2022-01-07 2022-01-07 Image segmentation model training method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114359233B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117689660B (en) * 2024-02-02 2024-05-14 杭州百子尖科技股份有限公司 Vacuum cup temperature quality inspection method based on machine vision

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084095A (en) * 2019-03-12 2019-08-02 浙江大华技术股份有限公司 Method for detecting lane lines, lane detection device and computer storage medium
CN110189341A (en) * 2019-06-05 2019-08-30 北京青燕祥云科技有限公司 Image segmentation model training method, and image segmentation method and device
CN110246142A (en) * 2019-06-14 2019-09-17 深圳前海达闼云端智能科技有限公司 Method for detecting obstacle, terminal, and readable storage medium
WO2021189847A1 (en) * 2020-09-03 2021-09-30 平安科技(深圳)有限公司 Training method, apparatus and device based on image classification model, and storage medium
EP3910590A2 (en) * 2021-03-31 2021-11-17 Beijing Baidu Netcom Science Technology Co., Ltd. Method and apparatus of processing image, electronic device, and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Feng Yushan; Wang Zilei. Fine-grained image classification with top-down attention map segmentation. Journal of Image and Graphics (中国图象图形学报), 2016, (09), full text. *
Qing Chen; Yu Jing; Xiao Chuangbai; Duan Juan. Research progress on image semantic segmentation using deep convolutional neural networks. Journal of Image and Graphics (中国图象图形学报), 2020, (06), full text. *

Also Published As

Publication number Publication date
CN114359233A (en) 2022-04-15

Similar Documents

Publication Publication Date Title
CN109165549B (en) Road identification obtaining method based on three-dimensional point cloud data, terminal equipment and device
US20200250440A1 (en) System and Method of Determining a Curve
CN110879950A (en) Multi-stage target classification and traffic sign detection method and device, equipment and medium
CN111666805A (en) Category tagging system for autonomous driving
Zakaria et al. Lane detection in autonomous vehicles: A systematic review
CN110619279A (en) Road traffic sign instance segmentation method based on tracking
Borkar et al. An efficient method to generate ground truth for evaluating lane detection systems
CN114413881A (en) Method and device for constructing high-precision vector map and storage medium
JP5522475B2 (en) Navigation device
CN106778548A (en) Method and apparatus for detecting barrier
CN111325136B (en) Method and device for labeling object in intelligent vehicle and unmanned vehicle
CN112990293A (en) Point cloud marking method and device and electronic equipment
CN114359233B (en) Image segmentation model training method and device, electronic equipment and readable storage medium
CN113835102A (en) Lane line generation method and device
CN114841910A (en) Vehicle-mounted lens shielding identification method and device
CN112580489A (en) Traffic light detection method and device, electronic equipment and storage medium
CN111414903B (en) Method, device and equipment for identifying content of indication board
CN111523368A (en) Information processing device, server, and traffic management system
CN115620047A (en) Target object attribute information determination method and device, electronic equipment and storage medium
CN115909241A (en) Lane line detection method, system, electronic device and storage medium
CN117237907A (en) Traffic signal lamp identification method and device, storage medium and electronic equipment
CN115690717A (en) Traffic light detection method and device, computing equipment and storage medium
CN113841154A (en) Obstacle detection method and device
Sagar et al. A vison based lane detection approach using vertical lane finder method
KR102540624B1 (en) Method for create map using aviation lidar and computer program recorded on record-medium for executing method therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant