CN113223146A - Data labeling method and device based on three-dimensional simulation scene and storage medium - Google Patents

Data labeling method and device based on three-dimensional simulation scene and storage medium

Info

Publication number: CN113223146A
Application number: CN202110442147.6A
Authority: CN (China)
Prior art keywords: coloring, image frame, image, illumination, result
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 林涛, 陈振武, 张枭勇, 刘宇鸣, 张炳振
Current and original assignee: Shenzhen Urban Transport Planning Center Co Ltd
Priority and filing date: 2021-04-23
Publication date: 2021-08-06

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 15/00 — 3D [Three Dimensional] image rendering
    • G06T 15/10 — Geometric effects
    • G06T 15/20 — Perspective computation
    • G06T 15/205 — Image-based rendering
    • G06T 15/50 — Lighting effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Image Generation (AREA)

Abstract

The invention provides a data labeling method, device and storage medium based on a three-dimensional simulation scene, comprising the following steps: acquiring a three-dimensional simulation scene, extracting an image frame, and preprocessing the image frame to obtain an original scene coloring result; acquiring a labeling type and the coloring strategy corresponding to that type, obtaining the target attribute parameters of each pixel of each target in the image frame according to the labeling type, and obtaining a coloring result according to the coloring strategy. The labeling types include an illumination attribute, whose coloring strategy is as follows: segment time into periods, process the image frames of each period uniformly, label the regions within the illumination range, and superimpose the label data at corresponding pixels of image frames belonging to the same period to obtain an illumination coloring result. Finally, a coloring result is selected as required and superimposed on the original scene coloring result to obtain the output image. In this way the image frame can be colored pixel by pixel, and an illumination thermodynamic diagram (heat map) of the image frame can be labeled.

Description

Data labeling method and device based on three-dimensional simulation scene and storage medium
Technical Field
The invention relates to the technical field of image processing, and in particular to a data annotation method, device and storage medium based on a three-dimensional simulation scene.
Background
With the development of virtual reality and three-dimensional rendering technologies, virtual scenes can now be rendered so realistically that they are hard to distinguish from reality. A three-dimensional simulation scene can provide an immersive sensory experience, and can also serve as a source of training data for algorithms in computer vision and related fields. Using a three-dimensional simulation scene in place of shooting an actual scene has several advantages: the workload of manual data acquisition can be greatly reduced; automatic data annotation that is impossible in a real shooting scene becomes feasible; and data for special events can be generated in combination with simulation algorithms. Existing training data sets are usually built by manually acquiring images and labeling them by hand, which is labor-intensive and inefficient.
Disclosure of Invention
The invention addresses the problem of how to perform pixel-level labeling of images obtained from arbitrary angles in a three-dimensional simulation scene.
In order to solve the above problem, the present invention provides a data annotation method based on a three-dimensional simulation scene, comprising:
acquiring a three-dimensional simulation scene, extracting an image frame, and preprocessing the image frame to obtain an original scene coloring result; acquiring a labeling type and the coloring strategy corresponding to the labeling type, and acquiring the target attribute parameters of each pixel of each target in the image frame according to the labeling type; coloring the image frame according to the target attribute parameters and the coloring strategy to obtain a coloring result. The labeling types include an illumination attribute, and the coloring strategy corresponding to the illumination attribute is as follows: segment time into periods, process the image frames within each period uniformly with the period as the unit, label the regions within the illumination range, and superimpose the label data at corresponding pixels of the image frames belonging to the same period to obtain the illumination coloring result.
Compared with the prior art, the method obtains a three-dimensional simulation scene, selects the coloring strategy matching each labeling type, and then colors the image frames according to that strategy, which ensures that the labeled image frames are suitable for training a variety of algorithms; by labeling the illuminated regions pixel by pixel and superimposing the results, an illumination coloring result is obtained, ensuring that a thermodynamic diagram accurately reflecting illumination duration can be produced.
Optionally, the labeling comprises labeling the position and range of illumination; segmenting time, uniformly processing the image frames within each period, labeling the regions within the illumination range, and superimposing the label data at corresponding pixels of the image frames belonging to the same period to obtain the illumination coloring result comprises: segmenting the periods with illumination in the three-dimensional simulation scene, calculating the illumination range of each image frame belonging to the same period, and rendering the regions within the illumination range in a first color; counting the number of times each pixel of each image frame belonging to the same period is rendered in the first color; calculating a thermodynamic diagram from these counts, superimposing the thermodynamic diagram back onto the first image frame of each period, and rendering it to obtain the illumination coloring result.
Thus, by segmenting the illuminated period, a finer-grained illumination coloring result can be obtained: the illuminated regions are colored to produce a thermodynamic diagram, from which the illumination coloring result is then derived.
Optionally, coloring the image frame according to the target attribute parameters and the coloring strategy to obtain a coloring result comprises: coloring the image frame with a vertex shader based on the coloring strategy to obtain a vertex-shaded image; obtaining the coloring result with a fragment shader based on the vertex-shaded image, the coloring strategy, and the target attribute parameters, wherein the target attribute parameters include at least one of: a type parameter, speed parameter, acceleration parameter, depth parameter, height parameter, and color parameter of the target.
Thus, the pixels in the image frame are colored according to the type, speed, acceleration, depth, height, and color parameters of the targets in the frame, and a coloring result is generated for each attribute, to be used in producing the corresponding labeled images.
Optionally, coloring the image frame with the vertex shader based on the coloring strategy to obtain a vertex-shaded image comprises:
coloring a target with the vertex shader according to the scene information of the image frame to obtain the vertex-shaded image, wherein the scene information comprises the viewpoint position and the texture-map normal.
Thus, the vertex shader colors each vertex based on the viewpoint position and the texture-map normal, ensuring that a vertex-shading result is obtained.
Optionally, obtaining the coloring result with the fragment shader based on the vertex-shaded image, the coloring strategy, and the target attribute parameters comprises:
acquiring texture colors with the fragment shader and superimposing them onto the vertex-shaded image, wherein the texture colors are obtained by sampling texels based on texture coordinates.
Thus, the fragment shader processes the vertex-shading result, obtains the texture colors, and uses them in the subsequent processing of the image frame.
Optionally, when the labeling type is the panoptic segmentation attribute, after acquiring texture colors with the fragment shader and superimposing them onto the vertex-shaded image, the method further comprises: assigning each pixel in the image frame a category and an instance ID according to the object it belongs to, giving each object its own color, and generating a panoptic segmentation coloring result. When the labeling type is the depth attribute, the method further comprises: calculating the distance between each pixel in the image frame and the viewpoint, normalizing the distance, and generating a depth-labeled coloring result. When the labeling type is the speed attribute, the method further comprises: labeling the objects of the image frame based on the speed of each target, coloring with the speed scalar as the standard, and generating a speed-labeled coloring result. When the labeling type is the height attribute, the method further comprises: coloring each pixel according to the height of the corresponding point in the three-dimensional simulation scene, and generating a height-labeled coloring result.
Thus, for each labeling type, the corresponding attribute information and coloring strategy are obtained, the image frame is rendered and colored, and the corresponding coloring result is produced.
Optionally, the output image comprises structured labeling data, including, for each pixel of the image frame, the category, instance ID, distance from the viewpoint, height above the ground, and illumination time, and, for each target, the speed, acceleration, position, and direction angle.
This ensures that an image carrying the structured labeling data can be obtained.
Optionally, before calculating the illumination range of each image frame belonging to the same period, the method comprises:
storing the original scene coloring result and the first image frame of each period into a temporary queue.
Thus, the original scene coloring result and the first image frame of each period are stored in a temporary queue, to be fetched during subsequent processing, which improves the efficiency of that processing.
The invention also provides a data labeling device based on the three-dimensional simulation scene, which comprises the following components:
a preprocessing module, configured to acquire a three-dimensional simulation scene, extract an image frame, and preprocess the image frame to obtain an original scene coloring result;
an input module, configured to acquire a labeling type and the coloring strategy corresponding to the labeling type, and to acquire the target attribute parameters of each pixel of each target in the image frame according to the labeling type;
a coloring module, configured to color the image frame according to the target attribute parameters and the coloring strategy to obtain a coloring result, wherein the labeling types include an illumination attribute whose coloring strategy is: segment time into periods, process the image frames of each period with the period as the unit, label the regions within the illumination range, and superimpose the label data at corresponding pixels of image frames belonging to the same period to obtain an illumination coloring result;
and an output module, configured to select the required coloring result and superimpose it on the original scene coloring result to obtain an output image.
The advantages of the data labeling device based on a three-dimensional simulation scene over the prior art are the same as those of the data labeling method described above, and are not repeated here.
The invention further provides a computer-readable storage medium storing a computer program which, when read and executed by a processor, implements the data annotation method based on a three-dimensional simulation scene described above.
The advantages of the computer-readable storage medium over the prior art are the same as those of the data labeling method described above, and are not repeated here.
Drawings
FIG. 1 is a schematic diagram of an embodiment of a data annotation method for a three-dimensional simulation scene according to the present invention;
FIG. 2 is a detailed schematic diagram of step S300 of an embodiment of the data annotation method for a three-dimensional simulation scene according to the invention;
FIG. 3 is a flowchart of an embodiment of a data annotation method for a three-dimensional simulation scene according to the present invention;
FIG. 4 is a schematic diagram of an embodiment of a data annotation method for a three-dimensional simulation scene according to the present invention;
FIG. 5 is a schematic diagram of another embodiment of a data annotation method for a three-dimensional simulation scene according to the present invention;
FIG. 6 is a schematic diagram of a data annotation method for a three-dimensional simulation scene according to another embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
The invention provides a data annotation method based on a three-dimensional simulation scene, as shown in fig. 1, comprising the following steps:
step S100, acquiring a three-dimensional simulation scene, extracting an image frame, and preprocessing the image frame to obtain an original scene coloring result.
Preprocessing the image frame to obtain the original scene coloring result means preprocessing the frame into a base scene picture for subsequent rendering.
In one embodiment, the preprocessing applies a preliminary annotation to the image frame, which serves as the image basis for the subsequent output image.
In one embodiment, for a traffic monitoring scene, the three-dimensional simulation scene is built through scene modeling and scene setting, and a viewpoint is then selected to generate and label image data during urban traffic operation simulation.
The three-dimensional simulation scene may be a simulation scene with consecutive frames, that is, a dynamic three-dimensional scene. In one embodiment, therefore, consecutive image frames are extracted from the three-dimensional scene, preprocessed, and stored in a temporary queue for labeling dynamic targets; the emphasis is on labeling the changes of targets across consecutive frames, or the states of targets within them.
In another embodiment, image frames are extracted from the three-dimensional simulation scene as discrete frames and used for labeling the segmentation data of the image; the emphasis is on labeling the information of targets in a fixed image.
In one embodiment, the image frame is the complete view from a certain angle of the three-dimensional scene, that is, the entire image observed from that angle; taking the complete view as the image frame lets a single frame carry more information. In another embodiment, only the labeling information of a specific target is needed, so the image frame is an image sub-block cropped from the complete view; using sub-blocks yields more precise labels for subsequent deep-learning training, improves the computational efficiency of deep learning, and reduces the computational load.
Optionally, before preprocessing the image frame, the method comprises: defining the target types, including roads, sidewalks, zebra crossings, cars, buses, trucks, pedestrians, guardrails, signboards, and buildings.
The purpose of defining the targets is to specify the range and types of objects to be processed; in one embodiment, an object that does not belong to any defined target is classified as background and receives no label data.
The preprocessing includes pre-rendering, whose function is to pre-render the current image frame and store a copy in a temporary queue for subsequent processing; when the image frame needs to be used, it is fetched from the temporary queue for further processing.
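As an illustrative sketch of this buffering step (the queue type, capacity, and function names are assumptions for illustration, not specified by the patent), the pre-rendered frame copies could be held in a bounded queue:

```python
from collections import deque

# Bounded temporary queue for pre-rendered frame copies; the capacity of 256
# is an illustrative assumption, as the patent does not specify a size.
frame_queue = deque(maxlen=256)

def prerender_and_store(image_frame, queue=frame_queue):
    """Store a copy of the pre-rendered current frame for later shading passes."""
    copy = image_frame.copy()  # keep a copy so the original frame stays untouched
    queue.append(copy)
    return copy

def fetch_frame(queue=frame_queue):
    """Fetch the oldest buffered frame for further processing, if any."""
    return queue.popleft() if queue else None
```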
Optionally, preprocessing the image frame comprises:
acquiring the image frame, and extracting the information of the pixels of the target in the image frame;
and preprocessing the image frame to obtain the original scene coloring result.
Optionally, before extracting the information of the pixels of the target within the image frame, the method comprises: judging whether each pixel belongs to a predefined target.
Preprocessing the image frame further includes extracting the pixel information of each target in the frame, that is, the information of the object each pixel belongs to. For example, in one embodiment, as shown in fig. 3, the extraction proceeds as follows: judge whether the current pixel belongs to a specified target; if so, extract the information of the current pixel, including the pixel's distance from the viewpoint, its z-axis height, the category of the object, the instance number of the object, the speed and centroid position of the object, the type of the object, and its speed, acceleration, depth, and height information, where the distance from the viewpoint and the centroid position are calculated in the three-dimensional simulation scene. This pixel information provides the data consumed by the subsequent shaders.
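A minimal sketch of the per-pixel record described above (the class name, field names, and scene accessors are hypothetical; the patent lists the quantities but not a schema):

```python
from dataclasses import dataclass

@dataclass
class PixelInfo:
    category: str         # object category, e.g. "car" or "pedestrian"
    instance_id: int      # instance number of the object the pixel belongs to
    view_distance: float  # distance from the viewpoint, computed in the 3D scene
    z_height: float       # z-axis height of the scene point
    speed: float          # speed scalar of the owning object
    acceleration: float   # acceleration of the owning object
    centroid: tuple       # (x, y, z) centroid of the owning object

def extract_pixel_info(pixel, defined_targets):
    """Return a PixelInfo record if the pixel belongs to a defined target, else None."""
    obj = pixel.owner_object  # hypothetical accessor into the simulation scene
    if obj is None or obj.category not in defined_targets:
        return None  # pixels outside the defined targets are treated as background
    return PixelInfo(obj.category, obj.instance_id, pixel.view_distance,
                     pixel.z_height, obj.speed, obj.acceleration, obj.centroid)
```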
The image frame is preprocessed to obtain the original scene coloring result, which is used in subsequent rendering and superimposed with the later coloring results to form the final output image; at the same time, the attributes of all objects required by subsequent rendering are extracted and passed on as the data basis for that processing.
In one embodiment, the image frame is not further rendered; only the original coloring result is produced, and it is output directly as the output image.
Step S200, acquiring a labeling type and the coloring strategy corresponding to the labeling type, and acquiring the target attribute parameters of each pixel of each target in the image frame according to the labeling type.
The matrices, vertices, normals, and attribute information of the image frame are sent to the GPU, where the matrices, vertices, and normals are used to compute the vertex-shaded image; the vertex shader and fragment shader perform the rendering and shading on the GPU.
Optionally, the vertex shader determines the color of each vertex by performing vertex-shading calculations on the viewpoint position of the image frame, the texture-map normal, and the illumination.
Optionally, the fragment shader acquires texture colors and superimposes them onto the vertex-shaded image, producing the coloring result corresponding to each attribute, where the texture colors are obtained by sampling texels based on texture coordinates.
The fragment shader fills colors into the vertex-shaded image. Since the preprocessing in step S100 provides several kinds of information, each kind is colored with its own strategy to generate a coloring result per information type. For example, in one embodiment a height-above-ground coloring result and a speed coloring result are rendered: using the target height information passed from preprocessing, targets are rendered in different colors according to their heights; using the target speed information passed from preprocessing, targets are rendered in different colors according to their speeds.
The labeling type selects among the target attribute parameters of the pixels: the attribute of the image frame to be labeled is chosen as required, and the corresponding coloring strategy is then obtained, each attribute having its own strategy. The target attribute parameters of each pixel of each target are acquired according to the attribute type to be labeled; the attribute parameters of non-target pixels may also be acquired.
In one embodiment, where the attribute to be labeled is a depth map, depth is selected as the labeling attribute, the depth coloring strategy is obtained, and every pixel in the image frame is rendered in a color determined by its depth. In this embodiment, therefore, the depth data of all pixels in the frame is acquired, a color is chosen per depth value, and the image frame is finally rendered.
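A minimal sketch of such a depth coloring pass (the min/max normalization and the grayscale ramp are illustrative choices; the patent only states that the normalized depth determines the rendered color):

```python
import numpy as np

def depth_shading(depth_map, near=None, far=None):
    """Map per-pixel view distances to a grayscale image.

    depth_map: HxW array of distances from the viewpoint.
    """
    near = float(depth_map.min()) if near is None else near
    far = float(depth_map.max()) if far is None else far
    t = np.clip((depth_map - near) / max(far - near, 1e-9), 0.0, 1.0)
    gray = (255 * (1.0 - t)).astype(np.uint8)     # nearer pixels render brighter
    return np.stack([gray, gray, gray], axis=-1)  # HxWx3 RGB image
```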
Step S300, coloring the image frame according to the target attribute parameters and the coloring strategy to obtain a coloring result.
As shown in fig. 4 (the figure is shown after grayscale conversion), the labeling types include an illumination attribute, and the coloring strategy corresponding to the illumination attribute is as follows: segment time into periods, process the image frames of each period uniformly with the period as the unit, label the regions within the illumination range, and superimpose the label data at corresponding pixels of image frames belonging to the same period to obtain an illumination coloring result;
the marking is to color the image frame, color the image in an RGB color space to obtain the marked image, and superimpose the color data on the image to obtain the data-superimposed image, i.e., the illumination coloring result, which can reflect the length of the illumination time of each region in the image frame.
The coloring results include the illumination coloring result, which is produced by processing multiple image frames from the same viewpoint.
The required coloring result is then selected and superimposed on the original scene coloring result to obtain an output image.
The various kinds of information contained in the same image frame are colored separately, yielding a coloring result for each.
Optionally, the labeling comprises labeling the position and range of illumination; segmenting time, uniformly processing the image frames within each period, labeling the regions within the illumination range, and superimposing the label data at corresponding pixels of the image frames belonging to the same period to obtain the illumination coloring result comprises: segmenting the periods with illumination in the three-dimensional simulation scene, calculating the illumination range of each image frame belonging to the same period, and rendering the regions within the illumination range in a first color; counting the number of times each pixel of each image frame belonging to the same period is rendered in the first color; calculating a thermodynamic diagram from these counts, superimposing the thermodynamic diagram back onto the first image frame of each period, and rendering it to obtain the illumination coloring result.
Before calculating the illumination range of each image frame belonging to the same period, the method comprises: storing the original scene coloring result and the first image frame of each period into a temporary queue.
The rendering process for the illumination coloring result differs from that of the other coloring results. The main difference is that the other results color a single image frame directly, whereas the illumination coloring result superimposes the illumination label data of the image frames over a continuous period to form an illumination map, which directly visualizes the illumination duration of each region in the frame.
In one embodiment, the day is segmented into two-hour periods; all image frames in each period are obtained, and the first frame of every period is stored in a temporary queue as the image carrier of the coloring result. For all frames from 00:00 to 02:00, the illumination range is calculated from the light source; regions within the illumination range are rendered white and unilluminated regions black. A thermodynamic diagram is then computed from the number of times each pixel was rendered white or black and superimposed onto the first frame of the 00:00 period to form the illumination map. The illumination maps of the other periods are obtained in the same way.
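The accumulation in this embodiment can be sketched as follows (the red-channel encoding and the 50/50 blend are illustrative assumptions; the patent specifies counting and superposition but not these exact choices):

```python
import numpy as np

def illumination_heat_map(lit_masks, base_frame):
    """Build one period's illumination map from its frames.

    lit_masks: list of HxW boolean arrays, one per frame in the period,
    where True marks pixels rendered in the first color (illuminated).
    base_frame: HxWx3 uint8 copy of the period's first frame.
    """
    counts = np.zeros(lit_masks[0].shape, dtype=np.float32)
    for mask in lit_masks:
        counts += mask                      # count how often each pixel was lit
    heat = counts / max(len(lit_masks), 1)  # fraction of the period each pixel is lit
    overlay = np.zeros_like(base_frame)
    overlay[..., 0] = (255 * heat).astype(np.uint8)  # encode duration in one channel
    return (0.5 * base_frame + 0.5 * overlay).astype(np.uint8)
```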
Optionally, the illumination maps are superimposed to form the illumination coloring result.
Continuing the above embodiment, in another embodiment the illumination maps of all periods of a day are superimposed and, as the illumination coloring result, overlaid onto the first image frame of the 00:00 period.
Optionally, as shown in fig. 5 (the figure is shown after grayscale conversion), the image frame is colored according to the target attribute parameters and the coloring strategy, and the method further produces the following coloring results (an illustrative sketch of these per-attribute color mappings follows the list below):
when the labeling type is the panoptic segmentation attribute, after acquiring texture colors with the fragment shader and superimposing them onto the vertex-shaded image, the method further comprises: assigning each pixel in the image frame a category and an instance ID according to the object it belongs to, giving each object its own color, and generating a panoptic segmentation coloring result;
when the labeling type is the depth attribute, after acquiring texture colors with the fragment shader and superimposing them onto the vertex-shaded image, the method further comprises: calculating the distance between each pixel in the image frame and the viewpoint, normalizing the distance, and generating a depth-labeled coloring result;
when the labeling type is the speed attribute, after acquiring texture colors with the fragment shader and superimposing them onto the vertex-shaded image, the method further comprises: labeling the objects of the image frame based on the speed of each target, coloring with the speed scalar as the standard, and generating a speed-labeled coloring result;
when the labeling type is the height attribute, after acquiring texture colors with the fragment shader and superimposing them onto the vertex-shaded image, the method further comprises: coloring each pixel according to the height of the corresponding point in the three-dimensional simulation scene, and generating a height-labeled coloring result.
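The per-attribute color mappings referred to above can be sketched as follows (the specific ramps, bounds, and golden-ratio hue spacing are illustrative assumptions; the patent only requires a distinct color per object, coloring by the speed scalar, and coloring by height):

```python
import colorsys

def panoptic_color(category_id, instance_id):
    """Give each (category, instance) pair its own RGB color."""
    hue = ((category_id * 101 + instance_id) * 0.61803398875) % 1.0
    r, g, b = colorsys.hsv_to_rgb(hue, 0.9, 0.95)
    return int(255 * r), int(255 * g), int(255 * b)

def speed_color(speed, max_speed=30.0):
    """Map a speed scalar to a blue-to-red ramp (max_speed is an assumed bound)."""
    t = min(max(speed / max_speed, 0.0), 1.0)
    return int(255 * t), 0, int(255 * (1.0 - t))

def height_color(height, max_height=50.0):
    """Map scene height to a green ramp (max_height is an assumed bound)."""
    t = min(max(height / max_height, 0.0), 1.0)
    return 0, int(255 * t), 0
```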
In one embodiment, when the labeling type is the panoptic segmentation, depth, speed, or height attribute, one image frame is acquired, preprocessed, and stored in a temporary queue; the frame is then fetched from the queue according to the labeling type and its coloring strategy, and colored accordingly.
As shown in fig. 5, examples of a depth map, a speed map, and a height map are shown from left to right.
The corresponding target attribute parameters are obtained from step S200 to color the image frames respectively.
In one embodiment, after an image frame is extracted and preprocessed, it is colored to obtain the illumination, panoptic segmentation, depth-labeled, speed-labeled, and height-labeled coloring results, which are stored in a temporary queue to be superimposed with the original coloring result and then output.
Optionally, a coloring result is not output directly; it is superimposed with the original coloring result, and the result after generating the final image pixel colors is the output.
Optionally, as shown in fig. 2, step S300 includes:
step S301, the vertex shader is used for shading based on the shading strategy, the image frame is shaded, and a vertex shaded image is obtained.
The vertex shader describes the operations executed on vertices, such as coordinate transformation, computing each vertex color through the illumination formula, and computing texture coordinates; it can speed up scene rendering. It performs vertex-shading calculations from the viewpoint position of the image frame, the texture-map normal, and the illumination, and shades the vertices of the frame to obtain a vertex-shaded image.
In one embodiment, the vertex shader performs vertex shading on the image frame to obtain a vertex-shaded image, which is sent to the fragment shader; it also computes texture coordinates and provides them to the fragment shader for further rendering and shading.
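A CPU-side sketch of this two-stage pass (real implementations run as GPU vertex and fragment shaders; the matrix layout, the Lambert lighting term, and the nearest-texel sampling are illustrative assumptions):

```python
import numpy as np

def vertex_shade(vertices, normals, mvp, light_dir):
    """Vertex stage: transform positions and compute a per-vertex light term.

    vertices: Nx3, normals: Nx3, mvp: 4x4 model-view-projection matrix,
    light_dir: unit 3-vector pointing toward the light.
    """
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    clip = homo @ mvp.T                               # coordinate transformation
    lambert = np.clip(normals @ light_dir, 0.0, 1.0)  # diffuse lighting term
    return clip, lambert

def fragment_shade(lambert, texture, uv):
    """Fragment stage: sample texels at uv and modulate by the light term.

    lambert stands in for the per-fragment light term that the GPU would
    interpolate from the vertex outputs; uv: Nx2 texture coordinates in [0, 1].
    """
    h, w = texture.shape[:2]
    px = (np.clip(uv[:, 0], 0.0, 1.0) * (w - 1)).astype(int)
    py = (np.clip(uv[:, 1], 0.0, 1.0) * (h - 1)).astype(int)
    texel = texture[py, px].astype(np.float32)        # texture colors from texels
    return (texel * lambert[:, None]).astype(np.uint8)
```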
Depending on the current rendering mode, different attribute information is fetched from the GPU for the corresponding coloring. In one embodiment, the current rendering mode is the depth-labeling mode, so the depth value of each pixel of the target image frame, i.e., its distance from the viewpoint, is fetched from the GPU; the targets in the frame are then colored through the preset coloring strategy to obtain the depth-labeled coloring result.
Optionally, the vertex shader colors a target according to the scene information of the image frame to obtain the vertex-shaded image, where the scene information comprises the viewpoint position and the texture-map normal.
Optionally, texture colors are acquired with the fragment shader and superimposed onto the vertex-shaded image, where the texture colors are obtained by sampling texels based on texture coordinates.
The fragment shader combines the texture coordinates and colors to further fill in and color the vertex-shaded image: it first obtains the texture colors of the image frame, then fills in colors according to the current rendering mode to obtain the colored image.
Step S302, obtaining the coloring result with the fragment shader based on the vertex-shaded image, the coloring strategy, and the target attribute parameters, wherein the target attribute parameters include at least one of: a type parameter, speed parameter, acceleration parameter, depth parameter, height parameter, and color parameter of the target.
Step S400, selecting the required coloring result and superimposing it on the original scene coloring result to obtain an output image.
When a colored image is required, the processed coloring result is superimposed with the original scene coloring to obtain the output image.
In one embodiment, as shown in fig. 3, if a depth image is to be output, the processed depth coloring result is superimposed with the original scene coloring to generate the final image pixel colors.
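The superposition itself can be sketched as a simple blend (the weight alpha is an illustrative assumption; the patent specifies superposition but not a blend factor):

```python
import numpy as np

def compose_output(original_render, shading_result, alpha=0.6):
    """Superimpose a coloring result onto the original scene coloring result."""
    out = ((1.0 - alpha) * original_render.astype(np.float32)
           + alpha * shading_result.astype(np.float32))
    return np.clip(out, 0, 255).astype(np.uint8)
```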
Optionally, the output image comprises structured labeling data, including, for each pixel of the image frame, the category, instance ID, distance from the viewpoint, height above the ground, and illumination time, and, for each target, the speed, acceleration, position, and direction angle.
The structured labeling data covers all labeling types of the method, so that every data type in the image frame can be obtained comprehensively to support the training of machine learning algorithms.
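An illustrative layout for such structured labeling data (the schema and key names are assumptions; the fields themselves are those listed above):

```python
# Hypothetical JSON-style record carrying the structured labeling data.
structured_annotation = {
    "pixels": [
        {"x": 120, "y": 64, "category": "car", "instance_id": 7,
         "view_distance": 35.2, "height_above_ground": 0.8,
         "illumination_time": 6.5},
    ],
    "targets": [
        {"instance_id": 7, "speed": 12.4, "acceleration": -0.6,
         "position": [103.2, 48.9, 0.0], "direction_angle": 87.5},
    ],
}
```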
As shown in fig. 6, fig. 6 is an output image labeled with panoptic segmentation (shown after grayscale conversion); it is the final output image, obtained by superimposing the original coloring result onto the image frame that was panoptic-segmented and labeled to produce the panoptic segmentation coloring result.
The invention also provides a data labeling device based on the three-dimensional simulation scene, which comprises the following components:
a preprocessing module, configured to acquire a three-dimensional simulation scene, extract an image frame, and preprocess the image frame to obtain an original scene coloring result;
an input module, configured to acquire a labeling type and the coloring strategy corresponding to the labeling type, and to acquire the target attribute parameters of each pixel of each target in the image frame according to the labeling type;
a coloring module, configured to color the image frame according to the target attribute parameters and the coloring strategy to obtain a coloring result, wherein the labeling types include an illumination attribute whose coloring strategy is: segment time into periods, process the image frames of each period with the period as the unit, label the regions within the illumination range, and superimpose the label data at corresponding pixels of image frames belonging to the same period to obtain an illumination coloring result;
and an output module, configured to select the required coloring result and superimpose it on the original scene coloring result to obtain an output image.
The advantages of the data labeling device based on a three-dimensional simulation scene over the prior art are the same as those of the data labeling method described above, and are not repeated here.
The invention further provides a computer-readable storage medium storing a computer program which, when read and executed by a processor, implements the data annotation method based on a three-dimensional simulation scene described above.
The advantages of the computer-readable storage medium over the prior art are the same as those of the data labeling method described above, and are not repeated here.
Although the present disclosure has been described above, the scope of the present disclosure is not limited thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the spirit and scope of the present disclosure, and these changes and modifications are intended to be within the scope of the present disclosure.

Claims (10)

1. A data annotation method based on a three-dimensional simulation scene is characterized by comprising the following steps:
acquiring a three-dimensional simulation scene, extracting an image frame, and preprocessing the image frame to obtain an original scene coloring result;
acquiring a label type and a coloring strategy corresponding to the label type, and acquiring a target attribute parameter of each pixel of a target in the image frame according to the label type;
according to the target attribute parameters and the coloring strategy, coloring the image frame to obtain a coloring result; the labeling type comprises an illumination attribute, and the coloring strategy corresponding to the illumination attribute comprises the following steps: segmenting time, uniformly processing the image frames of each time period by taking the time period as a unit, labeling areas in an illumination range, and overlapping labeled data at corresponding pixels of the image frames belonging to the same time period to obtain an illumination coloring result;
and selecting a corresponding coloring result according to the requirement and overlapping the coloring result of the original scene to obtain an output image.
2. The method for labeling data based on three-dimensional simulation scene as claimed in claim 1, wherein the step of segmenting time, uniformly processing the image frames in each time period by taking the time period as a unit, labeling the area in the illumination range, and overlapping the labeled data at the corresponding pixels of the image frames belonging to the same time period to obtain the illumination coloring result comprises:
segmenting the time periods with illumination in the three-dimensional simulation scene, calculating the illumination range of each image frame belonging to the same time period, and rendering the regions within the illumination range in a first color;
counting the number of times each pixel of each image frame belonging to the same time period is rendered in the first color;
calculating a thermodynamic diagram according to the counts, superimposing the thermodynamic diagram back onto the first image frame of each time period, and rendering to obtain the illumination coloring result.
3. The method for labeling data based on three-dimensional simulation scene as claimed in claim 2, wherein said coloring said image frame according to said target attribute parameter and said coloring strategy to obtain a coloring result comprises:
coloring the image frame by using a vertex shader based on the coloring strategy to obtain a vertex-shaded image;
obtaining, using a fragment shader, the shading result based on the vertex shading image, the shading policy, and the target attribute parameters, wherein the target attribute parameters include at least one of: a type parameter, a velocity parameter, an acceleration parameter, a depth parameter, a height parameter, and a color parameter of the target.
4. The method for annotating data based on a three-dimensional simulation scene according to claim 3, wherein said rendering said image frames using a vertex shader based on said rendering strategy, said obtaining vertex rendered images comprising:
and coloring a target according to the scene information of the image frame by using the vertex shader to obtain a vertex-shaded image, wherein the scene information comprises a viewpoint position and a texture-map normal.
5. The method of claim 4, wherein the obtaining the rendering result based on the vertex rendered image, the rendering policy, and the target attribute parameter using a fragment shader comprises:
and acquiring texture colors by using the fragment shader, and superimposing the texture colors onto the vertex-shaded image, wherein the texture colors are obtained by sampling texels based on texture coordinates.
6. The method for labeling data based on a three-dimensional simulation scene according to claim 5,
when the annotation type is a panoptic segmentation attribute, obtaining a texture color by using the fragment shader, and after the texture color is superimposed on the vertex-shaded image, the method further includes:
assigning each pixel in the image frame a category and an instance ID according to the object it belongs to, giving each object its own color, and generating a panoptic segmentation coloring result;
when the annotation type is a depth attribute, obtaining a texture color by using the fragment shader, and after the texture color is superimposed on the vertex shading image, the method further includes:
calculating the distance between a pixel point in the image frame and a viewpoint, normalizing the distance and generating a depth marking coloring result;
when the annotation type is a velocity attribute, obtaining a texture color by using the fragment shader, and after the texture color is superimposed on the vertex shading image, the method further includes:
marking an object of the image frame based on the speed of the target in the image frame, coloring by taking a speed scalar as a standard, and generating a speed marking coloring result;
when the annotation type is a height attribute, obtaining a texture color by using the fragment shader, and after the texture color is superimposed on the vertex shading image, the method further includes:
and coloring the height of the three-dimensional simulation scene corresponding to the pixel points in the image frame to generate a height labeling coloring result.
7. The method of claim 6, wherein the outputting the image comprises:
and structured labeling data comprising, for each pixel of the image frame, the category, the instance ID, the distance from a viewpoint, the height above the ground, and the illumination time, and, for each target, the speed, the acceleration, the position, and the direction angle.
8. The method for labeling data based on three-dimensional simulation scene as claimed in claim 2, wherein before calculating the illumination range of each image frame belonging to the same time period, further comprising:
and storing the original scene coloring result and the first image frame of each time period into a temporary queue.
9. A data labeling device based on a three-dimensional simulation scene is characterized by comprising:
the system comprises a preprocessing module, a data processing module and a data processing module, wherein the preprocessing module is used for acquiring a three-dimensional simulation scene, extracting an image frame, and preprocessing the image frame to obtain an original scene coloring result;
the input module is used for acquiring a label type and a coloring strategy corresponding to the label type and acquiring a target attribute parameter of each pixel of a target in the image frame according to the label type;
the coloring module is used for coloring the image frame according to the target attribute parameters and the coloring strategy to obtain a coloring result; the labeling type comprises an illumination attribute, and the coloring strategy corresponding to the illumination attribute comprises the following steps: segmenting time, processing the image frame of each time period by taking the time period as a unit, labeling an area in an illumination range, and overlapping labeled data at corresponding pixels of the image frames belonging to the same time period to obtain an illumination coloring result;
and the output module is used for selecting the corresponding coloring result according to the requirement and overlapping the coloring result with the original scene coloring result to obtain an output image.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, which when read and executed by a processor, implements the method for data annotation based on three-dimensional simulation scenarios according to any one of claims 1 to 8.
CN202110442147.6A — Priority date: 2021-04-23 — Filing date: 2021-04-23 — Status: Pending — Title: Data labeling method and device based on three-dimensional simulation scene and storage medium

Priority Applications (1)

Application Number: CN202110442147.6A — Priority date: 2021-04-23 — Filing date: 2021-04-23 — Title: Data labeling method and device based on three-dimensional simulation scene and storage medium

Applications Claiming Priority (1)

Application Number: CN202110442147.6A — Priority date: 2021-04-23 — Filing date: 2021-04-23 — Title: Data labeling method and device based on three-dimensional simulation scene and storage medium

Publications (1)

Publication Number: CN113223146A — Publication Date: 2021-08-06

Family

Family ID: 77089026

Family Applications (1)

Application Number: CN202110442147.6A — Title: Data labeling method and device based on three-dimensional simulation scene and storage medium — Status: Pending — Publication: CN113223146A

Country Status (1)

Country: CN — Publication: CN113223146A (en)

Cited By (1)

* Cited by examiner, † Cited by third party

CN113920207A * — Priority date: 2021-09-28 — Publication date: 2022-01-11 — Assignee: 海南电网有限责任公司澄迈供电局 — Title: Power failure plan analysis system

Citations (5)

* Cited by examiner, † Cited by third party

US20070200864A1 * — Priority date: 2006-02-28 — Publication date: 2007-08-30 — Tucker Amy R — Method and system for gathering per-frame image statistics while preserving resolution and runtime performance in a real-time visual simulation
CN107154063A * — Priority date: 2017-04-19 — Publication date: 2017-09-12 — 腾讯科技(深圳)有限公司 — The shape method to set up and device in image shows region
CN109086798A * — Priority date: 2018-07-03 — Publication date: 2018-12-25 — 迈吉客科技(北京)有限公司 — A kind of data mask method and annotation equipment
CN112258610A * — Priority date: 2020-10-10 — Publication date: 2021-01-22 — 北京五一视界数字孪生科技股份有限公司 — Image labeling method and device, storage medium and electronic equipment
CN112381918A * — Priority date: 2020-12-03 — Publication date: 2021-02-19 — 腾讯科技(深圳)有限公司 — Image rendering method and device, computer equipment and storage medium

Non-Patent Citations (1)

王晨昊: "基于几何映射的遥感成像光照仿真方法" [Illumination simulation method for remote sensing imaging based on geometric mapping], 系统仿真学报 (Journal of System Simulation), no. 03, 8 March 2015 *


Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination