CN113496238A - Model training method, point cloud data stylization method, device, equipment and medium - Google Patents

Info

Publication number
CN113496238A
Authority
CN
China
Prior art keywords
point cloud
data
stylized
sample
cloud data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010203528.4A
Other languages
Chinese (zh)
Inventor
李艳丽
赫桂望
蔡金华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Three Hundred And Sixty Degree E Commerce Co ltd
Original Assignee
Beijing Jingdong Three Hundred And Sixty Degree E Commerce Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Three Hundred And Sixty Degree E Commerce Co ltd filed Critical Beijing Jingdong Three Hundred And Sixty Degree E Commerce Co ltd
Priority to CN202010203528.4A priority Critical patent/CN113496238A/en
Publication of CN113496238A publication Critical patent/CN113496238A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

Embodiments of the invention disclose a model training method, a point cloud data stylization method, a device, equipment and a medium, wherein the model training method comprises the following steps: acquiring original point cloud data, and performing semantic analysis on the original point cloud data to obtain sample point cloud data containing semantic information of each point in the original point cloud data; acquiring sample style data corresponding to the sample point cloud data, and generating training sample data based on the sample point cloud data and the sample style data; and training a pre-constructed point cloud stylized model by using the training sample data to obtain the trained point cloud stylized model. With the model training method provided by the embodiments of the invention, the model can segment and fuse the sample point cloud data when the point cloud stylized model is trained, so that the trained stylized model improves the stylization effect for complex street view scenes.

Description

Model training method, point cloud data stylization method, device, equipment and medium
Technical Field
Embodiments of the invention relate to the field of image processing, and in particular to a model training method, a point cloud data stylization method, a device, equipment and a medium.
Background
With the development of automatic driving, city simulation, three-dimensional printing and virtual reality technologies, the editing and application of point cloud data become more and more important. The point cloud data stylization is a main research direction for editing point cloud data, and is mainly applied to style change of coordinate positions and/or reflection intensity and color attributes of input point clouds, for example, conversion of summer street view point clouds into winter street view point clouds, and can be applied to the fields of game manufacturing, city simulation, virtual reality and the like.
Currently, the main point cloud stylization method is Neural Style Transfer for Point Cloud (NST). The method takes an original point cloud and a target point cloud as input, and transfers the style of the target point cloud onto the original point cloud to complete the point cloud stylization. Specifically, the NST objective function includes two terms: 1) the stylized point cloud is expected to be consistent with the original point cloud in terms of content features; and 2) the stylized point cloud is expected to be consistent with the target point cloud in terms of style features. That is, content features are extracted from the original point cloud and style features from the target point cloud, and the two are combined to obtain the stylized point cloud data.
In the process of implementing the invention, the inventors found that the prior art has at least the following technical problem: because a street view scene is very complex and comprises different point cloud entities (such as trees, vehicles, human bodies and buildings) whose styles all differ, a single global style transfer has difficulty producing a good effect.
Disclosure of Invention
Embodiments of the invention provide a model training method, a point cloud data stylization method, a device, equipment and a medium, so as to improve the stylization effect for complex street view scenes.
In a first aspect, an embodiment of the present invention provides a point cloud stylized model training method, including:
acquiring original point cloud data, and performing semantic analysis on the original point cloud data to obtain sample point cloud data containing semantic information of each point in the original point cloud data;
acquiring sample style data corresponding to the sample point cloud data, and generating training sample data based on the sample point cloud data and the sample style data;
and training the pre-constructed point cloud stylized model by using the training sample data to obtain the trained point cloud stylized model.
In a second aspect, an embodiment of the present invention further provides a point cloud data stylizing method, including:
acquiring point cloud data to be stylized, and performing semantic analysis on the point cloud data to be stylized to obtain marked point cloud data containing semantic information of each point in the point cloud data to be stylized;
obtaining style point cloud data corresponding to the marked point cloud data, inputting the marked point cloud data and the style point cloud data into a trained point cloud stylized model, and obtaining target point cloud data output by the point cloud stylized model, wherein the trained point cloud stylized model is obtained by training with the point cloud stylized model training method provided by any embodiment of the invention;
and generating a target scene picture corresponding to the point cloud data to be stylized according to the target point cloud data, and displaying the target scene picture.
In a third aspect, an embodiment of the present invention further provides a point cloud stylized model training apparatus, including:
the sample data acquisition module is used for acquiring original point cloud data, and performing semantic analysis on the original point cloud data to obtain sample point cloud data containing semantic information of each point in the original point cloud data;
the training data generation module is used for acquiring sample style data corresponding to the sample point cloud data and generating training sample data based on the sample point cloud data and the sample style data;
and the stylized model training module is used for training the point cloud stylized model which is constructed in advance by using the training sample data to obtain the trained point cloud stylized model.
In a fourth aspect, an embodiment of the present invention further provides a point cloud data stylizing apparatus, including:
the marked point cloud acquisition module is used for acquiring point cloud data to be stylized, and performing semantic analysis on the point cloud data to be stylized to obtain marked point cloud data containing semantic information of each point in the point cloud data to be stylized;
the target point cloud obtaining module is used for obtaining style point cloud data corresponding to the marked point cloud data, inputting the marked point cloud data and the style point cloud data into a trained point cloud stylized model, and obtaining target point cloud data output by the point cloud stylized model, wherein the trained point cloud stylized model is obtained by training with the point cloud stylized model training method provided by any embodiment of the invention;
and the target scene display module is used for generating a target scene picture corresponding to the point cloud data to be stylized according to the target point cloud data and displaying the target scene picture.
In a fifth aspect, an embodiment of the present invention further provides a computer device, where the computer device includes:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the point cloud stylized model training method provided by any embodiment of the present invention, and/or the point cloud data stylization method provided by any embodiment of the invention.
In a sixth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the point cloud stylization model training method provided in any embodiment of the present invention; and/or implement a point cloud data stylization method as provided by any embodiment of the invention.
According to the embodiments of the invention, original point cloud data is obtained and subjected to semantic analysis to obtain sample point cloud data containing semantic information of each point in the original point cloud data; sample style data corresponding to the sample point cloud data is obtained, and training sample data is generated based on the sample point cloud data and the sample style data; and a pre-constructed point cloud stylized model is trained with the training sample data to obtain the trained point cloud stylized model. In this way, when the point cloud stylized model is trained, the model can segment and fuse the sample point cloud data, so that the trained stylized model improves the stylization effect for complex street view scenes.
Drawings
Fig. 1a is a flowchart of a point cloud stylized model training method according to an embodiment of the present invention;
FIG. 1b is a schematic structural diagram of a point cloud stylization model according to an embodiment of the present invention;
FIG. 1c is a schematic structural diagram of another point cloud stylization model provided in accordance with an embodiment of the present invention;
FIG. 2 is a flowchart of a point cloud data stylizing method according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a point cloud stylized model training apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a point cloud data stylizing apparatus according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of a computer device according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1a is a flowchart of a point cloud stylized model training method according to an embodiment of the present invention. The embodiment can be applied to the situation when the point cloud stylized model is trained. The method may be performed by a point cloud stylized model training apparatus, which may be implemented in software and/or hardware, for example, which may be configured in a computer device. As shown in fig. 1a, the method comprises:
s110, original point cloud data are obtained, semantic analysis is carried out on the original point cloud data, and sample point cloud data containing semantic information of each point in the original point cloud data are obtained.
In this embodiment, the original point cloud data may be point cloud data before stylization, for example, point cloud data of a street view scene. The manner of acquiring the original point cloud data is not limited here. Optionally, the original point cloud data may be obtained by extracting sampling points from a virtual scene, by scanning a scene with a single laser radar (lidar) device, or by scanning a scene with a laser radar together with a camera.
In order to improve the stylizing effect on the point cloud data of the complex street view, in this embodiment, the point cloud data of each instance may be extracted from the original point cloud data, an individual stylizing operation is performed on each instance, and then the stylized point cloud data of each instance are fused to obtain the stylized overall point cloud data. The extraction of the point cloud data of each instance from the original point cloud data requires extraction according to semantic information of each point, and therefore semantic analysis needs to be performed on the original point cloud data after the original point cloud data is acquired.
Optionally, performing semantic analysis on the original point cloud data may consist of semantic identification and semantic numbering of the points in the point cloud. The semantic identifier marks the instance type of a point (e.g., whether the point belongs to a tree, a vehicle, a human body, or the ground), and the semantic number marks which individual instance the point belongs to (e.g., which tree, which vehicle, or which human body). On this basis, when street view point clouds are used as the original point cloud data, they include points belonging to the background scene (such as the ground or large buildings) and points belonging to the foreground (such as pedestrians, vehicles, trees, street lamps and traffic lights), and the two kinds can be marked differently. Illustratively, the background scene receives only a semantic identifier, while foreground objects receive both a semantic identifier and a semantic number. In this embodiment, when the original point cloud data is obtained in different manners, the manner of performing semantic analysis on it also differs.
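The identifier/number scheme above can be sketched as a per-point label pair. This is a minimal illustration with hypothetical class codes; the patent does not prescribe a concrete encoding:

```python
import numpy as np

# Hypothetical class codes: 0 = ground, 1 = building (background),
# 2 = tree, 3 = vehicle, 4 = pedestrian (foreground).
BACKGROUND_CLASSES = {0, 1}

def label_points(semantic_ids, instance_ids):
    """Attach (semantic_id, instance_id) labels to points; background
    points keep instance_id = -1 because they are only semantically
    identified, while foreground points are also numbered."""
    semantic_ids = np.asarray(semantic_ids)
    instance_ids = np.asarray(instance_ids).copy()
    is_background = np.isin(semantic_ids, list(BACKGROUND_CLASSES))
    instance_ids[is_background] = -1  # background: identifier only
    return np.stack([semantic_ids, instance_ids], axis=1)

# a ground point, two separate trees, and one vehicle
labels = label_points([0, 2, 2, 3], [0, 0, 1, 0])
```

Here the two trees keep distinct instance numbers 0 and 1, while the ground point is masked to -1.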
When the original point cloud data is obtained by extracting sampling points from a virtual scene, the semantic information of each point can be obtained directly from the data of the virtual scene. It can be understood that constructing a virtual scene is a process of assembling different semantic entities, so the virtual scene already contains semantic identifiers and semantic numbers; these can be assigned directly to the sampling points.
When the raw point cloud data is acquired by a single lidar device scanning a scene, the attributes of each point include coordinates (X, Y, Z) and reflection intensity. In this case, a point cloud scene parsing method (such as PointNet, CVPR 2017) can be used to parse the scene and obtain the semantic identifier of each point, and then a point cloud instance segmentation method (such as R-PointNet, CVPR 2019) can be used to assign semantic numbers to the foreground points, completing the semantic analysis of the original point cloud data.
When the raw point cloud data is acquired by a laser radar and a camera scanning a scene, the attributes of each point include coordinates (X, Y, Z), reflection intensity and color (R, G, B). In this case, the video frames may be subjected to scene parsing and instance segmentation, and the semantic identifier and semantic number of each image pixel are then transferred to the corresponding point cloud point. Taking a panoramic camera as an example: first, time synchronization and spatial calibration are performed between the panoramic camera and the laser radar to obtain the projection point of each laser point in a given panoramic frame; second, scene parsing is performed on each panoramic frame using an existing image scene parsing method (such as RefineNet, CVPR 2017); then, instance segmentation is performed using a video instance segmentation method (such as Video Instance Segmentation, Arxiv 2019), which detects, segments and tracks objects across video frames, finally yielding the semantic identifier and semantic number of each object in the panoramic image; and finally, the semantic identifier and semantic number of each laser point are obtained from the correspondence between laser points and panoramic projection points. If an ordinary (non-panoramic) camera is used, some points have no corresponding image projection point; for those points, a nearest-neighbor algorithm assigns them the semantic identifier and semantic number of the nearest point that does have an image projection point.
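The nearest-neighbor fallback for points without an image projection can be sketched as follows. This is a brute-force illustration; a real implementation would typically use a spatial index such as a k-d tree:

```python
import numpy as np

def transfer_labels(points_xyz, has_projection, labels):
    """For each point without an image projection point, copy the label
    of the geometrically nearest point that does have one."""
    points_xyz = np.asarray(points_xyz, dtype=float)
    labels = np.asarray(labels).copy()
    src = np.flatnonzero(has_projection)          # labeled via projection
    dst = np.flatnonzero(~np.asarray(has_projection))  # need a label
    for i in dst:
        d = np.linalg.norm(points_xyz[src] - points_xyz[i], axis=1)
        labels[i] = labels[src[np.argmin(d)]]     # nearest labeled point
    return labels

out = transfer_labels([[0, 0, 0], [0, 0, 1], [10, 10, 10]],
                      [True, False, True], [5, 0, 7])
```

The middle point has no projection, so it inherits the label of its nearest neighbor at the origin.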
And S120, obtaining sample style data corresponding to the sample point cloud data, and generating training sample data based on the sample point cloud data and the sample style data.
In this embodiment, after the sample point cloud data is obtained, sample style data corresponding to the sample point cloud data is obtained, and training sample data is generated according to a large amount of sample point cloud data and the sample style data corresponding to the sample point cloud data. The sample style data is point cloud data which corresponds to each instance in the sample point cloud data and is consistent with the stylized target style, and the sample style data is a complete cluster of point clouds of real objects.
Optionally, the sample style data may be selected from a database by a user, in which case the point cloud stylized model training apparatus directly obtains the user-selected sample style data. Alternatively, the sample style data may be screened automatically from the database according to the semantic information in the sample point cloud data. In one embodiment, obtaining sample style data corresponding to the sample point cloud data comprises: obtaining the sample style data according to the semantic information of each point of the sample point cloud data and the target style. Specifically, the sample point cloud data comprises point cloud data of a plurality of instances and the semantic identifier of each instance; for each instance, point cloud data whose identifier matches the instance's semantic identifier and whose style matches the target style is obtained from the database and used as the sample style data corresponding to that instance.
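The automatic screening step might look like the following lookup; the database layout, class codes and style names are hypothetical stand-ins, and each value represents a complete point cloud cluster of a real object:

```python
# Hypothetical style database keyed by (semantic_id, style_name).
# Class codes assumed: 2 = tree, 3 = vehicle.
STYLE_DB = {
    (2, "winter"): "winter_tree_cloud",
    (3, "winter"): "winter_vehicle_cloud",
    (2, "summer"): "summer_tree_cloud",
}

def sample_style_for_instances(instance_semantic_ids, target_style):
    """Return one style cluster per instance whose semantic identifier
    matches the instance and whose style matches the target style."""
    return [STYLE_DB[(sid, target_style)] for sid in instance_semantic_ids]

styles = sample_style_for_instances([2, 3], "winter")
```

For a scene containing a tree and a vehicle with target style "winter", this yields the winter tree and winter vehicle clusters in instance order.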
S130, training the pre-constructed point cloud stylized model by using the training sample data to obtain the trained point cloud stylized model.
And after sample training data are obtained, training the pre-constructed point cloud stylized model by using the training sample data to obtain the trained point cloud stylized model.
In this embodiment, the point cloud data is segmented, an independent stylization operation is performed on each piece of segmented instance data, and the stylized instances are then fused to complete the stylization of the point cloud data. Correspondingly, a data segmentation module, a stylization module and a data fusion module need to be constructed in the point cloud stylized model. In one embodiment of the present invention, the pre-constructed point cloud stylized model includes a data segmentation module, a stylization module and a data fusion module. The data segmentation module is used for segmenting the sample point cloud data and outputting at least two pieces of sample instance data; the stylization module is used for stylizing the sample instance data according to the sample style data corresponding to that instance data and outputting the corresponding stylized data; and the data fusion module is used for fusing the stylized data output by the stylization module and outputting target fusion data.
Fig. 1b is a schematic structural diagram of a point cloud stylization model according to an embodiment of the present invention. As shown in fig. 1b, the point cloud stylization model includes a data segmentation module 10, stylization modules 20, and a data fusion module 30. Each stylization module 20 may be a Neural Style Transfer for Point Cloud (NST) network (Arxiv, 2019), and there may be a plurality of them. The input of the data segmentation module 10 is point cloud data, and its output is instance data; the input of each stylization module 20 is one piece of instance data output by the data segmentation module 10 together with the style data corresponding to that instance data, and its output is the stylized data for that instance; the input of the data fusion module 30 is the stylized data output by each stylization module 20, and its output is the target fusion data. In the point cloud stylization model of fig. 1b, the point cloud data is segmented by the data segmentation module to obtain the instance data of each instance, a stylization module stylizes each instance, and the data fusion module fuses the stylized data of the instances, thereby improving the stylization effect for complex street view scenes.
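The split → stylize → fuse flow of fig. 1b can be sketched as below. The `stylize` function here is only a placeholder stand-in (a crude centroid shift toward the style cluster), not the actual NST network:

```python
import numpy as np

def split(points, instance_ids):
    """Data segmentation module: one point array per instance id."""
    return {i: points[instance_ids == i] for i in np.unique(instance_ids)}

def stylize(instance_points, style_points):
    """Placeholder for one NST stylization module: shifts the instance
    halfway toward the style cluster's centroid (illustration only)."""
    shift = style_points.mean(axis=0) - instance_points.mean(axis=0)
    return instance_points + 0.5 * shift

def fuse(stylized_parts):
    """Data fusion module: merge the stylized instances back together."""
    return np.concatenate(list(stylized_parts.values()), axis=0)

pts = np.array([[0., 0., 0.], [1., 1., 1.], [2., 2., 2.], [3., 3., 3.]])
ids = np.array([0, 0, 1, 1])
style = np.array([[10., 10., 10.]])
parts = split(pts, ids)
fused = fuse({i: stylize(p, style) for i, p in parts.items()})
```

The fused output contains the same number of points as the input, one stylized copy per instance point.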
Fig. 1c is a schematic structural diagram of another point cloud stylization model according to an embodiment of the present invention. The stylization of the point cloud data of a complex scene is schematically shown in fig. 1 c. As shown in fig. 1c, the solid boxes represent data layers and the dashed boxes represent network layers. The point cloud stylization model comprises a data segmentation module 10, a plurality of stylization modules 20 and a data fusion module 30; the stylization module 20 includes an NST network 210 and a first feature extraction layer 220; the data fusion module 30 includes a data merging layer 310 and a second feature extraction layer 320. Wherein the first feature extraction layer and the second feature extraction layer may be constructed based on a multi-layer perceptron (MLP). Optionally, the first feature extraction layer may be a two-layer MLP network, and the second feature extraction layer may be a three-layer MLP network.
Taking the stylization of street view point cloud data as an example, the street view point cloud data (N x 6) is first subjected to semantic analysis to distinguish the background point cloud from the different instance point clouds, and the data segmentation module (Split) 10 divides it into the background point cloud (NB x 6) and the instance point clouds (N1 x 6, N2 x 6, N3 x 6, ..., Nj x 6). Then, for each instance point cloud and the background point cloud, feature data is extracted independently through the NST network 210 and the first feature extraction layer 220 in the stylization module 20. Next, the feature data extracted from each instance point cloud and the background point cloud are merged by the data merging layer 310 in the data fusion module 30, and the merged data is passed through the second feature extraction layer 320 to extract the final target fusion data. In the above process, the first feature extraction layer 220 may be a two-layer MLP network, MLP(1024, 512), extracting 512-dimensional feature data, and the second feature extraction layer 320 may be a three-layer MLP network, MLP(256, 64, 16), extracting 16-dimensional feature data.
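The feature dimensions above (6 → 1024 → 512 per branch, then concatenation, then 512 → 256 → 64 → 16) can be checked with a random-weight sketch. This is shape-checking only; the weights are untrained and the ReLU/initialization choices are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(dim_in, dims):
    """Per-point multi-layer perceptron with random weights and ReLU."""
    weights, d = [], dim_in
    for d_out in dims:
        weights.append(rng.normal(size=(d, d_out)) * 0.01)
        d = d_out
    def forward(x):
        for w in weights:
            x = np.maximum(x @ w, 0.0)  # ReLU per layer
        return x
    return forward

first = mlp(6, (1024, 512))       # first feature extraction layer -> 512-d
second = mlp(512, (256, 64, 16))  # second feature extraction layer -> 16-d

background = rng.normal(size=(30, 6))  # NB x 6
instance1 = rng.normal(size=(20, 6))   # N1 x 6
merged = np.concatenate([first(background), first(instance1)], axis=0)
final = second(merged)                 # (NB + N1) x 16
```

Each branch is processed independently by the first layer, the merging layer concatenates along the point axis, and the second layer reduces to the 16-dimensional target fusion features.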
On the basis of the scheme, training a pre-constructed point cloud stylized model by using training sample data to obtain a trained point cloud stylized model, and the method comprises the following steps: acquiring stylized data output by a stylized module and target fusion data output by a data fusion module; and determining a target loss value based on the stylized data and the target fusion data, and training the point cloud stylized model by taking the target loss value reaching a convergence condition as a target to obtain the trained point cloud stylized model.
In this embodiment, when training the point cloud stylized model by using training sample data, the stylized data of each instance data and the target fusion data obtained by fusing the stylized data of each instance are obtained through the above processes, then the target loss value is calculated by combining the instance data of each sample, the style data of the sample, the stylized data and the target fusion data, and when the target loss value satisfies the convergence condition, the trained point cloud stylized model is obtained. Optionally, the target loss value meeting the convergence condition may be that a difference between two adjacent target loss values is smaller than a set threshold, or the number of iterations reaches a set target number of iterations.
In one embodiment of the present invention, determining a target loss value based on the stylized data and the target fused data comprises: determining a local stylized loss value according to the stylized data, sample instance data corresponding to the stylized data and sample style data corresponding to the stylized data; determining an overall stylized loss value according to the target fusion data; and obtaining a target loss value according to the local stylized loss value and the overall stylized loss value.
Considering that the geometric positions of the stylized data obtained by stylizing the instances separately may be misaligned, overlaps may exist between adjacent stylized point clouds in the target fusion data generated by fusing the stylized data. In this embodiment, the overall stylized loss value is therefore used as one of the calculation parameters of the target loss value, so as to avoid overlaps between adjacent stylized point clouds in the fused point cloud data.
Optionally, the local stylized loss value and the overall stylized loss value may both be used as target loss values, and the point cloud stylized model is trained until both reach the convergence condition; alternatively, a single target loss value is calculated from the local stylized loss value and the overall stylized loss value, and the model is trained until that target loss value reaches the convergence condition. For example, the sum of the local stylized loss value and the overall stylized loss value may be used as the target loss value, or different weights may be set for the two according to actual requirements and their weighted sum used as the target loss value.
Illustratively, the target loss value may be calculated by

Loss(P) = λ · Σ_{i ∈ {1, ..., j}} Loss_local(P_i, C_i, S_i) / j + (1 − λ) · Loss_global(P)

where C_i represents instance point cloud i, P_i represents the stylized point cloud corresponding to instance point cloud i, S_i represents the style point cloud corresponding to instance point cloud i, Loss(P) is the target loss value, Loss_local is the local stylized loss function (each instance point cloud is expected to be independently consistent in content and style), Loss_global is the overall stylized loss function (no overlap is expected between stylized point cloud entities), and λ ∈ [0, 1] is a weight that can be set according to actual requirements. To ensure the additivity of the local stylized loss value and the overall stylized loss value, both are probability-normalized.
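Assuming the per-instance local loss values and the overall loss value have already been computed, the weighted combination might be implemented as:

```python
def target_loss(local_losses, global_loss, lam):
    """Loss(P) = lam * sum_i Loss_local_i / j + (1 - lam) * Loss_global."""
    j = len(local_losses)  # number of instance point clouds
    return lam * sum(local_losses) / j + (1.0 - lam) * global_loss

# two instances with local losses 0.2 and 0.4, overall loss 0.6, lam = 0.5
value = target_loss([0.2, 0.4], 0.6, 0.5)
```

With λ = 0.5 this averages the mean local loss (0.3) with the overall loss (0.6), giving 0.45.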
Optionally, the local stylized loss value for instance point cloud i may be calculated by the following formula:

Loss_local(P_i, C_i, S_i) = w · Loss_local_content(P_i, C_i, S_i) + (1 − w) · Loss_local_style(P_i, C_i, S_i)

where C_i represents instance point cloud i, P_i represents the stylized point cloud corresponding to instance point cloud i, S_i represents the style point cloud corresponding to instance point cloud i, Loss_local(P_i, C_i, S_i) is the local stylized loss function of instance point cloud i, Loss_local_content(P_i, C_i, S_i) is its local content loss function, Loss_local_style(P_i, C_i, S_i) is its local style loss function, and w ∈ [0, 1] is a weight that can be set according to actual requirements.
The local content loss value of instance point cloud i can be calculated by

Loss_local_content(P_i, C_i, S_i) = exp(−Σ_l ‖F_l(P_i) − F_l(C_i)‖² / β₁)

and the local style loss value by

Loss_local_style(P_i, C_i, S_i) = exp(−Σ_l ‖G(F_l(P_i)) − G(F_l(S_i))‖² / β₂)

where F_l denotes the features extracted at network layer l, G is the Gram matrix, and β₁ and β₂ are internal parameters that can be set empirically.
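A sketch of the two local loss terms, assuming each layer's features are given as an N x D array; the feature extractor itself is not shown, and the Gram normalization by point count is an assumption:

```python
import numpy as np

def gram(f):
    """Gram matrix of an N x D feature map, normalized by point count."""
    return f.T @ f / f.shape[0]

def local_content_loss(feats_p, feats_c, beta1=1.0):
    """exp(-sum_l ||F_l(P_i) - F_l(C_i)||^2 / beta1) over layers l."""
    s = sum(np.sum((fp - fc) ** 2) for fp, fc in zip(feats_p, feats_c))
    return np.exp(-s / beta1)

def local_style_loss(feats_p, feats_s, beta2=1.0):
    """exp(-sum_l ||G(F_l(P_i)) - G(F_l(S_i))||^2 / beta2) over layers l."""
    s = sum(np.sum((gram(fp) - gram(fs)) ** 2)
            for fp, fs in zip(feats_p, feats_s))
    return np.exp(-s / beta2)
```

When the stylized point cloud's features exactly match the reference features, both terms reach their maximum value of 1 (the probability-normalized form noted above).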
In one embodiment of the present invention, determining the overall stylized loss value from the target fusion data comprises: determining edge data of the stylized point cloud cluster corresponding to each sample instance data according to the target fusion data; determining intersection region data of adjacent stylized point cloud clusters according to the edge data of each stylized point cloud cluster; and determining the overall stylized loss value based on each piece of intersection region data. Optionally, the intersection loss value of each pair of adjacent stylized point cloud clusters may be calculated based on their intersection region data, and the sum of the intersection loss values of all pairs of adjacent stylized point cloud clusters is taken as the overall stylized loss value.
Illustratively, the overall stylized loss value may be calculated by Loss_global(P) = ∑_{(i,j)} exp(−crossR(P_i, P_j)² / β₃), wherein crossR(P_i, P_j) represents the volume of the intersection region of the outer bounding boxes of point cloud cluster P_i (i.e., the point cloud cluster corresponding to instance point cloud i) and point cloud cluster P_j (i.e., the point cloud cluster corresponding to instance point cloud j) in the target fusion data, and β₃ is an internal parameter that may be set according to experience.
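One concrete way to realize crossR is the intersection volume of axis-aligned bounding boxes, sketched below in NumPy. The choice of axis-aligned boxes and the all-pairs loop over clusters are our assumptions; the patent only specifies an outer bounding box and adjacent clusters:

```python
import numpy as np

def aabb(points):
    # Axis-aligned bounding box of an (N, 3) point cloud cluster
    return points.min(axis=0), points.max(axis=0)

def cross_volume(pts_i, pts_j):
    # crossR(P_i, P_j): volume of the intersection of the two bounding boxes
    lo_i, hi_i = aabb(pts_i)
    lo_j, hi_j = aabb(pts_j)
    extent = np.minimum(hi_i, hi_j) - np.maximum(lo_i, lo_j)
    # Negative extents mean the boxes do not overlap along that axis
    return float(np.prod(np.clip(extent, 0.0, None)))

def global_loss(clusters, beta3=1.0):
    # Loss_global(P) = sum over cluster pairs of exp(-crossR(P_i, P_j)^2 / beta3)
    total = 0.0
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            total += np.exp(-cross_volume(clusters[i], clusters[j]) ** 2 / beta3)
    return total
```

Two disjoint clusters contribute exp(0) = 1 per pair, and growing overlap volume drives each pair's contribution toward 0.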
According to the embodiment of the invention, original point cloud data is obtained, and semantic analysis is carried out on the original point cloud data to obtain sample point cloud data containing semantic information of each point in the original point cloud data; acquiring sample style data corresponding to the sample point cloud data, and generating training sample data based on the sample point cloud data and the sample style data; training a point cloud stylized model which is constructed in advance by using training sample data to obtain the trained point cloud stylized model, so that when the point cloud stylized model is trained, the model can segment and fuse sample point cloud data, and the trained stylized model can improve the stylized effect of a complex street scene.
Example two
Fig. 2 is a flowchart of a point cloud data stylizing method according to a second embodiment of the present invention. The embodiment is applicable to the situation when stylizing the point cloud data, and is particularly applicable to the situation when stylizing the point cloud data of a complex street view. The method may be performed by a point cloud data stylizing apparatus, which may be implemented in software and/or hardware, for example, which may be configured in a computer device. As shown in fig. 2, the method includes:
S210, point cloud data to be stylized are obtained, semantic analysis is conducted on the point cloud data to be stylized, and marked point cloud data containing semantic information of all points in the point cloud data to be stylized are obtained.
In this embodiment, the point cloud data to be stylized may be acquired in various ways. Illustratively, point cloud data to be stylized can be acquired by extracting sampling points from a virtual scene, point cloud data to be stylized can be acquired by scanning a scene through a single laser radar device, and point cloud data to be stylized can also be acquired by scanning a scene through a laser radar and a camera.
Optionally, the method for performing semantic analysis on the point cloud data to be stylized is determined according to the acquisition method of the point cloud data to be stylized. Specifically, the content of performing semantic analysis on the point cloud data to be stylized may refer to the content of performing semantic analysis on the original point cloud data in the above embodiment, which is not described herein again.
And S220, obtaining style point cloud data corresponding to the marked point cloud data, inputting the marked point cloud data and the style point cloud data into the trained point cloud stylized model, and obtaining target point cloud data output by the point cloud stylized model.
In this embodiment, after the marked point cloud data including the semantic information of each point in the original point cloud data is obtained, the style point cloud data corresponding to each instance data in the marked point cloud data is obtained according to the semantic information in the marked point cloud data, and the marked point cloud data and the style point cloud data are input into the trained point cloud stylized model to obtain the target point cloud data output by the point cloud stylized model. The trained point cloud stylized model is obtained by training by using the point cloud stylized model method provided by any embodiment of the invention.
And S230, generating a target scene picture corresponding to the point cloud data to be stylized according to the target point cloud data, and displaying the target scene picture.
And after the target point cloud data is obtained, generating a stylized target scene picture from the target point cloud data, and displaying the target scene picture, namely completing the stylized conversion of the scene corresponding to the point cloud data to be stylized.
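Steps S210 through S230 can be summarized in a short pipeline sketch. Every callable here is a hypothetical placeholder for a component the embodiment describes (semantic segmenter, trained stylized model, renderer), not an API defined by the patent:

```python
def stylize_scene(raw_points, style_library, model, semantic_segmenter, renderer):
    # S210: semantic analysis -> marked point cloud with per-point labels
    marked_points, labels = semantic_segmenter(raw_points)
    # S220: select style point cloud data by semantic label, run the model
    styles = {lab: style_library[lab] for lab in set(labels)}
    target_points = model(marked_points, labels, styles)
    # S230: generate and display the target scene picture
    return renderer(target_points)
```

In practice the segmenter might be a point cloud semantic segmentation network, and the renderer any point cloud visualization back end; both are outside the scope of this sketch.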
The embodiment of the invention obtains the point cloud data to be stylized, and carries out semantic analysis on the point cloud data to be stylized to obtain marked point cloud data containing semantic information of each point in the point cloud data to be stylized; obtaining style point cloud data corresponding to the marked point cloud data, inputting the marked point cloud data and the style point cloud data into a trained point cloud stylized model, and obtaining target point cloud data output by the point cloud stylized model, wherein the trained point cloud stylized model is obtained by training by using the point cloud stylized model method provided by any embodiment of the invention; and generating a target scene picture corresponding to the point cloud data to be stylized according to the target point cloud data, and displaying the target scene picture, so that the point cloud stylization model can segment and fuse the point cloud data according to the marked point cloud data containing semantic information, thereby improving the stylization effect of the complex street view scene.
EXAMPLE III
Fig. 3 is a schematic structural diagram of a point cloud stylized model training apparatus according to a third embodiment of the present invention. The point cloud stylized model training device may be implemented in software and/or hardware, for example, the point cloud stylized model training device may be configured in a computer device. As shown in fig. 3, the apparatus includes a sample data obtaining module 310, a training data generating module 320, and a stylized model training module 330, where:
the sample data acquisition module 310 is configured to acquire original point cloud data, perform semantic analysis on the original point cloud data, and obtain sample point cloud data including semantic information of each point in the original point cloud data;
the training data generation module 320 is configured to obtain sample style data corresponding to the sample point cloud data, and generate training sample data based on the sample point cloud data and the sample style data;
a stylized model training module 330, configured to train a pre-constructed point cloud stylized model using training sample data to obtain a trained point cloud stylized model.
The method comprises the steps of obtaining original point cloud data through a sample data obtaining module, and performing semantic analysis on the original point cloud data to obtain sample point cloud data containing semantic information of each point in the original point cloud data; the training data generation module acquires sample style data corresponding to the sample point cloud data and generates training sample data based on the sample point cloud data and the sample style data; the stylized model training module trains a pre-constructed point cloud stylized model by using training sample data to obtain a trained point cloud stylized model, so that when the point cloud stylized model is trained, the model can segment and fuse sample point cloud data, and the trained stylized model can improve the stylized effect of a complex street scene.
Optionally, on the basis of the above scheme, the pre-constructed point cloud stylized model includes: a data segmentation module, a stylization module and a data fusion module; the data segmentation module is used for carrying out data segmentation on the sample point cloud data and outputting at least two sample instance data; the stylization module is used for stylizing the sample instance data according to the sample style data corresponding to the sample instance data and outputting the stylized data corresponding to the sample instance data; and the data fusion module is used for fusing the stylized data corresponding to the sample instance data output by the stylization module and outputting target fusion data.
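The three-module structure can be sketched as a thin wrapper around three injected callables. The class and its interfaces are illustrative assumptions; the patent does not prescribe concrete module implementations:

```python
class PointCloudStylizer:
    """Sketch of the segmentation -> stylization -> fusion pipeline."""

    def __init__(self, segment_fn, stylize_fn, fuse_fn):
        self.segment = segment_fn   # sample point cloud -> list of >= 2 instances
        self.stylize = stylize_fn   # (instance data, style data) -> stylized data
        self.fuse = fuse_fn         # list of stylized data -> target fusion data

    def forward(self, sample_points, styles):
        instances = self.segment(sample_points)
        stylized = [self.stylize(inst, style)
                    for inst, style in zip(instances, styles)]
        # Return both outputs, since training uses the per-instance stylized
        # data (local loss) and the fused data (overall loss)
        return stylized, self.fuse(stylized)
```

Returning both the per-instance stylized data and the fused result mirrors the training procedure above, which computes a local stylized loss from the former and an overall stylized loss from the latter.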
Optionally, on the basis of the foregoing scheme, the stylized model training module 330 includes:
the result data acquisition unit is used for acquiring the stylized data output by the stylized module and the target fusion data output by the data fusion module;
and the stylized model training unit is used for determining a target loss value based on the stylized data and the target fusion data, training the point cloud stylized model by taking the target loss value reaching a convergence condition as a target, and obtaining the trained point cloud stylized model.
Optionally, on the basis of the above scheme, the stylized model training unit is specifically configured to:
determining a local stylized loss value according to the stylized data, sample instance data corresponding to the stylized data and sample style data corresponding to the stylized data;
determining an overall stylized loss value according to the target fusion data;
and obtaining a target loss value according to the local stylized loss value and the overall stylized loss value.
Optionally, on the basis of the above scheme, the stylized model training unit is specifically configured to:
determining edge data of the stylized point cloud cluster corresponding to each sample instance data according to the target fusion data;
and determining cross region data of adjacent stylized point cloud clusters according to the edge data of each stylized point cloud cluster, and determining an overall stylized loss value based on each cross region data.
Optionally, on the basis of the above scheme, the training data generating module 320 is specifically configured to:
and acquiring sample style data corresponding to the sample point cloud data according to the semantic information and the target style of each point of the sample point cloud data.
The point cloud stylized model training device provided by the embodiment of the invention can execute the point cloud stylized model training method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 4 is a schematic structural diagram of a point cloud data stylizing apparatus according to a fourth embodiment of the present invention. The point cloud data stylizing apparatus may be implemented in software and/or hardware, for example, the point cloud data stylizing apparatus may be configured in a computer device. As shown in fig. 4, the apparatus includes a marker point cloud obtaining module 410, a target point cloud obtaining module 420, and a target scene display module 430, wherein:
a marked point cloud obtaining module 410, configured to obtain point cloud data to be stylized, perform semantic analysis on the point cloud data to be stylized, and obtain marked point cloud data including semantic information of each point in the point cloud data to be stylized;
the target point cloud obtaining module 420 is configured to obtain style point cloud data corresponding to the marked point cloud data, input the marked point cloud data and the style point cloud data into a trained point cloud stylized model, and obtain target point cloud data output by the point cloud stylized model, where the trained point cloud stylized model is obtained by training using the point cloud stylized model method provided by any embodiment of the present invention;
and the target scene display module 430 is configured to generate a target scene picture corresponding to the point cloud data to be stylized according to the target point cloud data, and display the target scene picture.
The embodiment of the invention obtains point cloud data to be stylized through a marking point cloud obtaining module, and carries out semantic analysis on the point cloud data to be stylized to obtain marking point cloud data containing semantic information of each point in the point cloud data to be stylized; the target point cloud obtaining module obtains style point cloud data corresponding to the marked point cloud data, and the marked point cloud data and the style point cloud data are input into a trained point cloud stylized model to obtain target point cloud data output by the point cloud stylized model, wherein the trained point cloud stylized model is obtained by training by using the point cloud stylized model method provided by any embodiment of the invention; the target scene display module generates a target scene picture corresponding to point cloud data to be stylized according to the target point cloud data and displays the target scene picture, so that the point cloud stylization model can segment and fuse the point cloud data according to marked point cloud data containing semantic information, and the stylization effect of the complex street scene is improved.
The point cloud data stylizing device provided by the embodiment of the invention can execute the point cloud data stylizing method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
EXAMPLE five
Fig. 5 is a schematic structural diagram of a computer device according to a fifth embodiment of the present invention. FIG. 5 illustrates a block diagram of an exemplary computer device 512 suitable for use in implementing embodiments of the present invention. The computer device 512 shown in FIG. 5 is only an example and should not bring any limitations to the functionality or scope of use of embodiments of the present invention.
As shown in FIG. 5, computer device 512 is in the form of a general purpose computing device. Components of computer device 512 may include, but are not limited to: one or more processors 516, a system memory 528, and a bus 518 that couples the various system components including the system memory 528 and the processors 516.
Bus 518 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Computer device 512 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by computer device 512 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 528 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)530 and/or cache memory 532. The computer device 512 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage 534 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, and commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 518 through one or more data media interfaces. Memory 528 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 540 having a set (at least one) of program modules 542, including but not limited to an operating system, one or more application programs, other program modules, and program data, may be stored in, for example, the memory 528, each of which examples or some combination may include an implementation of a network environment. The program modules 542 generally perform the functions and/or methods of the described embodiments of the invention.
The computer device 512 may also communicate with one or more external devices 514 (e.g., keyboard, pointing device, display 524, etc.), with one or more devices that enable a user to interact with the computer device 512, and/or with any devices (e.g., network card, modem, etc.) that enable the computer device 512 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 522. Also, computer device 512 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via network adapter 520. As shown, the network adapter 520 communicates with the other modules of the computer device 512 via the bus 518. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the computer device 512, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 516 executes programs stored in the system memory 528 to perform various functional applications and data processing, for example, implementing the point cloud stylized model training method provided by the embodiment of the present invention, the method including:
acquiring original point cloud data, and performing semantic analysis on the original point cloud data to obtain sample point cloud data containing semantic information of each point in the original point cloud data;
acquiring sample style data corresponding to the sample point cloud data, and generating training sample data based on the sample point cloud data and the sample style data;
training a pre-constructed point cloud stylized model by using training sample data to obtain a trained point cloud stylized model;
and/or, the method for stylizing point cloud data provided by the embodiment of the invention is realized, and the method comprises the following steps:
acquiring point cloud data to be stylized, and performing semantic analysis on the point cloud data to be stylized to obtain marked point cloud data containing semantic information of each point in the point cloud data to be stylized;
acquiring style point cloud data corresponding to the marked point cloud data, inputting the marked point cloud data and the style point cloud data into a trained point cloud stylized model, and acquiring target point cloud data output by the point cloud stylized model, wherein the trained point cloud stylized model is obtained by training by using the point cloud stylized model method provided by any embodiment of the invention;
and generating a target scene picture corresponding to the point cloud data to be stylized according to the target point cloud data, and displaying the target scene picture.
Of course, those skilled in the art will appreciate that the processor may also implement the technical solution of the point cloud stylization model training method and/or the point cloud data stylization method provided in any embodiment of the present invention.
EXAMPLE six
The sixth embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the point cloud stylized model training method provided in the sixth embodiment of the present invention, where the method includes:
acquiring original point cloud data, and performing semantic analysis on the original point cloud data to obtain sample point cloud data containing semantic information of each point in the original point cloud data;
acquiring sample style data corresponding to the sample point cloud data, and generating training sample data based on the sample point cloud data and the sample style data;
training a pre-constructed point cloud stylized model by using training sample data to obtain a trained point cloud stylized model;
and/or, the method for stylizing point cloud data provided by the embodiment of the invention is realized, and the method comprises the following steps:
acquiring point cloud data to be stylized, and performing semantic analysis on the point cloud data to be stylized to obtain marked point cloud data containing semantic information of each point in the point cloud data to be stylized;
acquiring style point cloud data corresponding to the marked point cloud data, inputting the marked point cloud data and the style point cloud data into a trained point cloud stylized model, and acquiring target point cloud data output by the point cloud stylized model, wherein the trained point cloud stylized model is obtained by training by using the point cloud stylized model method provided by any embodiment of the invention;
and generating a target scene picture corresponding to the point cloud data to be stylized according to the target point cloud data, and displaying the target scene picture.
Of course, the computer program stored on the computer-readable storage medium provided by the embodiments of the present invention is not limited to the above method operations, and may also perform related operations of the point cloud stylization model training method and/or the point cloud data stylization method provided by any embodiments of the present invention.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (11)

1. A point cloud stylized model training method is characterized by comprising the following steps:
acquiring original point cloud data, and performing semantic analysis on the original point cloud data to obtain sample point cloud data containing semantic information of each point in the original point cloud data;
acquiring sample style data corresponding to the sample point cloud data, and generating training sample data based on the sample point cloud data and the sample style data;
and training the pre-constructed point cloud stylized model by using the training sample data to obtain the trained point cloud stylized model.
2. The method of claim 1, wherein the pre-constructed point cloud stylization model comprises: a data segmentation module, a stylization module and a data fusion module; wherein,
the data segmentation module is used for performing data segmentation on the sample point cloud data and outputting at least two sample instance data;
the stylization module is used for stylizing the sample instance data according to the sample style data corresponding to the sample instance data and outputting the stylized data corresponding to the sample instance data;
and the data fusion module is used for fusing the stylized data corresponding to the sample instance data output by the stylized module and outputting target fusion data.
3. The method of claim 2, wherein the training a pre-constructed point cloud stylized model using the training sample data to obtain a trained point cloud stylized model comprises:
acquiring stylized data output by the stylized module and target fusion data output by the data fusion module;
and determining a target loss value based on the stylized data and the target fusion data, and training the point cloud stylized model by taking the target loss value reaching a convergence condition as a target to obtain the trained point cloud stylized model.
4. The method of claim 3, wherein determining a target loss value based on the stylized data and the target fused data comprises:
determining a local stylization loss value according to the stylized data, sample instance data corresponding to the stylized data and sample style data corresponding to the stylized data;
determining an overall stylized loss value according to the target fusion data;
and obtaining the target loss value according to the local stylized loss value and the overall stylized loss value.
5. The method of claim 4, wherein determining an overall stylized loss value from the target fusion data comprises:
determining edge data of the stylized point cloud cluster corresponding to each sample instance data according to the target fusion data;
and determining intersection region data of adjacent stylized point cloud clusters according to the edge data of each stylized point cloud cluster, and determining the overall stylized loss value based on each intersection region data.
6. The method of claim 1, wherein the obtaining sample style data corresponding to the sample point cloud data comprises:
and acquiring sample style data corresponding to the sample point cloud data according to the semantic information and the target style of each point of the sample point cloud data.
7. A method for stylizing point cloud data, comprising:
acquiring point cloud data to be stylized, and performing semantic analysis on the point cloud data to be stylized to obtain marked point cloud data containing semantic information of each point in the point cloud data to be stylized;
acquiring style point cloud data corresponding to the marked point cloud data, inputting the marked point cloud data and the style point cloud data into a trained point cloud stylized model, and acquiring target point cloud data output by the point cloud stylized model, wherein the trained point cloud stylized model is obtained by training by using the point cloud stylized model method of any one of claims 1-6;
and generating a target scene picture corresponding to the point cloud data to be stylized according to the target point cloud data, and displaying the target scene picture.
8. A point cloud stylized model training device, comprising:
the system comprises a sample data acquisition module, a data analysis module and a data analysis module, wherein the sample data acquisition module is used for acquiring original point cloud data and performing semantic analysis on the original point cloud data to obtain sample point cloud data containing semantic information of each point in the original point cloud data;
the training data generation module is used for acquiring sample style data corresponding to the sample point cloud data and generating training sample data based on the sample point cloud data and the sample style data;
and the stylized model training module is used for training a point cloud stylized model which is constructed in advance by using the training sample data to obtain the trained point cloud stylized model.
9. A point cloud data stylization apparatus, comprising:
a marked point cloud acquisition module, configured to acquire point cloud data to be stylized and perform semantic analysis on the point cloud data to be stylized to obtain marked point cloud data containing semantic information of each point in the point cloud data to be stylized;
a target point cloud acquisition module, configured to acquire style point cloud data corresponding to the marked point cloud data, input the marked point cloud data and the style point cloud data into a trained point cloud stylized model, and acquire target point cloud data output by the point cloud stylized model, wherein the trained point cloud stylized model is obtained by training with the point cloud stylized model training method according to any one of claims 1 to 6; and
a target scene display module, configured to generate a target scene picture corresponding to the point cloud data to be stylized according to the target point cloud data, and display the target scene picture.
10. A computer device, the device comprising:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the point cloud stylized model training method according to any one of claims 1 to 6; and/or implement the point cloud data stylization method according to claim 7.
11. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the point cloud stylized model training method according to any one of claims 1 to 6; and/or implements the point cloud data stylization method according to claim 7.
CN202010203528.4A 2020-03-20 2020-03-20 Model training method, point cloud data stylization method, device, equipment and medium Pending CN113496238A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010203528.4A CN113496238A (en) 2020-03-20 2020-03-20 Model training method, point cloud data stylization method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN113496238A 2021-10-12

Family

ID=77993816

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010203528.4A Pending CN113496238A (en) 2020-03-20 2020-03-20 Model training method, point cloud data stylization method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN113496238A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805803A * 2018-06-13 2018-11-13 Hengyang Normal University Portrait style transfer method based on semantic segmentation and deep convolutional neural networks
CN108961349A * 2018-06-29 2018-12-07 Guangdong University of Technology Stylized image generation method, apparatus, device and storage medium
CN109308679A * 2018-08-13 2019-02-05 Shenzhen SenseTime Technology Co., Ltd. Image style conversion method and apparatus, device, and storage medium
WO2020034481A1 * 2018-08-13 2020-02-20 Shenzhen SenseTime Technology Co., Ltd. Image style conversion method and apparatus, device, and storage medium
CN109559363A * 2018-11-23 2019-04-02 NetEase (Hangzhou) Network Co., Ltd. Image stylization processing method, apparatus, medium and electronic device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Qiao Lisha: "Artistic Stylization of Images Based on Deep Learning", China Master's Theses Full-text Database (Information Science and Technology), no. 12, pages 138-1614 *
Ye Wujian; Gao Haijian; Weng Shaowei; Gao Zhi; Wang Shanjin; Zhang Chunyu; Liu Yijun: "Two-Stage Artistic Font Rendering Method Based on CGAN Networks", Journal of Guangdong University of Technology, no. 03, pages 47-55 *
Li Hui; Wan Xiaoxia: "Image Style Transfer Algorithm Based on Deep Convolutional Neural Networks", Computer Engineering and Applications, no. 02, pages 176-183 *

Similar Documents

Publication Publication Date Title
US20220051056A1 (en) Semantic segmentation network structure generation method and apparatus, device, and storage medium
CN111582175A High-resolution remote sensing image semantic segmentation method sharing multi-scale adversarial features
CN111402414A (en) Point cloud map construction method, device, equipment and storage medium
CN109558854B (en) Obstacle sensing method and device, electronic equipment and storage medium
CN112132197A (en) Model training method, image processing method, device, computer equipment and storage medium
CN110379020A Laser point cloud colorization method and device based on generative adversarial networks
CN109859562A (en) Data creation method, device, server and storage medium
CN114429528A (en) Image processing method, image processing apparatus, image processing device, computer program, and storage medium
CN112884764A (en) Method and device for extracting land parcel in image, electronic equipment and storage medium
US10902608B2 (en) Segmentation for holographic images
CN115860102B (en) Pre-training method, device, equipment and medium for automatic driving perception model
DE102022100360A1 Machine learning framework applied in a semi-supervised setting to perform instance tracking in a sequence of image frames
CN112023400A (en) Height map generation method, device, equipment and storage medium
CN114519819B (en) Remote sensing image target detection method based on global context awareness
WO2022242352A1 (en) Methods and apparatuses for building image semantic segmentation model and image processing, electronic device, and medium
CN110378284B (en) Road front view generation method and device, electronic equipment and storage medium
Dhatbale et al. Deep learning techniques for vehicle trajectory extraction in mixed traffic
Chen et al. Semantic segmentation and data fusion of microsoft bing 3d cities and small uav-based photogrammetric data
Ballouch et al. Toward a deep learning approach for automatic semantic segmentation of 3D lidar point clouds in urban areas
CN112528058B (en) Fine-grained image classification method based on image attribute active learning
CN114004972A (en) Image semantic segmentation method, device, equipment and storage medium
US20240037911A1 (en) Image classification method, electronic device, and storage medium
CN113763438A (en) Point cloud registration method, device, equipment and storage medium
CN116844129A (en) Road side target detection method, system and device for multi-mode feature alignment fusion
CN113379748A (en) Point cloud panorama segmentation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination