CN111476834A - Method and device for generating image and electronic equipment - Google Patents

Method and device for generating image and electronic equipment

Info

Publication number
CN111476834A
Authority
CN
China
Prior art keywords
image
object model
preset object
light source
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910068605.7A
Other languages
Chinese (zh)
Other versions
CN111476834B (en)
Inventor
苏健
张学志
于雷
张骞
黄畅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Horizon Robotics Technology Research and Development Co Ltd
Original Assignee
Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Horizon Robotics Technology Research and Development Co Ltd filed Critical Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority to CN201910068605.7A priority Critical patent/CN111476834B/en
Publication of CN111476834A publication Critical patent/CN111476834A/en
Application granted granted Critical
Publication of CN111476834B publication Critical patent/CN111476834B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/586Depth or shape recovery from multiple images from multiple light sources, e.g. photometric stereo

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Disclosed is a method of generating an image, comprising: determining reflection information of each pixel point in a first image; determining light source information in the scene where the first image is shot according to the first image, the surface normal map corresponding to the first image, and the reflection information; editing and rendering a preset object model to be added in the first image according to the first image, the surface normal map, the reflection information, and the light source information to obtain a second image; and obtaining a second depth image corresponding to the second image according to the first depth image corresponding to the first image and the preset object model. Different preset object models can be added at different positions of the first image as needed, so a large number of second images and second depth images can be obtained; time and effort in training a neural network can thus be saved, cost can be reduced, the possibility of errors in the second depth images can be reduced, and extra adversarial training is avoided.

Description

Method and device for generating image and electronic equipment
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a device for generating an image and electronic equipment.
Background
In recent years, machine learning techniques centered on deep learning have attracted wide attention. Through deep learning, the distances between the current vehicle and surrounding vehicles, pedestrians, and obstacles can be estimated, gradually making automatic driving of automobiles possible. Among deep-learning-based methods, monocular-image-based depth estimation algorithms have advantages such as convenient deployment and low computational cost, and are receiving increasing attention from academia and industry.
Existing monocular-image-based depth estimation algorithms need a large number of images and depth images (data with depth labels) to train a depth estimation neural network model. Acquiring such training data is time-consuming, labor-intensive, and costly, and, affected by factors such as noise, the obtained depth images are prone to errors.
Disclosure of Invention
In order to solve the technical problem, embodiments of the present application provide a method and an apparatus for generating an image, and an electronic device.
According to an aspect of the present application, there is provided a method of generating an image, including: determining reflection information of each pixel point in the first image; determining light source information in a scene where the first image is shot according to the first image, the surface normal map corresponding to the first image and the reflection information; editing and rendering a preset object model to be added in the first image according to the first image, the surface normal map, the reflection information and the light source information to obtain a second image; and obtaining a second depth image corresponding to the second image according to the first depth image corresponding to the first image and the preset object model.
According to another aspect of the present application, there is provided an apparatus for generating an image, including: the reflection information determining module is used for determining the reflection information of each pixel point in the first image; the light source determining module is used for determining light source information in a scene where the first image is shot according to the first image, the surface normal map corresponding to the first image and the reflection information; the second image acquisition module is used for editing and rendering a preset object model to be added in the first image according to the first image, the surface normal map, the reflection information and the light source information to obtain a second image; and the second depth image acquisition module is used for obtaining a second depth image corresponding to the second image according to the first depth image corresponding to the first image and the preset object model.
According to another aspect of the present application, there is provided a computer-readable storage medium having stored thereon a computer program for executing the method of any of the above.
According to another aspect of the present application, there is provided an electronic apparatus including: a processor; a memory for storing the processor-executable instructions; the processor is configured to perform any of the methods described above.
According to the method for generating an image, different preset object models can be added at different positions of the first image as needed, so that a large number of second images and second depth images can be obtained from the different preset object models. The large number of second images and second depth images are then used as annotation data for training the depth estimation neural network model. In this way, time and effort in training the neural network can be saved; cost can be reduced because collecting a large amount of sample data is avoided; and, since the second depth image is obtained from the real first image, the possibility of errors in the second depth image can be reduced and additional adversarial training on the annotation data is avoided.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 is a schematic diagram of a scenario of an exemplary system of the present application.
Fig. 2 is a flowchart illustrating a method for generating an image according to an exemplary embodiment of the present application.
Fig. 3 is a schematic flowchart of determining light source information in a scene where a first image is captured according to the first image, a surface normal map corresponding to the first image, and reflection information according to an exemplary embodiment of the present application.
Fig. 4 is a flowchart illustrating editing and rendering a preset object model to be added in a first image according to the first image, a surface normal map, reflection information, and light source information to obtain a second image according to an exemplary embodiment of the present application.
Fig. 5 is a schematic flowchart of a process of obtaining a second depth image corresponding to a second image according to a first depth image corresponding to a first image and a preset object model according to an exemplary embodiment of the present application.
Fig. 6 is a schematic flowchart of determining pixel coordinates of a preset object model according to camera parameters of a first image and three-dimensional coordinates of the preset object model according to an exemplary embodiment of the present application.
Fig. 7 is a flowchart illustrating a method for generating an image according to another exemplary embodiment of the present application.
Fig. 8 is a schematic structural diagram of an apparatus for generating an image according to an exemplary embodiment of the present application.
Fig. 9 is a schematic structural diagram of a light source determination module in an apparatus for generating an image according to an exemplary embodiment of the present application.
Fig. 10 is a schematic structural diagram of a second image acquisition module in an apparatus for generating an image according to an exemplary embodiment of the present application.
Fig. 11 is a schematic structural diagram of a second depth image obtaining module in an apparatus for generating an image according to an exemplary embodiment of the present application.
Fig. 12 is a schematic structural diagram of a pixel coordinate determination unit in an apparatus for generating an image according to an exemplary embodiment of the present application.
Fig. 13 is a schematic structural diagram of an apparatus for generating an image according to another exemplary embodiment of the present application.
Fig. 14 is a block diagram of an electronic device provided in an exemplary embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Summary of the application
Currently, images can be synthesized by 3D (three-dimensional) engine rendering while depth images are generated at the same time. However, images synthesized by 3D engine rendering differ considerably from real captured images, and training a depth estimation neural network model with such images usually requires introducing additional adversarial training to reduce the influence of this difference.
In view of the above technical problems, the basic concept of the present application is to provide a method, an apparatus, and an electronic device for generating an image, with which different preset object models can be added at different positions of a first image as needed, so that a large number of second images and second depth images can be obtained from the different preset object models. These second images and second depth images are then used as annotation data for training the depth estimation neural network model, so time and effort in training the neural network can be saved; cost can be reduced because collecting a large amount of sample data is avoided; and, since the second depth image is obtained from the real first image, the possibility of errors in the second depth image can be reduced and additional adversarial training on the annotation data is avoided.
It should be noted that the application scope of the present application is not limited to the field of image processing technology. For example, the technical solution mentioned in the embodiments of the present application may also be applied to other intelligent mobile devices for providing technical support for image processing of the intelligent mobile devices.
Various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Exemplary System
FIG. 1 is a schematic diagram of a scenario of an exemplary system of the present application. As shown in fig. 1, parameter estimation is performed on a first image (the first image may be an RGB image or a grayscale image); performing light source estimation based on the first image and the result of the parameter estimation (or based on the first image, the result of the parameter estimation, and the first depth image); and editing and rendering according to the light source estimation result, the first image, the first depth image and the preset object model to obtain a second image and a second depth image. Specific implementation procedures are described in detail below in the following method and apparatus embodiments.
Exemplary method
Fig. 2 is a flowchart illustrating a method for generating an image according to an exemplary embodiment of the present application. The method for generating the image can be applied to the technical field of image processing of automobiles and can also be applied to the field of image processing functions of intelligent robots. As shown in fig. 2, a method for generating an image according to an embodiment of the present application includes the following steps:
step 101, determining reflection information of each pixel point in the first image.
It should be noted that the first image may be an RGB image or a grayscale image, and the first image may be a sample image in a sample library.
The reflection information includes diffuse reflection parameters and specular reflection parameters. In this embodiment, the reflection information of each pixel point may refer to a diffuse reflection parameter corresponding to the pixel point. The diffuse reflection refers to a phenomenon that light rays are randomly reflected to all directions by a rough surface, and is used for indicating how the material of an object reflects illumination. In this embodiment, the diffuse reflection parameter r (x, y) of each pixel point (x, y) in the first image may be determined by the following formula:
[Equation images in the original patent define r(x, y) in terms of i_x, i_y, T, and p; they are not reproduced here.]
where r(x, y) denotes the diffuse reflection parameter of the pixel point (x, y), i_x denotes the gradient of the pixel point (x, y) in the horizontal direction, i_y denotes the gradient of the pixel point (x, y) in the vertical direction, T is a preset threshold with 0 ≤ T ≤ 255, and p is a natural number, generally taking the value 1 or 2.
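The exact combination of these quantities is given by the patent's equation images, which are not reproduced above. Purely as an illustration, the sketch below computes the gradients i_x and i_y and a p-norm gradient magnitude thresholded at T; this particular combination is an assumption of the sketch, not necessarily the patent's formula.

```python
import numpy as np

def diffuse_reflection_sketch(gray, T=30, p=2):
    """Illustrative sketch only: computes the gradients i_x, i_y of a grayscale image
    and a p-norm gradient magnitude, suppressing values not exceeding the threshold T
    (0 <= T <= 255). The way these quantities are combined into r(x, y) here is an
    assumption; the patent's exact equation is not reproduced."""
    gray = gray.astype(np.float64)
    ix = np.gradient(gray, axis=1)   # gradient in the horizontal direction, i_x
    iy = np.gradient(gray, axis=0)   # gradient in the vertical direction, i_y
    mag = (np.abs(ix) ** p + np.abs(iy) ** p) ** (1.0 / p)
    return np.where(mag > T, mag, 0.0)  # keep only responses above the threshold T
```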
And 102, determining light source information in a scene where the first image is shot according to the first image, the surface normal map corresponding to the first image and the reflection information.
In one embodiment, the light source information may include the light source position, the light source intensity, and the like. The light source information of the scene where the first image is shot refers to the light source information present in that scene at the moment the first image is captured. For example: the scene where the first image is shot is a room; the scene in the room is shot with a camera to obtain the first image; the first image contains a window of the room and a desk lamp that is turned on; then the sunlight passing through the window and the lit desk lamp can be regarded as the light sources described by the light source information.
In an embodiment, the first image may be input into a trained preset normal-map extraction neural network to obtain the surface normal map corresponding to the first image. The preset normal-map extraction neural network can be obtained by training a convolutional neural network on a large number of sample images.
And 103, editing and rendering the preset object model to be added in the first image according to the first image, the surface normal map, the reflection information and the light source information to obtain a second image.
In an embodiment, the preset object model may be a person, an animal, a machine, or the like. The preset object model may be added to the first image according to the actual application, and editing and rendering are performed to obtain the second image. For example: the scene where the first image is shot is a room containing a window and a desk lamp that is turned on. If the preset object model is a three-dimensional model of a cat, the model can be added below the window in the first image and then edited and rendered to obtain a second image, so that the three-dimensional model of the cat appears in the second image.
And 104, obtaining a second depth image corresponding to the second image according to the first depth image corresponding to the first image and a preset object model.
It should be noted that the first depth image corresponds to the first image, and the first depth image may be a sample depth image in a sample library.
According to the method for generating an image, different preset object models can be added at different positions of the first image as needed, so that a large number of second images and second depth images can be obtained from the different preset object models. The large number of second images and second depth images are then used as annotation data for training the depth estimation neural network model. In this way, time and effort in training the neural network can be saved; cost can be reduced because collecting a large amount of sample data is avoided; and, since the second depth image is obtained from the real first image, the possibility of errors in the second depth image can be reduced and additional adversarial training on the annotation data is avoided.
An exemplary embodiment of the present application provides another method of generating an image. The embodiment shown in the present application is extended based on the embodiment shown in fig. 2 of the present application, and the differences between the embodiment shown in the present application and the embodiment shown in fig. 2 are mainly described below, and the same parts are not described again. The method for generating the image provided by the embodiment of the application further comprises the following steps:
and determining a surface normal corresponding to each pixel point in the first depth image to obtain a surface normal map corresponding to the first image.
In an embodiment, the surface normal corresponding to each pixel point can be obtained by computing, in a 3D coordinate system, the normal of the plane fitted to that pixel point and a preset number of surrounding pixel points. The preset number of pixel points can be selected according to the actual application and is not specifically limited here.
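As a concrete illustration of this step, the sketch below back-projects the depth image into 3D using an assumed camera intrinsic matrix K and fits a plane to each pixel's local neighborhood; the neighborhood size and the SVD-based plane fit are choices of this sketch, not requirements of the patent.

```python
import numpy as np

def normals_from_depth(depth, K, win=3):
    """Estimate a surface normal map from the first depth image.
    depth: (H, W) depth values; K: 3x3 camera intrinsic matrix (assumed known);
    win: odd size of the neighborhood used for the local plane fit."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Back-project every pixel to a 3D point: P = depth * K^-1 [u, v, 1]^T
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    pts = (np.linalg.inv(K) @ pix.reshape(-1, 3).T).T.reshape(H, W, 3) * depth[..., None]
    normals = np.zeros((H, W, 3))
    r = win // 2
    for y in range(r, H - r):
        for x in range(r, W - r):
            nb = pts[y - r:y + r + 1, x - r:x + r + 1].reshape(-1, 3)
            centered = nb - nb.mean(axis=0)
            # The plane normal is the direction of least variance of the local points
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            n = vt[-1]
            if n[2] > 0:           # orient the normal toward the camera
                n = -n
            normals[y, x] = n
    return normals
```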
According to the method for generating the image, the surface normal map corresponding to the first image can be directly obtained by using the first depth image, the implementation process is simple, extra resources are not needed, resources and space can be saved, and the implementation speed is improved.
Fig. 3 is a schematic flowchart of determining light source information in a scene where a first image is captured according to the first image, a surface normal map corresponding to the first image, and reflection information according to an exemplary embodiment of the present application. The embodiment shown in fig. 3 of the present application is extended based on the embodiment shown in fig. 2 of the present application, and the differences between the embodiment shown in fig. 3 and the embodiment shown in fig. 2 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 3, in the method for generating an image provided in the embodiment of the present application, the light source information includes a light source position and a light source intensity; determining light source information in a scene where the first image is shot according to the first image, the surface normal map corresponding to the first image and the reflection information (namely step 102), wherein the method comprises the following steps:
step 102a, image segmentation is carried out on the first image to obtain a plurality of image sub-regions.
The first image is image-divided to obtain a plurality of image sub-regions (sets of pixels, also referred to as super-pixels). The super-pixel is a small area formed by a series of pixel points which are adjacent in position and similar in characteristics such as color, brightness, texture and the like. Most of these small regions retain effective information for further image segmentation, and generally do not destroy the boundary information of objects in the image.
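The patent does not fix a particular segmentation algorithm; as one common choice, the sketch below uses scikit-image's SLIC to split the first image (assumed here to be an RGB array) into super-pixels.

```python
from skimage.segmentation import slic

def segment_into_superpixels(first_image, n_segments=200):
    """Split the first image into image sub-regions (super-pixels).
    first_image: (H, W, 3) RGB array. Returns an (H, W) integer label map in which
    pixels sharing a label belong to the same sub-region. SLIC and n_segments are
    choices of this sketch, not requirements of the patent."""
    return slic(first_image, n_segments=n_segments, compactness=10)
```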
Step 102b, determining a feature vector of each image sub-region by using the surface normal map and the reflection information.
It should be noted that, the feature vector of each image sub-region is determined by using the surface normal map and the reflection information, and may be implemented by any feasible manner according to the actual application condition, which is not specifically limited.
In this embodiment, the reflection information is the diffuse reflection parameter, and the feature vector E_j of each image sub-region j is determined using the surface normal map and the reflection information as follows:
[Equation image in the original patent defining E_j(n) from F_n(x, y) and I(x, y) over the sub-region S_j; not reproduced here.]
where E_j(n) is the value of the feature vector of image sub-region j computed with the n-th operator, S_j denotes the area range of image sub-region j, I(x, y) denotes the superposition of the surface normal value and the diffuse reflection parameter value corresponding to the pixel point (x, y), and F_n(x, y) denotes the n-th operator. n runs up to 17, comprising 9 texture template operators, 6 edge operators in different directions, and 2 color operators. k takes the values 2 and 4; k = 2 yields an energy feature, and k = 4 yields a peak feature.
The feature vectors of the four image sub-regions adjacent to sub-region j, and the feature vectors at two scales, are then computed with the same formula, and the computed feature vectors are concatenated to construct a feature vector of dimension 17 × 2 × 5 × 2 = 340. In this 17 × 2 × 5 × 2 decomposition, from left to right, 17 is the number of operators, 2 covers the two values of k (2 and 4), 5 is the number of image sub-regions, and 2 is the number of scales. It should be noted that if image sub-region j is located at a corner and does not have four adjacent sub-regions, the feature vectors of the missing neighbors are replaced with 0. The two scales are typically the original scale of image sub-region j and a scale smaller than the original (typically 50% of the original scale is selected).
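The patent's equation image for E_j(n) is not reproduced above. From the definitions listed, one natural reading is E_j(n) = sum over (x, y) in S_j of |(F_n * I)(x, y)|^k, i.e. the k-th power of the absolute filter responses summed over the sub-region. The sketch below implements that reading; the reading itself, the filter kernels, and the single-channel "superposition" I(x, y) are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def subregion_features(I, labels, filter_bank, ks=(2, 4)):
    """Compute E_j(n) for every sub-region j, operator n, and exponent k.
    I: (H, W) map superposing surface-normal and diffuse-reflection values (assumed
       single-channel); labels: (H, W) super-pixel label map starting at 0;
    filter_bank: list of 17 2-D kernels (texture, edge, and color operators).
    Returns an array of shape (num_regions, len(filter_bank), len(ks))."""
    num_regions = int(labels.max()) + 1
    E = np.zeros((num_regions, len(filter_bank), len(ks)))
    for n, f in enumerate(filter_bank):
        resp = np.abs(convolve(I, f, mode='nearest'))   # |F_n * I|
        for ki, k in enumerate(ks):
            # Sum |F_n * I|^k over the pixels of each sub-region S_j
            E[:, n, ki] = np.bincount(labels.ravel(), weights=(resp ** k).ravel(),
                                      minlength=num_regions)
    return E
```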
And step 102c, determining the light source position in the first image according to the feature vector of each image subregion and a preset light source two-class neural network.
It should be noted that the feature vector of each image sub-region (also referred to as a super-pixel) is used as the input of the preset light-source two-class (binary classification) neural network, and whether the image sub-region is a light source is the output; if an image sub-region is judged to be a light source, the position of that sub-region is the position of the light source. Determining the light source position in the first image means determining the pixel coordinates of the light source in the first image; the pixel coordinates of the l-th light source are denoted (x_l, y_l), where l is a natural number.
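A minimal sketch of this classification step, using a logistic-regression-style scorer as a stand-in for the preset light-source two-class neural network (the patent does not fix the network architecture; the 340-dimensional feature vectors are assumed to come from the previous step):

```python
import numpy as np

def light_source_subregions(features, weights, bias, threshold=0.5):
    """features: (num_regions, 340) feature vectors of the image sub-regions;
    weights (340,) and bias (scalar): parameters of a pre-trained binary classifier,
    standing in for the preset two-class neural network.
    Returns the indices of the sub-regions judged to be light sources; the positions
    of these sub-regions give the light source positions."""
    logits = features @ weights + bias
    probs = 1.0 / (1.0 + np.exp(-logits))     # sigmoid
    return np.flatnonzero(probs > threshold)
```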
Step 102d, determining the intensity of the light source in the first image according to the first image and the position of the light source in the first image.
It should be noted that, determining the intensity of the light source in the first image according to the first image and the position of the light source in the first image may be implemented in any feasible manner according to the actual application condition, and this is not particularly limited.
In the embodiment of the present application, determining the light source intensity in the first image is implemented using the following formula:
[Equation image in the original patent; not reproduced here. It expresses a discrepancy, over the pixels of the first image, between I_l and R_l(L).]
where L_l denotes the intensity of the l-th light source, the pixels are those of the first image, I_l denotes the pixel value of the pixel point (x_l, y_l) in the first image, and R_l(L) denotes the pixel value of the pixel point (x_l, y_l) rendered under the action of the l-th light source with intensity L. Several candidate values of the intensity of the l-th light source can be estimated, and among these candidates the one that minimizes the above expression is taken as the result, the light source intensity L_l.
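A minimal sketch of this step: several candidate intensities for the l-th light source are tried, and the candidate minimizing the discrepancy between the observed first image and the image rendered under that light source is kept. `render_under_light` is a hypothetical helper standing in for R_l(L); it is not defined by the patent text reproduced here.

```python
import numpy as np

def estimate_light_intensity(first_image, candidates, render_under_light):
    """Pick, among candidate intensities, the one minimizing the difference between
    the observed first image and the image rendered under the l-th light source.
    render_under_light(L) -> (H, W) rendered pixel values (hypothetical helper)."""
    best_L, best_err = None, np.inf
    for L in candidates:
        rendered = render_under_light(L)                       # stands in for R_l(L)
        err = np.abs(first_image.astype(np.float64) - rendered).sum()
        if err < best_err:
            best_L, best_err = L, err
    return best_L                                              # light source intensity L_l
```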
According to the method for generating the image, the light source position and the light source intensity in the first image can be obtained, so that the second image and the second depth image generated according to the first image are more real and effective.
Fig. 4 is a flowchart illustrating editing and rendering a preset object model to be added in a first image according to the first image, a surface normal map, reflection information, and light source information to obtain a second image according to an exemplary embodiment of the present application. The embodiment shown in fig. 4 of the present application is extended based on the embodiment shown in fig. 3 of the present application, and the differences between the embodiment shown in fig. 4 and the embodiment shown in fig. 3 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 4, in the method for generating an image according to the embodiment of the present application, editing and rendering a preset object model to be added in a first image according to the first image, a surface normal map, reflection information, and light source information to obtain a second image (i.e., step 103), including:
and 103a, limiting the placing position of the preset object model through the surface normal map.
In an embodiment, the placing position of the preset object model may be constrained by the surface normal map, so as to avoid the preset object model being placed outside the boundary of the first image.
Step 103b, camera parameters of the first image are determined.
It should be noted that the camera parameters include intrinsic (in-camera) parameters and extrinsic (out-of-camera) parameters. The intrinsic parameters are related to the characteristics of the camera itself, such as the focal length and pixel size of the camera. The extrinsic parameters are parameters in the world coordinate system, such as the position and rotation (orientation) of the camera.
And 103c, determining the pixel coordinates of the preset object model according to the camera parameters of the first image and the three-dimensional coordinates of the preset object model.
The three-dimensional coordinates (i.e., three-dimensional cartesian coordinates (x, y, z)) are expressions of points in a three-dimensional cartesian coordinate system, where x, y, and z are coordinate values of x, y, and z axes that share a common zero point and are orthogonal to each other. The pixel coordinates (x, y) are the location of the pixel in the image.
And 103d, editing and rendering the first image and the preset object model according to the pixel coordinates, the reflection information, the light source position and the light source intensity of the preset object model to obtain a second image.
It should be noted that the first image and the preset object model are edited and rendered, and the preset object model is used to replace the object at the corresponding position in the first image, so as to obtain the second image.
According to the method for generating the image, the placing position of the preset object model can be limited through the surface normal map, so that the preset object model can be prevented from exceeding the boundary of the first image, and the generated second image is more real and effective.
Fig. 5 is a schematic flowchart of a process of obtaining a second depth image corresponding to a second image according to a first depth image corresponding to a first image and a preset object model according to an exemplary embodiment of the present application. The embodiment shown in fig. 5 of the present application is extended on the basis of the embodiment shown in fig. 4 of the present application, and the differences between the embodiment shown in fig. 5 and the embodiment shown in fig. 4 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 5, in the method for generating an image according to the embodiment of the present application, obtaining a second depth image corresponding to a second image according to a first depth image corresponding to a first image and a preset object model (i.e. step 104), includes:
and 104a, obtaining the depth value of each pixel point in the preset object model according to the three-dimensional coordinates of the preset object model.
It should be noted that, a z-coordinate value of the three-dimensional coordinate (x, y, z) of each point in the preset object model may be used as the depth value of each corresponding pixel point in the preset object model.
And 104b, obtaining a second depth image according to the first depth image and the depth value of each pixel point in the preset object model.
It should be noted that, for the portion of the second depth image that is the same as the first image, the depth value of each pixel point is the depth value of the corresponding pixel point in the first depth image; for the portion of the second depth image covered by the preset object model, the depth value of each pixel point is the depth value of the corresponding pixel point of the preset object model.
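A minimal sketch of this compositing rule, assuming the preset object model has already been rendered into a per-pixel depth map and a coverage mask (both are inputs assumed by this sketch):

```python
import numpy as np

def compose_second_depth(first_depth, model_depth, model_mask):
    """first_depth: (H, W) first depth image; model_depth: (H, W) depth of the
    rendered preset object model (z values of its 3-D points); model_mask: (H, W)
    bool, True where the model covers a pixel. Pixels outside the model keep the
    depth values of the first depth image."""
    second_depth = first_depth.copy()
    second_depth[model_mask] = model_depth[model_mask]
    return second_depth
```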
According to the method for generating the image, the depth value of each pixel point in the preset object model can be obtained according to the three-dimensional coordinates of the preset object model, and the second depth image can be obtained according to the depth values of each pixel point in the first depth image and the preset object model, so that the method is simple and rapid to achieve, and the data are real and effective.
Fig. 6 is a schematic flowchart of determining pixel coordinates of a preset object model according to camera parameters of a first image and three-dimensional coordinates of the preset object model according to an exemplary embodiment of the present application. The embodiment shown in fig. 6 of the present application is extended based on the embodiment shown in fig. 4 of the present application, and the differences between the embodiment shown in fig. 6 and the embodiment shown in fig. 4 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 6, in the method for generating an image according to the embodiment of the present application, determining pixel coordinates of a preset object model according to camera parameters of a first image and three-dimensional coordinates of the preset object model (i.e. step 103c) includes:
and step 103c1, setting reference pixel points of the preset object model.
It should be noted that the pixel point at the center of the preset object model may be set as the reference pixel point (x_1, y_1).
Step 103c2, setting the pixel coordinate and the depth value of the reference pixel point.
It should be noted that the coordinates of the preset object model in the pixel coordinate system may be changed by changing the position of the preset object model in the first image, thereby setting the pixel coordinates of the reference pixel point. Changing the position of the preset object model in the first image can be done by dragging and the like. The depth value d of the reference pixel point (x_1, y_1) is set according to the actual application, with a value range of 0 < d ≤ D(x_1, y_1), where D(x_1, y_1) denotes the depth value of the pixel point in the first depth image that corresponds to the reference pixel point (x_1, y_1).
And 103c3, calculating the three-dimensional coordinates of the reference pixel point by using a preset three-dimensional coordinate calculation formula according to the camera parameters of the first image, the pixel coordinates and the depth values of the reference pixel point.
It should be noted that the preset three-dimensional coordinate calculation formula may be selected according to an actual application condition, and is not limited thereto.
The preset three-dimensional coordinate calculation formula in this embodiment is:
W(x_2, y_2, z_2) = D(x_1, y_1) · K^(-1) · [x_1, y_1, 1]^T
where W(x_2, y_2, z_2) denotes the three-dimensional coordinates of the reference pixel point (x_1, y_1), K denotes the camera intrinsic matrix, and D(x_1, y_1) denotes the depth value of the pixel point in the first depth image that corresponds to the reference pixel point (x_1, y_1).
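A direct transcription of this formula (the 3×3 intrinsic matrix K and the homogeneous pixel vector follow the usual pinhole-camera convention):

```python
import numpy as np

def reference_point_3d(x1, y1, depth, K):
    """W(x_2, y_2, z_2) = D(x_1, y_1) * K^(-1) * [x_1, y_1, 1]^T
    x1, y1: pixel coordinates of the reference pixel point;
    depth: D(x_1, y_1), taken from the first depth image;
    K: 3x3 camera intrinsic matrix."""
    pixel_h = np.array([x1, y1, 1.0])
    return depth * (np.linalg.inv(K) @ pixel_h)   # (x_2, y_2, z_2)
```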
And 103c4, calculating the pixel coordinate of each pixel point in the preset object model by using a preset pixel coordinate calculation formula according to the camera parameter of the first image, the three-dimensional coordinate of the reference pixel point, the three-dimensional coordinate of the preset object model and the relative position of the reference pixel point and each pixel point in the preset object model.
It should be noted that the preset pixel coordinate calculation formula may be selected according to the actual application condition, and is not limited thereto.
The preset pixel coordinate calculation formula in this embodiment is:
[Equation image in the original patent; not reproduced here. It gives the pixel coordinates (x_t, y_t) from the camera intrinsic matrix K, the reference point's three-dimensional coordinates (x_2, y_2, z_2), and the offsets (Δx_t, Δy_t, Δz_t).]
where (x_t, y_t) denotes the pixel coordinates of a point (x_t, y_t, z_t) in the preset object model, (x_2, y_2, z_2) denotes the three-dimensional coordinates of the reference pixel point (x_1, y_1), Δx_t denotes the relative position (also called the deviation) of x_t with respect to x_2 and can take the value x_t - x_2, Δy_t denotes the relative position of y_t with respect to y_2 and can take the value y_t - y_2, Δz_t denotes the relative position of z_t with respect to z_2 and can take the value z_t - z_2, and K denotes the camera intrinsic matrix.
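The equation image is not reproduced above; from the listed quantities, the natural reading is a standard pinhole projection of the offset 3-D point (x_2 + Δx_t, y_2 + Δy_t, z_2 + Δz_t) through K, followed by division by its depth. The sketch below implements that reading, which is an assumption insofar as the patent's exact equation is not shown.

```python
import numpy as np

def model_pixel_coordinates(ref_3d, offsets, K):
    """ref_3d: (x_2, y_2, z_2), 3-D coordinates of the reference pixel point;
    offsets: (N, 3) array of (dx_t, dy_t, dz_t), relative positions of the model
    points with respect to the reference point; K: 3x3 camera intrinsic matrix.
    Returns an (N, 2) array of pixel coordinates (x_t, y_t)."""
    pts = np.asarray(ref_3d)[None, :] + np.asarray(offsets)   # 3-D points of the model
    proj = (K @ pts.T).T                                      # homogeneous image coordinates
    return proj[:, :2] / proj[:, 2:3]                         # divide by depth to get (x_t, y_t)
```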
According to the method for generating the image, the pixel coordinates of the preset object model can be obtained, and the second image can be conveniently generated subsequently.
Fig. 7 is a flowchart illustrating a method for generating an image according to another exemplary embodiment of the present application. The embodiment shown in fig. 7 of the present application is extended based on the embodiments shown in fig. 2 to 6 of the present application, and the differences between the embodiment shown in fig. 7 and the embodiments shown in fig. 2 to 6 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 7, in the method for generating an image according to the embodiment of the present application, before editing and rendering a preset object model to be added in a first image according to the first image, a surface normal map corresponding to the first image, reflection information, and light source information to obtain a second image (i.e., step 103), the method further includes:
step 105, adding a preset object model in the first image.
It should be noted that, the preset object model may be added to the corresponding position in the first image according to the specific content of the first image and the specific content of the preset object model. The preset object model is a 3D model and can be a person, an animal, a plant, a machine and the like. According to actual needs, a large number of object models can be constructed and added to the first image to generate a large number of second images and second depth images.
According to the method for generating the image, the preset object model is added into the first image, the second image and the second depth image can be generated, and a large amount of sample data does not need to be collected, so that time and energy can be saved, and cost can be reduced.
Exemplary devices
Fig. 8 is a schematic structural diagram of an apparatus for generating an image according to an exemplary embodiment of the present application. The device for generating the image can be applied to the field of image processing of automobiles and can also be applied to the field of image processing functions of intelligent robots. As shown in fig. 8, an apparatus for generating an image according to an embodiment of the present application includes:
a reflection information determining module 201, configured to determine reflection information of each pixel point in the first image;
the light source determining module 202 is configured to determine light source information in a scene where the first image is captured according to the first image, a surface normal map corresponding to the first image, and reflection information;
the second image acquisition module 203 is configured to edit and render a preset object model to be added in the first image according to the first image, the surface normal map, the reflection information and the light source information to obtain a second image;
and the second depth image obtaining module 204 is configured to obtain a second depth image corresponding to the second image according to the first depth image corresponding to the first image and the preset object model.
An exemplary embodiment of the present application provides a schematic structural diagram of the reflection information determination module 201 in an apparatus for generating an image. The embodiment shown in the present application is extended based on the embodiment shown in fig. 8 of the present application, and the differences between the embodiment shown in the present application and the embodiment shown in fig. 8 are mainly described below, and the descriptions of the same parts are omitted.
In the apparatus for generating an image according to the embodiment of the present application, the reflection information determining module 201 is further configured to determine a surface normal corresponding to each pixel point in the first depth image, so as to obtain the surface normal map corresponding to the first image.
Fig. 9 is a schematic structural diagram of the light source determining module 202 in the apparatus for generating an image according to an exemplary embodiment of the present application. The embodiment shown in fig. 9 of the present application is extended based on the embodiment shown in fig. 8 of the present application, and the differences between the embodiment shown in fig. 9 and the embodiment shown in fig. 8 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 9, in the apparatus for generating an image according to an embodiment of the present application, the light source information includes a light source position and a light source intensity, and the light source determining module 202 includes:
an image segmentation unit 202a, configured to perform image segmentation on a first image to obtain a plurality of image sub-regions;
a feature vector determination unit 202b for determining a feature vector for each image sub-region using the surface normal map and the reflection information;
the light source position determining unit 202c is configured to determine a light source position in the first image according to the feature vector of each image sub-region and a preset light source two-class neural network;
a light source intensity determining unit 202d for determining the light source intensity in the first image according to the first image and the light source position in the first image.
Fig. 10 is a schematic structural diagram of the second image obtaining module 203 in the apparatus for generating an image according to an exemplary embodiment of the present application. The embodiment shown in fig. 10 of the present application is extended based on the embodiment shown in fig. 9 of the present application, and the differences between the embodiment shown in fig. 10 and the embodiment shown in fig. 9 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 10, in the apparatus for generating an image according to the embodiment of the present application, the second image obtaining module 203 includes:
a position limiting unit 203a for limiting the placement position of the preset object model by the surface normal map;
a camera parameter determination unit 203b for determining camera parameters of the first image;
a pixel coordinate determination unit 203c, configured to determine a pixel coordinate of the preset object model according to the camera parameter of the first image and the three-dimensional coordinate of the preset object model;
the second image determining unit 203d is configured to edit and render the first image and the preset object model according to the pixel coordinates, the reflection information, the light source position, and the light source intensity of the preset object model, so as to obtain a second image.
Fig. 11 is a schematic structural diagram of the second depth image obtaining module 204 in the apparatus for generating an image according to an exemplary embodiment of the present application. The embodiment shown in fig. 11 of the present application is extended based on the embodiment shown in fig. 10 of the present application, and the differences between the embodiment shown in fig. 11 and the embodiment shown in fig. 10 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 11, in the apparatus for generating an image according to the embodiment of the present application, the second depth image obtaining module 204 includes:
the depth value determining unit 204a is configured to obtain a depth value of each pixel point in the preset object model according to the three-dimensional coordinates of the preset object model;
the second depth image determining unit 204b is configured to obtain a second depth image according to the first depth image and the depth value of each pixel point in the preset object model.
Fig. 12 is a schematic structural diagram of the pixel coordinate determination unit 203c in the image generation apparatus according to an exemplary embodiment of the present application. The embodiment shown in fig. 12 of the present application is extended based on the embodiment shown in fig. 10 of the present application, and the differences between the embodiment shown in fig. 12 and the embodiment shown in fig. 10 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 12, in the apparatus for generating an image according to the embodiment of the present application, the pixel coordinate determination unit 203c includes:
a reference pixel point setting subunit 203c1, configured to set a reference pixel point of a preset object model;
a data setting subunit 203c2 configured to set pixel coordinates and depth values of the reference pixel points;
a three-dimensional coordinate calculation subunit 203c3, configured to calculate, according to the camera parameter of the first image, the pixel coordinate and the depth value of the reference pixel, the three-dimensional coordinate of the reference pixel by using a preset three-dimensional coordinate calculation formula;
the pixel coordinate calculating subunit 203c4 is configured to calculate, according to the camera parameter of the first image, the three-dimensional coordinate of the reference pixel point, the three-dimensional coordinate of the preset object model, and the relative position between the reference pixel point and each pixel point in the preset object model, the pixel coordinate of each pixel point in the preset object model by using a preset pixel coordinate calculation formula.
Fig. 13 is a schematic structural diagram of an apparatus for generating an image according to another exemplary embodiment of the present application. The embodiment shown in fig. 13 of the present application is extended based on the embodiments shown in fig. 8 to 12 of the present application, and the differences between the embodiment shown in fig. 13 and the embodiments shown in fig. 8 to 12 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 13, the apparatus for generating an image according to an embodiment of the present application further includes:
an adding module 205, configured to add a preset object model in the first image.
It should be understood that fig. 8 to 13 provide the reflection information determining module 201, the light source determining module 202, the second image acquiring module 203, the second depth image acquiring module 204, and the adding module 205 in the apparatus for generating an image. The operations and functions of the image segmentation unit 202a, the feature vector determination unit 202b, the light source position determination unit 202c, and the light source intensity determination unit 202d included in the light source determination module 202, the position limitation unit 203a, the camera parameter determination unit 203b, the pixel coordinate determination unit 203c, and the second image determination unit 203d included in the second image acquisition module 203, the depth value determination unit 204a, and the second depth image determination unit 204b included in the second depth image acquisition module 204, the reference pixel point setting subunit 203c1, the data setting subunit 203c2, the three-dimensional coordinate calculation subunit 203c3, and the pixel coordinate calculation subunit 203c4 included in the pixel coordinate determination unit 203c may refer to the method for generating an image provided in fig. 1 to 7 described above, and are not described herein again to avoid repetition.
Exemplary electronic device
FIG. 14 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
As shown in fig. 14, the electronic device 11 includes one or more processors 11a and a memory 11 b.
The processor 11a may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 11 to perform desired functions.
Memory 11b may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 11a to implement the method of generating an image of the various embodiments of the present application described above and/or other desired functions. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device 11 may further include: an input device 11c and an output device 11d, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, the input device 11c may be a camera or a microphone, a microphone array, or the like as described above, for capturing an input signal of an image or a sound source. When the electronic device is a stand-alone device, the input means 11c may be a communication network connector for receiving the acquired input signals from the neural network processor.
Further, the input device 11c may include, for example, a keyboard, a mouse, and the like.
The output device 11d can output various information including the determined output voltage, output current information, and the like to the outside. The output devices 11d may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
Of course, for the sake of simplicity, only some of the components related to the present application in the electronic device 11 are shown in fig. 14, and components such as a bus, an input/output interface, and the like are omitted. In addition, the electronic device 11 may include any other suitable components, depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps of the method of generating an image according to various embodiments of the present application described in the "Exemplary method" section of this specification above.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps of the method of generating an image according to various embodiments of the present application described in the "Exemplary method" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are only given as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. A method of generating an image, comprising:
determining reflection information of each pixel point in the first image;
determining light source information in a scene where the first image is shot according to the first image, the surface normal map corresponding to the first image and the reflection information;
editing and rendering a preset object model to be added in the first image according to the first image, the surface normal map, the reflection information and the light source information to obtain a second image;
and obtaining a second depth image corresponding to the second image according to the first depth image corresponding to the first image and the preset object model.
2. The method of claim 1, wherein the method further comprises:
determining a surface normal corresponding to each pixel point in the first depth image, to obtain the surface normal map corresponding to the first image.
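One common way to realize the step of claim 2 is to back-project the depth image to a point cloud and take the cross product of its image-space gradients. The sketch below only illustrates that standard technique and is not the formula used in the application; the pinhole intrinsics (fx, fy, cx, cy) and the use of numpy are assumptions.

```python
import numpy as np

def normals_from_depth(depth: np.ndarray, fx: float, fy: float,
                       cx: float, cy: float) -> np.ndarray:
    """Per-pixel surface normals estimated from a depth image of shape (H, W)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project every pixel to a 3D point with the pinhole camera model.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.dstack([x, y, depth])                     # (H, W, 3)
    # Tangent vectors from finite differences along image columns and rows.
    du = np.gradient(points, axis=1)
    dv = np.gradient(points, axis=0)
    n = np.cross(du, dv)                                  # normal = du x dv (sign depends on convention)
    n /= np.linalg.norm(n, axis=2, keepdims=True) + 1e-8  # normalize, avoiding division by zero
    return n

# Example: a gently tilted synthetic plane; the normals are roughly constant.
# depth = 2.0 + 0.001 * np.arange(480)[:, None] * np.ones((480, 640))
# print(normals_from_depth(depth, 500.0, 500.0, 320.0, 240.0)[240, 320])
```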
3. The method of claim 1, wherein the light source information comprises a light source position and a light source intensity, and wherein determining the light source information in the scene in which the first image was captured, according to the first image, the surface normal map corresponding to the first image, and the reflection information, comprises:
performing image segmentation on the first image to obtain a plurality of image subregions;
determining a feature vector for each of the image sub-regions using the surface normal map and the reflection information;
determining the light source position in the first image according to the feature vector of each image subregion and a preset binary-classification neural network for light source detection;
and determining the light source intensity in the first image according to the first image and the light source position in the first image.
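Claim 3 segments the first image, builds a feature vector per subregion, and scores each subregion with a preset binary-classification network. In the minimal sketch below, a uniform grid stands in for the segmentation and a logistic scorer stands in for the network; both, as well as the chosen features, are placeholders for illustration rather than the application's actual components.

```python
import numpy as np

def light_source_position(image, normal_map, reflection, grid=(4, 4), w=None, b=0.0):
    """Return the center of the subregion most likely to contain the light source.

    image: (H, W, 3), normal_map: (H, W, 3), reflection: (H, W, C).
    A fixed grid replaces image segmentation and a logistic scorer (weights w,
    bias b) replaces the preset binary-classification network of claim 3.
    """
    h, w_img = image.shape[:2]
    gh, gw = grid
    best_score, best_center = -np.inf, None
    for i in range(gh):
        for j in range(gw):
            ys = slice(i * h // gh, (i + 1) * h // gh)
            xs = slice(j * w_img // gw, (j + 1) * w_img // gw)
            # Feature vector: mean brightness, mean surface normal, mean reflection.
            feat = np.concatenate([
                [image[ys, xs].mean()],
                normal_map[ys, xs].reshape(-1, 3).mean(axis=0),
                reflection[ys, xs].reshape(-1, reflection.shape[-1]).mean(axis=0),
            ])
            if w is None:
                w_vec = np.zeros_like(feat)
                w_vec[0] = 1.0           # brightness-only fallback weights
            else:
                w_vec = w
            score = 1.0 / (1.0 + np.exp(-(feat @ w_vec + b)))   # logistic "classifier"
            if score > best_score:
                best_score = score
                best_center = ((ys.start + ys.stop) // 2, (xs.start + xs.stop) // 2)
    return best_center, best_score

def light_source_intensity(image, position, radius=5):
    """Estimate intensity as the mean brightness around the detected position."""
    y, x = position
    patch = image[max(0, y - radius): y + radius, max(0, x - radius): x + radius]
    return float(patch.mean())
```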
4. The method of claim 3, wherein editing and rendering the preset object model to be added to the first image, according to the first image, the surface normal map, the reflection information, and the light source information, to obtain the second image comprises:
constraining the placement position of the preset object model by means of the surface normal map;
determining camera parameters of the first image;
determining pixel coordinates of the preset object model according to the camera parameters of the first image and the three-dimensional coordinates of the preset object model;
and editing and rendering the first image and the preset object model according to the pixel coordinates of the preset object model, the reflection information, the light source position and the light source intensity to obtain the second image.
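For the placement-constraint step of claim 4, one plausible reading is that the surface normal map restricts the model to roughly horizontal supporting surfaces. The sketch below illustrates such a constraint; the up-vector, the threshold, and this interpretation itself are assumptions of the sketch, not details taken from the application.

```python
import numpy as np

def valid_placement_mask(normal_map: np.ndarray,
                         up=(0.0, -1.0, 0.0),
                         min_cos: float = 0.95) -> np.ndarray:
    """Boolean mask of pixels where the preset object model may be placed.

    Assumption: placement is allowed only where the surface normal is nearly
    parallel to the camera's up direction, i.e. on roughly horizontal surfaces
    such as the ground (y points down in camera coordinates, so up is (0,-1,0)).
    """
    up = np.asarray(up, dtype=np.float64)
    up = up / np.linalg.norm(up)
    cos_angle = np.abs(normal_map @ up)   # (H, W): |n . up| per pixel
    return cos_angle >= min_cos
```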
5. The method of claim 4, wherein obtaining the second depth image corresponding to the second image according to the first depth image corresponding to the first image and the preset object model comprises:
obtaining the depth value of each pixel point in the preset object model according to the three-dimensional coordinates of the preset object model;
and obtaining the second depth image according to the first depth image and the depth value of each pixel point in the preset object model.
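Claim 5 merges the depth values of the inserted model into the first depth image. Below is a minimal z-buffer-style sketch, assuming a per-pixel model depth map and coverage mask have already been rasterized from the model's three-dimensional coordinates; the minimum-depth merge rule is an assumption of this sketch.

```python
import numpy as np

def compose_depth(first_depth: np.ndarray,
                  model_depth: np.ndarray,
                  model_mask: np.ndarray) -> np.ndarray:
    """Merge the rendered model's depth into the first depth image.

    model_depth holds, for pixels covered by the preset object model
    (model_mask == True), the depth derived from the model's 3D coordinates.
    The inserted object overrides the background only where it is closer
    to the camera, as in a z-buffer.
    """
    second_depth = first_depth.copy()
    closer = model_mask & (model_depth < first_depth)
    second_depth[closer] = model_depth[closer]
    return second_depth
```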
6. The method of claim 4, wherein determining the pixel coordinates of the preset object model according to the camera parameters of the first image and the three-dimensional coordinates of the preset object model comprises:
setting a reference pixel point of the preset object model;
setting pixel coordinates and a depth value of the reference pixel point;
calculating three-dimensional coordinates of the reference pixel point by using a preset three-dimensional coordinate calculation formula, according to the camera parameters of the first image and the pixel coordinates and depth value of the reference pixel point;
and calculating the pixel coordinates of each pixel point in the preset object model by using a preset pixel coordinate calculation formula, according to the camera parameters of the first image, the three-dimensional coordinates of the reference pixel point, the three-dimensional coordinates of the preset object model, and the relative position between the reference pixel point and each pixel point in the preset object model.
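Claim 6 anchors the model at a reference pixel point: the reference pixel is back-projected to a three-dimensional point from its pixel coordinates, depth value, and the camera parameters, and every model point is then projected back to pixel coordinates. The sketch below uses a standard pinhole camera model; the intrinsic matrix K, the example values, and the assumption that the model's coordinates are expressed relative to the reference point are all illustrative rather than the application's preset formulas.

```python
import numpy as np

def backproject(u: float, v: float, depth: float, K: np.ndarray) -> np.ndarray:
    """Pinhole back-projection of pixel (u, v) with known depth to a 3D point."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

def project_model(model_points: np.ndarray, anchor_3d: np.ndarray,
                  K: np.ndarray) -> np.ndarray:
    """Project model vertices (N, 3), given relative to the reference point,
    into pixel coordinates after translating them to anchor_3d."""
    pts = model_points + anchor_3d        # place the model in camera space
    uv = (K @ pts.T).T                    # (N, 3) homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]         # perspective division

# Illustrative values: anchor the model 5 m in front of the camera at a chosen pixel.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
anchor = backproject(u=300.0, v=260.0, depth=5.0, K=K)
cube = np.array([[0, 0, 0], [0.5, 0, 0], [0, 0.5, 0], [0, 0, 0.5]], dtype=float)
print(project_model(cube, anchor, K))
```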
7. The method according to any one of claims 1-6, wherein before editing and rendering the preset object model to be added to the first image, according to the first image, the surface normal map, the reflection information, and the light source information, to obtain the second image, the method further comprises:
adding the preset object model to the first image.
8. An apparatus for generating an image, comprising:
a reflection information determining module, configured to determine reflection information of each pixel point in a first image;
a light source determining module, configured to determine light source information in the scene in which the first image was captured, according to the first image, a surface normal map corresponding to the first image, and the reflection information;
a second image acquisition module, configured to edit and render a preset object model to be added to the first image, according to the first image, the surface normal map, the reflection information, and the light source information, to obtain a second image;
and a second depth image acquisition module, configured to obtain a second depth image corresponding to the second image according to a first depth image corresponding to the first image and the preset object model.
9. A computer-readable storage medium, storing a computer program for executing the method of generating an image according to any one of claims 1 to 7.
10. An electronic device, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the method of generating an image according to any one of claims 1-7.
CN201910068605.7A 2019-01-24 2019-01-24 Method and device for generating image and electronic equipment Active CN111476834B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910068605.7A CN111476834B (en) 2019-01-24 2019-01-24 Method and device for generating image and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910068605.7A CN111476834B (en) 2019-01-24 2019-01-24 Method and device for generating image and electronic equipment

Publications (2)

Publication Number Publication Date
CN111476834A true CN111476834A (en) 2020-07-31
CN111476834B CN111476834B (en) 2023-08-11

Family

ID=71743594

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910068605.7A Active CN111476834B (en) 2019-01-24 2019-01-24 Method and device for generating image and electronic equipment

Country Status (1)

Country Link
CN (1) CN111476834B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002117413A (en) * 2000-10-10 2002-04-19 Univ Tokyo Image generating device and image generating method for reflecting light source environmental change in real time
US20140126835A1 (en) * 2012-11-08 2014-05-08 Sony Corporation Image processing apparatus and method, and program
US20150161818A1 (en) * 2012-07-30 2015-06-11 Zinemath Zrt. System And Method For Generating A Dynamic Three-Dimensional Model
CN105825544A (en) * 2015-11-25 2016-08-03 维沃移动通信有限公司 Image processing method and mobile terminal
US20160232707A1 (en) * 2014-01-22 2016-08-11 Tencent Technology (Shenzhen) Company Limited Image processing method and apparatus, and computer device
CN106710003A (en) * 2017-01-09 2017-05-24 成都品果科技有限公司 Three-dimensional photographing method and system based on OpenGL ES (Open Graphics Library for Embedded System)
CN106873828A (en) * 2017-01-21 2017-06-20 司承电子科技(上海)有限公司 A kind of implementation method of the 3D press key input devices for being applied to virtual reality products
WO2017192467A1 (en) * 2016-05-02 2017-11-09 Warner Bros. Entertainment Inc. Geometry matching in virtual reality and augmented reality
CN108509887A (en) * 2018-03-26 2018-09-07 深圳超多维科技有限公司 A kind of acquisition ambient lighting information approach, device and electronic equipment
CN108525298A (en) * 2018-03-26 2018-09-14 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN109087346A (en) * 2018-09-21 2018-12-25 北京地平线机器人技术研发有限公司 Training method, training device and the electronic equipment of monocular depth model
CN109118582A (en) * 2018-09-19 2019-01-01 东北大学 A kind of commodity three-dimensional reconstruction system and method for reconstructing
CN109155078A (en) * 2018-08-01 2019-01-04 深圳前海达闼云端智能科技有限公司 Generation method, device, electronic equipment and the storage medium of the set of sample image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ENRICO BONI: "Ultrasound Open Platforms for Next-Generation Imaging Technique Development" *
刘万奎; 刘越: "A Survey of Illumination Estimation for Augmented Reality" (用于增强现实的光照估计研究综述) *
张艾嘉; 赵岩; 王世刚: "An Illumination Estimation Method for Multiple Reflection Phenomena" (适于多种反射现象的光照估计方法) *

Also Published As

Publication number Publication date
CN111476834B (en) 2023-08-11

Similar Documents

Publication Publication Date Title
KR102051889B1 (en) Method and system for implementing 3d augmented reality based on 2d data in smart glass
CN108381549B (en) Binocular vision guide robot rapid grabbing method and device and storage medium
US11481862B2 (en) System and method for real-time, simultaneous object detection and semantic segmentation
CN110782517B (en) Point cloud labeling method and device, storage medium and electronic equipment
CN109117806B (en) Gesture recognition method and device
US10909724B2 (en) Method, apparatus, and computer readable medium for adjusting color annotation of an image
CN112651881B (en) Image synthesizing method, apparatus, device, storage medium, and program product
CN115239861A (en) Face data enhancement method and device, computer equipment and storage medium
CN114830177A (en) Electronic device and method for controlling the same
US20220301276A1 (en) Object detection device, object detection method, and computer readable medium
EP4068220A1 (en) Image processing device, image processing method, moving device, and storage medium
CN112308910A (en) Data generation method and device and storage medium
CN111369611B (en) Image pixel depth value optimization method, device, equipment and storage medium thereof
CN112668596B (en) Three-dimensional object recognition method and device, recognition model training method and device
CN111476834B (en) Method and device for generating image and electronic equipment
CN115861601B (en) Multi-sensor fusion sensing method and device
CN116017129A (en) Method, device, system, equipment and medium for adjusting angle of light supplementing lamp
CN104408720A (en) Image processing method and device
CN116228850A (en) Object posture estimation method, device, electronic equipment and readable storage medium
JP2021125137A (en) Image processing apparatus and image processing method
CN112541948B (en) Object detection method, device, terminal equipment and storage medium
CN113033248A (en) Image identification method and device and computer readable storage medium
KR102617776B1 (en) Method and apparatus for automatically generating surface material of 3D model
CN115273013B (en) Lane line detection method, system, computer and readable storage medium
KR102419579B1 (en) Apparatus and method for generating learning data for determining whether intellectual property is infringed

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant