CN108399634B - RGB-D data generation method and device based on cloud computing

Info

Publication number
CN108399634B
Authority
CN
China
Prior art keywords
data
rgb
depth
virtual camera
virtual
Prior art date
Legal status
Active
Application number
CN201810041680.XA
Other languages
Chinese (zh)
Other versions
CN108399634A (en)
Inventor
王洛威
王恺
廉士国
Current Assignee
Cloudminds Beijing Technologies Co Ltd
Original Assignee
Cloudminds Beijing Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Cloudminds Beijing Technologies Co Ltd filed Critical Cloudminds Beijing Technologies Co Ltd
Priority to CN201810041680.XA
Publication of CN108399634A
Application granted
Publication of CN108399634B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images
    • G06T7/593 - Depth or shape recovery from stereo images
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The cloud-computing-based RGB-D data generation method and device comprise the following steps: establishing a virtual 3D scene model; determining the viewing positions of a virtual camera in the virtual 3D scene model and the viewing directions at each viewing position; acquiring RGB data and depth data with the virtual camera at each viewing position and direction; and generating RGB-D data from the RGB data and the depth data. With the method and device, massive RGB-D data can be generated quickly based on the virtual scene and virtual camera, and the data is richer and of better quality.

Description

RGB-D data generation method and device based on cloud computing
Technical Field
The present application relates to the technical field of three-dimensional real-scene imaging, and in particular to a method and device for generating RGB-D data based on cloud computing.
Background
RGB-D (RGB-Depth, color plus depth) training data generally refers to data, acquired with a color camera and a depth camera, that contains both image color and depth. Such data can be used for three-dimensional reconstruction or three-dimensional positioning experiments and evaluations, and can provide training data for certain deep learning algorithms, for example the deep learning algorithms used in autonomous vehicle navigation.
In the prior art, RGB-D training data is collected from a scene with a real color camera and a real depth camera. For example, Matterport's RGB-D data collection method works as follows. A camera rig containing 3 color cameras and 3 depth cameras is fixed on a tripod, with the three camera pairs pointing upward, horizontally, and downward. For each panorama, the rig is rotated about the vertical axis to 6 different directions (i.e. one capture every 60 degrees); each color camera shoots high-dynamic-range images in the 6 directions, while each depth camera collects depth data continuously as the rig rotates. The generated depth images are integrated and then registered with each color photograph. Each final panorama thus consists of 18 color pictures, with its center point exactly at the height of the tripod. An example of the collected data is shown in Fig. 1, where Fig. 1a is the RGB data and Fig. 1b is the depth data.
This prior-art solution is suitable for collecting data from small scenes, such as the RGB-D data of a single room. The richness of the collected data depends on the camera angles and positions set by the operator; the quality of the collected RGB data depends on the noise introduced by the optical and digital components of the camera; and the quality of the collected depth data depends on the depth algorithm and accuracy of the depth camera.
The defects of the prior art are as follows:
The collected RGB-D data is not rich enough; generating large amounts of RGB-D data for a large scene is time-consuming; and, limited by the capabilities of color cameras and depth cameras, the quality of the acquired RGB-D data is poor.
Disclosure of Invention
The embodiments of the present application provide a method and device for generating RGB-D data based on cloud computing, mainly to provide a scheme that can quickly generate rich, high-quality RGB-D data.
In one aspect, an embodiment of the present application provides a cloud-computing-based RGB-D data generation method, the method comprising: establishing a virtual 3D scene model; determining the viewing positions of a virtual camera in the virtual 3D scene model and the viewing directions at each viewing position; acquiring RGB data and depth data with the virtual camera at each viewing position and direction; and generating RGB-D data from the RGB data and the depth data.
In another aspect, an embodiment of the present application provides a cloud-computing-based RGB-D data generation apparatus, the apparatus comprising: a scene establishing module for establishing a virtual 3D scene model; a view determination module for determining the viewing positions of a virtual camera in the virtual 3D scene model and the viewing directions at each viewing position; a data acquisition module for causing the virtual camera to acquire RGB data and depth data at each viewing position and direction; and a data generation module for generating RGB-D data from the RGB data and the depth data.
In another aspect, an embodiment of the present application provides an electronic device, where the electronic device includes: a memory, one or more processors; and one or more modules stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the steps of the above-described methods.
In another aspect, the present embodiments provide a computer program product for use in conjunction with an electronic device, the computer program product comprising a computer program embodied in a computer-readable storage medium, the computer program comprising instructions for causing the electronic device to perform the steps of the above-described method.
The beneficial effects of the embodiment of the application are as follows:
according to the method and the device, the massive RGB-D data can be generated quickly based on the virtual scene and the virtual camera, and the data are richer and the quality is better.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, are incorporated in and constitute a part of this disclosure. The exemplary embodiments and descriptions thereof are provided to explain the present application and do not constitute an undue limitation on the present application. Wherein:
FIG. 1a shows a schematic diagram of the results of collecting RGB data based on the prior art;
FIG. 1b shows a schematic diagram of the results of acquiring depth data based on the prior art;
fig. 2 is a schematic flowchart illustrating a cloud computing-based RGB-D data generation method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram illustrating a cloud computing-based RGB-D data generating apparatus according to a second embodiment of the present application;
fig. 4 shows a schematic structural diagram of an electronic device in a third embodiment of the present application.
Detailed Description
To make the technical solutions and advantages of the present application clearer, exemplary embodiments of the present application are described in further detail below with reference to the accompanying drawings. Clearly, the described embodiments are only some of the embodiments of the present application, not an exhaustive list of all of them. The embodiments, and the features of the embodiments, in this description may be combined with each other where no conflict arises.
The inventor noticed during the course of the invention that, in existing schemes that generate RGB-D data with a color camera and a depth camera, the collected RGB-D data is not rich enough; generating large amounts of RGB-D data for a large scene is time-consuming; and, limited by the capabilities of color cameras and depth cameras, the quality of the acquired RGB-D data is poor.
To overcome these defects, the present application provides a cloud-computing-based RGB-D data generation method in which a virtual camera acquires RGB data and depth data at each viewing position and direction in a virtual 3D scene model, and RGB-D data is generated from the acquired data. With this method, massive RGB-D data can be generated quickly based on the virtual scene and virtual camera, and the data is richer and of better quality.
The essence of the technical solutions of the embodiments of the present application is further explained below through specific examples.
Example one:
fig. 2 is a schematic flow chart of a cloud computing-based RGB-D data generation method in an embodiment of the present application, and as shown in fig. 2, the cloud computing-based RGB-D data generation method includes:
step 101, establishing a virtual 3D scene model;
step 103, determining the viewing positions of a virtual camera in the virtual 3D scene model and the viewing directions at each viewing position;
step 105, the virtual camera acquiring RGB data and depth data at each viewing position and direction;
step 107, generating RGB-D data from the RGB data and the depth data.
In step 101, a virtual 3D scene model is built.
One or more 3D object models can be placed or built at any position in the virtual 3D scene model to form the 3D scene model. The scene model does not need to be built from physical objects, is not limited by space, and is not affected by external factors such as weather, people, or vehicles in the scene, so even a huge and complex virtual 3D scene model, such as a city model, can be built.
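As an illustration only, the following Python sketch shows one possible way to represent such a self-built scene model as a list of placed 3D object models. The class and field names (`ObjectModel`, `SceneModel`, `mesh_path`) are hypothetical and not taken from the patent; carrying a semantic label per object anticipates step 108 below.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectModel:
    label: str                         # semantic label, e.g. "chair" (used in step 108)
    mesh_path: str                     # path to the 3D asset file
    position: tuple                    # (x, y, z) placement in scene coordinates
    rotation: tuple = (0.0, 0.0, 0.0)  # Euler angles, degrees

@dataclass
class SceneModel:
    objects: list = field(default_factory=list)

    def place(self, obj: ObjectModel) -> None:
        """Place or build a 3D object model at any position in the scene."""
        self.objects.append(obj)

scene = SceneModel()
scene.place(ObjectModel("table", "assets/table.obj", (0.0, 0.0, 2.0)))
scene.place(ObjectModel("chair", "assets/chair.obj", (1.0, 0.0, 2.5)))
```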
In step 103, respective viewing positions of the virtual camera in the virtual 3D scene model and respective viewing directions at each viewing position are determined.
The virtual camera is a camera model that can move freely within the virtual 3D scene. It has intrinsic parameters similar to those of a real camera, and setting different camera parameters gives the images and depth data acquired by the virtual camera different sizes, different accuracies, and so on.
The viewing positions, and the viewing directions at each position, may be input by a user or generated automatically from the virtual 3D scene. For example, an operator may specify the trajectory along which the virtual camera moves through the virtual 3D scene and the direction angle or angles to frame at each point of the trajectory. Alternatively, the system may determine, from the size of the virtual 3D scene and the density of its objects, that points spaced at regular intervals and free of virtual objects are the viewing positions, each with 6 mutually perpendicular viewing directions, and so on. A large number of viewing positions can be set for a large virtual 3D scene model, and a large number of viewing directions for each position, so a large amount of data can be acquired.
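A minimal sketch of the automatic variant described above, assuming a hypothetical `is_occupied` test supplied by the scene model; the grid interval and scene bounds are illustrative only.

```python
import itertools
import numpy as np

# 6 mutually perpendicular viewing directions, one set per position
DIRECTIONS = np.array([[1, 0, 0], [-1, 0, 0],
                       [0, 1, 0], [0, -1, 0],
                       [0, 0, 1], [0, 0, -1]], dtype=float)

def grid_viewpoints(scene_min, scene_max, step, is_occupied):
    """Sample viewing positions on a regular grid, keeping only points
    that do not fall inside a virtual object."""
    axes = [np.arange(lo, hi, step) for lo, hi in zip(scene_min, scene_max)]
    return [(np.array(p), DIRECTIONS)
            for p in itertools.product(*axes)
            if not is_occupied(np.array(p))]

# usage: one candidate position every 0.5 m in a 10 m x 10 m x 3 m scene
viewpoints = grid_viewpoints((0, 0, 0), (10, 10, 3), 0.5,
                             is_occupied=lambda p: False)  # stub occupancy test
```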
Because no operator needs to move a physical camera to each position and adjust its view angle, the whole process can be executed automatically by a program, so data for more positions and more direction angles can be acquired in a short time, and the positions and directions are set more precisely.
In step 105, the virtual camera acquires RGB data and depth data at the respective viewing positions and directions.
At each set viewing position, the virtual camera adjusts its viewing direction in turn and acquires data, collecting RGB data and depth data simultaneously.
The RGB data is obtained by rendering an image of the virtual 3D scene model at the current position and in the current direction. Thanks to the development of image rendering technology in recent years, rendered RGB images are very fine and realistic, and the rendering process generally takes only 10-30 milliseconds. Moreover, because the virtual camera obtains RGB data through image rendering, the noise introduced by the electronic and optical components of a real camera is avoided, so the RGB data is free of noise interference and more accurate.
The depth data is the depth of the virtual 3D scene model as seen from the current position and direction. Since the position and direction of the virtual camera are known, and the position of every point in the virtual 3D scene model is known, the current depth data can be computed directly. The depth acquisition range of the virtual camera can reach the minimum and maximum values supported by the computer, far beyond the 20-300 cm range of a real depth camera; and because the depth values can be interpolated without environmental interference, the collected depth data contains no holes.
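As a sketch of this computation, assuming the renderer produces a standard OpenGL-style normalized depth buffer (an assumption; the patent does not fix a rendering API), the buffer values can be converted to metric distances as follows:

```python
import numpy as np

def linearize_depth(zbuf, near, far):
    """Convert normalized depth-buffer values in [0, 1] from a standard
    perspective projection into metric distances from the virtual camera."""
    ndc = 2.0 * zbuf - 1.0                                    # [0,1] -> [-1,1]
    return (2.0 * near * far) / (far + near - ndc * (far - near))

# stand-in buffer; in practice this comes from the virtual camera's depth cache
zbuf = np.random.rand(480, 640).astype(np.float32)
depth_m = linearize_depth(zbuf, near=0.01, far=1000.0)        # metres
```

The near and far planes here play the role of the minimum and maximum supported depth values mentioned above.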
In step 107, RGB-D data is generated from the RGB data and the depth data.
The RGB data and depth data acquired in each direction at each position in step 105 are stored in association, giving the RGB-D data for that position and direction. With a large number of viewing positions and directions, a data set containing a large amount of RGB-D data can be generated and applied to experiments or evaluations of three-dimensional reconstruction or three-dimensional positioning, or used as training data for certain deep learning algorithms, and so on.
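The association of step 107 might be stored as in the following sketch; the file layout and field names are illustrative assumptions, not part of the patent.

```python
import json
from pathlib import Path
import numpy as np

def save_rgbd_sample(out_dir, idx, rgb, depth, position, direction):
    """Store one RGB-D sample together with the pose it was captured from."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    np.save(out / f"{idx:06d}_rgb.npy", rgb)        # H x W x 3, uint8
    np.save(out / f"{idx:06d}_depth.npy", depth)    # H x W, float32, metres
    meta = {"position": [float(x) for x in position],
            "direction": [float(x) for x in direction]}
    (out / f"{idx:06d}_meta.json").write_text(json.dumps(meta))
```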
In some embodiments, the method further includes a step 1021 of determining a memory space associated with a depth cache of the virtual camera, where the depth cache stores the depth data acquired by the virtual camera; in step 105, the depth data is acquired into the depth cache; and in step 107, the RGB-D data is generated from the RGB data and the mapping data of the depth data, the mapping data being the depth data acquired into the depth cache as mapped into the memory space associated with the depth cache.
The depth buffer memory, depth buffer (or depth cache) for short, is a direct, real-time image of the depth coordinates acquired by the virtual camera: each storage unit of the depth buffer corresponds to one pixel acquired by the virtual camera, and the whole depth buffer corresponds to one frame of depth image data. A memory space associated with the depth buffer is requested, and while the virtual camera acquires depth data, that data is mapped from hardware into the memory space; this is the mapping data of the depth data, which allows the system to generate and store the RGB-D data more quickly from the RGB data and the mapping data of the depth data.
In some embodiments, the method further includes a step 1022 of determining a memory space associated with a frame buffer of the virtual camera, where the frame buffer stores the RGB data acquired by the virtual camera; in step 105, the RGB data is acquired into the frame buffer; and in step 107, the RGB-D data is generated from the depth data and the mapping data of the RGB data, the mapping data being the RGB data acquired into the frame buffer as mapped into the memory space associated with the frame buffer.
The frame buffer memory, frame buffer for short, is a direct, real-time image of the RGB image acquired by the virtual camera: each storage unit of the frame buffer corresponds to one pixel acquired by the virtual camera, and the whole frame buffer corresponds to one frame of RGB image data. A memory space associated with the frame buffer is requested, and while the virtual camera acquires RGB data, that data is mapped from hardware into the memory space; this is the mapping data of the RGB data, which allows the system to generate and store the RGB-D data more quickly from the depth data and the mapping data of the RGB data.
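A minimal PyOpenGL sketch of reading both buffers back into host memory, assuming an already-initialized OpenGL context in which the virtual camera has just rendered a frame. The patent describes the mapping mechanism abstractly; `glReadPixels` is just one concrete way to realize it.

```python
import numpy as np
from OpenGL.GL import (glReadPixels, GL_RGB, GL_UNSIGNED_BYTE,
                       GL_DEPTH_COMPONENT, GL_FLOAT)

def read_frame_and_depth(width, height):
    """Copy the current frame buffer (RGB) and depth buffer into NumPy arrays."""
    rgb_raw = glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE)
    depth_raw = glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT)
    rgb = np.frombuffer(rgb_raw, dtype=np.uint8).reshape(height, width, 3)
    depth = np.frombuffer(depth_raw, dtype=np.float32).reshape(height, width)
    # OpenGL rows run bottom-up; flip to conventional top-down order
    return rgb[::-1].copy(), depth[::-1].copy()
```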
In some embodiments, the method further comprises a step 104 of determining at least one set of illumination conditions of the virtual 3D scene model; in step 105, the virtual camera then acquires RGB data and depth data at each viewing position and direction and under each set of illumination conditions.
A set of illumination conditions may include one or a combination of illumination angle, intensity, color, and light-source type, and multiple different sets of illumination conditions may be defined for the same virtual 3D scene model. The order of steps 104 and 103 is not constrained; it suffices that at least one set of illumination conditions is determined before step 105.
In step 105, whenever RGB data and depth data are acquired at a set viewing position and direction, the virtual 3D scene model must be under a preset illumination condition. When multiple sets of illumination conditions are preset, the illumination can be switched through the sets at each viewing position and direction to acquire multiple sets of RGB data for that position and direction, each of which is then associated with the depth data of that position and direction to generate RGB-D data. Alternatively, all viewing positions and directions can first be captured under one set of illumination conditions, after which the illumination is changed and all positions and directions are captured again.
With multiple sets of illumination conditions, richer RGB data can be collected from the same virtual 3D scene model, and hence richer RGB-D data can be generated.
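Putting the pieces together, the acquisition loop over positions, directions, and illumination conditions could look like the following sketch. `set_lighting`, `render_rgb`, and `render_depth` are hypothetical renderer hooks (stubbed here so the sketch runs), `viewpoints` comes from the grid-sampling sketch above, and `save_rgbd_sample` is the storage helper sketched earlier.

```python
import numpy as np

# Hypothetical renderer hooks; a real implementation would drive the engine.
def set_lighting(cond): pass
def render_rgb(pos, d):   return np.zeros((480, 640, 3), dtype=np.uint8)
def render_depth(pos, d): return np.zeros((480, 640), dtype=np.float32)

lighting_conditions = [
    {"angle": (45, 0),  "intensity": 1.0, "color": (255, 255, 255), "type": "directional"},
    {"angle": (10, 90), "intensity": 0.4, "color": (255, 220, 180), "type": "point"},
]

idx = 0
for position, directions in viewpoints:
    for direction in directions:
        for light in lighting_conditions:
            set_lighting(light)                        # step 104 conditions
            rgb = render_rgb(position, direction)      # image rendering (step 105)
            depth = render_depth(position, direction)  # depth from known geometry
            save_rgbd_sample("dataset", idx, rgb, depth, position, direction)
            idx += 1
```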
In some embodiments, further comprising:
step 108, determining semantic information of the virtual 3D scene model;
step 109, determining semantic information of a scene model corresponding to the RGB-D data acquired by the virtual camera according to the depth data acquired by the virtual camera and the position and direction of the virtual camera when acquiring the depth data.
The order of step 108 relative to steps 101-107 is not constrained; step 108 can usually be completed within step 101: when the virtual 3D scene model is established, it is known which 3D object models were selected and where they were placed or built, and the semantics of those 3D object models are also known, so the semantic information of the 3D object model to which each point in the virtual 3D scene model belongs can be determined.
Because the viewing position and direction of the virtual camera are known when the depth data is acquired in the virtual 3D scene model, and the intrinsic parameters of the virtual camera are also known, in step 109 the coordinate point corresponding to each pixel of the current depth data can be computed in reverse from the acquired depth data. Since it is known which 3D object models in the virtual 3D scene model those coordinate points fall on, the semantic information corresponding to the depth data acquired at the current position and direction is determined; after the depth data is associated with the RGB data, the semantic information of the scene model corresponding to the current RGB-D data is obtained.
Take a pixel with screen coordinates (u, v) in the virtual camera as an example. The depth of each pixel can be read from the acquired depth data, and since the intrinsic parameters of the virtual camera are known, the 3D position of the pixel in the virtual camera's coordinate system is known. Meanwhile, the rotation and translation of the camera relative to the virtual 3D scene model can be determined from the virtual camera's viewing position and direction, so the position of the pixel in the virtual 3D scene model can be solved for, and the semantic information obtained. The calculation formula is:

$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

where $z_c$ is the scale factor, $K$ is the matrix of virtual camera intrinsic parameters, and $R$ and $T$ are the rotation and translation of the camera relative to the virtual 3D scene model, respectively. Solving for the coordinates $(x_w, y_w, z_w)$ of a pixel point in the virtual 3D scene model then yields the corresponding semantic information.
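A NumPy sketch of inverting this formula for a single pixel; the intrinsic matrix values are illustrative assumptions for a 640x480 virtual camera, not parameters from the patent.

```python
import numpy as np

def pixel_to_world(u, v, z_c, K, R, T):
    """Invert z_c * [u, v, 1]^T = K (R X_w + T): recover the scene-coordinate
    point X_w = (x_w, y_w, z_w) from pixel (u, v) and its depth z_c."""
    p_cam = z_c * (np.linalg.inv(K) @ np.array([u, v, 1.0]))  # point in camera frame
    return R.T @ (p_cam - T)

K = np.array([[525.0,   0.0, 320.0],     # illustrative intrinsics
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, T = np.eye(3), np.zeros(3)            # camera aligned with scene axes
X_w = pixel_to_world(320, 240, z_c=2.0, K=K, R=R, T=T)
# X_w can now be looked up in the scene model to obtain the semantic label
```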
Generating RGB-D data for a large-scale scene requires a large amount of computation. In the present application, a cloud server with strong computing power can perform the relevant computation, and the generated data set is stored in cloud storage for each terminal to retrieve.
With the method and device of the present application, massive RGB-D data can be generated quickly based on the virtual scene and virtual camera, and the data is richer and of better quality. By mapping the depth buffer and/or frame buffer into memory space, the depth data and/or RGB data can be fetched more quickly. Combined with different illumination conditions, an even larger amount of rich RGB-D data can be generated. Meanwhile, because the virtual 3D scene model is self-built, all semantic information in the scene model is available, so the semantic information corresponding to each RGB-D sample can be determined when the RGB-D data is generated, yielding a data set with richer content.
Example two:
based on the same inventive concept, the embodiment of the application also provides a cloud computing-based RGB-D data generation device, and because the problem solving principles of these devices are similar to the cloud computing-based RGB-D data generation method, the implementation of these devices can refer to the implementation of the method, and repeated details are not repeated. As shown in fig. 3, the cloud computing-based RGB-D data generating apparatus 200 includes:
a scene establishing module 201, configured to establish a virtual 3D scene model;
a view determination module 202 for determining respective view positions and respective view directions of a virtual camera at each view position in the virtual 3D scene model;
a data acquisition module 203, configured to cause the virtual camera to acquire RGB data and depth data at each viewing position and direction;
and a data generating module 204, configured to generate RGB-D data according to the RGB data and the depth data.
In some embodiments, the apparatus 200 further comprises:
a model semantic module 205, configured to determine semantic information of the virtual 3D scene model;
and a semantic determining module 206, configured to determine semantic information of a scene model corresponding to the RGB-D data acquired by the virtual camera according to the depth data acquired by the virtual camera and the position and direction of the virtual camera when the depth data is acquired.
In some embodiments, the apparatus 200 further comprises: a cache association module 207, configured to determine a memory space associated with a depth cache of the virtual camera, where the depth cache is used to store depth data acquired by the virtual camera;
the acquiring the depth data comprises acquiring the depth data to the depth cache;
the data generating module 204 is specifically configured to generate RGB-D data according to the mapping data of the RGB data and the depth data, where the mapping data of the depth data is data in a memory space associated with the depth cache to which the depth data acquired to the depth cache is mapped.
In some embodiments, the apparatus 200 further comprises: a buffer association module 207, configured to determine a memory space associated with a frame buffer of the virtual camera, where the frame buffer is used to store RGB data acquired by the virtual camera;
the collecting the RGB data comprises collecting the RGB data to the frame buffer;
the data generating module 204 is specifically configured to generate RGB-D data according to the mapping data of the RGB data and the depth data, where the mapping data of the RGB data is data obtained by mapping the RGB data collected to the frame buffer to a memory space associated with the frame buffer.
In some embodiments, the apparatus 200 further comprises:
an illumination determination module 208 for determining at least one set of illumination conditions of the virtual 3D scene model;
the data acquisition module 203 is configured to cause the virtual camera to acquire RGB data and depth data at each viewing position and direction and under each set of illumination conditions.
Example three:
based on the same inventive concept, the embodiment of the application also provides the electronic device, and as the principle of the electronic device is similar to that of the RGB-D data generation method based on cloud computing, the implementation of the method can be referred to, and repeated details are not repeated. As shown in fig. 4, the electronic device 300 includes: memory 301, one or more processors 302; and one or more modules stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the steps of any of the above-described methods.
Example four:
based on the same inventive concept, the present application also provides a computer program product for use in conjunction with an electronic device, the computer program product comprising a computer program embedded in a computer-readable storage medium, the computer program comprising instructions for causing the electronic device to perform the steps of any of the above-described methods.
For convenience of description, the above apparatus has been described with its parts divided by function into various modules. Of course, when implementing the present application, the functionality of the various modules or units may be implemented in one or more pieces of software or hardware.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.

Claims (6)

1. A RGB-D data generation method based on cloud computing is characterized by comprising the following steps:
establishing a virtual 3D scene model;
determining respective viewing positions of a virtual camera in the virtual 3D scene model and respective viewing directions at each viewing position;
determining at least one set of lighting conditions of the virtual 3D scene model;
the virtual camera collects RGB data and collects depth data at each viewing position and direction and under each group of illumination conditions;
generating RGB-D data according to the RGB data and the depth data;
wherein generating RGB-D data from the RGB data and depth data further comprises:
determining a memory space associated with a depth cache of the virtual camera, wherein the depth cache is used for storing depth data acquired by the virtual camera, and generating RGB-D data according to mapping data of the RGB data and the depth data, and the mapping data of the depth data is data of the depth data acquired to the depth cache and mapped to the memory space associated with the depth cache; or
Determining a memory space associated with a frame cache of the virtual camera, wherein the frame cache is used for storing RGB data acquired by the virtual camera, and generating RGB-D data according to mapping data of the RGB data and the depth data, and the mapping data of the RGB data is data of the RGB data acquired to the frame cache and mapped to the memory space associated with the frame cache.
2. The method of claim 1, further comprising:
determining semantic information of the virtual 3D scene model;
and determining semantic information of a scene model corresponding to the RGB-D data acquired by the virtual camera according to the depth data acquired by the virtual camera and the position and the direction of the virtual camera when the depth data is acquired.
3. An apparatus for generating RGB-D data based on cloud computing, the apparatus comprising:
the scene establishing module is used for establishing a virtual 3D scene model;
a view determination module for determining respective view positions and respective view directions of a virtual camera at each view position in the virtual 3D scene model;
an illumination determination module for determining at least one set of lighting conditions of the virtual 3D scene model;
the data acquisition module is used for causing the virtual camera to acquire RGB data and depth data at each viewing position and direction and under each set of illumination conditions;
the data generation module is used for generating RGB-D data according to the RGB data and the depth data;
the cache association module is used for determining a memory space associated with a depth cache of the virtual camera, the depth cache is used for storing depth data acquired by the virtual camera, RGB-D data is generated according to the RGB data and mapping data of the depth data, and the mapping data of the depth data is data of the depth data acquired to the depth cache which is mapped to the memory space associated with the depth cache; or the buffer association module is configured to determine a memory space associated with a frame buffer of the virtual camera, where the frame buffer is configured to store RGB data acquired by the virtual camera, and generate RGB-D data according to mapping data of the RGB data and the depth data, where the mapping data of the RGB data is data in the memory space associated with the frame buffer to which the RGB data acquired by the frame buffer is mapped.
4. The apparatus of claim 3, further comprising:
the model semantic module is used for determining semantic information of the virtual 3D scene model;
and the semantic determining module is used for determining semantic information of a scene model corresponding to the RGB-D data acquired by the virtual camera according to the depth data acquired by the virtual camera and the position and the direction of the virtual camera when the depth data is acquired.
5. An electronic device, characterized in that the electronic device comprises:
a memory, one or more processors; and one or more modules stored in the memory and configured to be executed by the one or more processors, the one or more modules comprising instructions for performing the steps of the method of any of claims 1-2.
6. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 2.
CN201810041680.XA 2018-01-16 2018-01-16 RGB-D data generation method and device based on cloud computing Active CN108399634B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810041680.XA CN108399634B (en) 2018-01-16 2018-01-16 RGB-D data generation method and device based on cloud computing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810041680.XA CN108399634B (en) 2018-01-16 2018-01-16 RGB-D data generation method and device based on cloud computing

Publications (2)

Publication Number Publication Date
CN108399634A CN108399634A (en) 2018-08-14
CN108399634B (en) 2020-10-16

Family

ID=63094933

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810041680.XA Active CN108399634B (en) 2018-01-16 2018-01-16 RGB-D data generation method and device based on cloud computing

Country Status (1)

Country Link
CN (1) CN108399634B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109522840B (en) * 2018-11-16 2023-05-30 孙睿 Expressway vehicle flow density monitoring and calculating system and method
CN111701238B (en) * 2020-06-24 2022-04-26 腾讯科技(深圳)有限公司 Virtual picture volume display method, device, equipment and storage medium
CN112308910B (en) * 2020-10-10 2024-04-05 达闼机器人股份有限公司 Data generation method, device and storage medium
CN112802183A (en) * 2021-01-20 2021-05-14 深圳市日出印像数字科技有限公司 Method and device for reconstructing three-dimensional virtual scene and electronic equipment
CN113648654A (en) * 2021-09-03 2021-11-16 网易(杭州)网络有限公司 Game picture processing method, device, equipment, storage medium and program product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226838A (en) * 2013-04-10 2013-07-31 福州林景行信息技术有限公司 Real-time spatial positioning method for mobile monitoring target in geographical scene
CN103500467A (en) * 2013-10-21 2014-01-08 深圳市易尚展示股份有限公司 Constructive method of image-based three-dimensional model
CN104504671A (en) * 2014-12-12 2015-04-08 浙江大学 Method for generating virtual-real fusion image for stereo display
WO2015199470A1 (en) * 2014-06-25 2015-12-30 한국과학기술원 Apparatus and method for estimating hand position utilizing head mounted color depth camera, and bare hand interaction system using same
CN107481307A (en) * 2017-07-05 2017-12-15 国网山东省电力公司泰安供电公司 A kind of method of Fast rendering three-dimensional scenic

Also Published As

Publication number Publication date
CN108399634A (en) 2018-08-14

Similar Documents

Publication Publication Date Title
CN108399634B (en) RGB-D data generation method and device based on cloud computing
CN103945210B (en) A kind of multi-cam image pickup method realizing shallow Deep Canvas
CN111028155B (en) Parallax image splicing method based on multiple pairs of binocular cameras
WO2019062619A1 (en) Method, apparatus and system for automatically labeling target object within image
CN111062873A (en) Parallax image splicing and visualization method based on multiple pairs of binocular cameras
US20210090279A1 (en) Depth Determination for Images Captured with a Moving Camera and Representing Moving Features
CN104424640B (en) The method and apparatus for carrying out blurring treatment to image
US20170308998A1 (en) Motion Image Compensation Method and Device, Display Device
CN108038886B (en) Binocular camera system calibration method and device and automobile
US9253415B2 (en) Simulating tracking shots from image sequences
US20240046557A1 (en) Method, device, and non-transitory computer-readable storage medium for reconstructing a three-dimensional model
CN206563985U (en) 3-D imaging system
WO2021249401A1 (en) Model generation method and apparatus, image perspective determining method and apparatus, device, and medium
US20170064284A1 (en) Producing three-dimensional representation based on images of a person
JP7479729B2 (en) Three-dimensional representation method and device
CN109934873B (en) Method, device and equipment for acquiring marked image
US11922568B2 (en) Finite aperture omni-directional stereo light transport
CN111080776A (en) Processing method and system for human body action three-dimensional data acquisition and reproduction
CN109166176B (en) Three-dimensional face image generation method and device
US20200105056A1 (en) Dense reconstruction for narrow baseline motion observations
CN111292234A (en) Panoramic image generation method and device
EP4150560B1 (en) Single image 3d photography with soft-layering and depth-aware inpainting
CN112562057B (en) Three-dimensional reconstruction system and method
CN117252914A (en) Training method and device of depth estimation network, electronic equipment and storage medium
CN112634439B (en) 3D information display method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant