CN108399634A - RGB-D data generation method and device based on cloud computing - Google Patents
- Publication number
- CN108399634A (application CN201810041680.XA, filed as CN201810041680A)
- Authority
- CN
- China
- Prior art keywords
- data
- rgb
- depth
- acquisition
- virtual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Processing Or Creating Images (AREA)
Abstract
An RGB-D data generation method and device based on cloud computing. The method includes: building a virtual 3D scene model; determining each framing position of a virtual camera in the virtual 3D scene model and each viewing direction at each framing position; the virtual camera acquiring RGB data and depth data at each framing position and direction; and generating RGB-D data from the RGB data and depth data. In this application, the generation of massive RGB-D data can be completed rapidly from a virtual scene and a virtual camera, and the data are richer and of higher quality.
Description
Technical field
This application relates to the field of three-dimensional reality technology, and in particular to an RGB-D data generation method and device based on cloud computing.
Background technology
RGB-D (RGB-Depth, color plus depth) training data generally refers to data collected with a color camera and a depth camera that contains both image color and depth. Such data can be used for three-dimensional reconstruction or three-dimensional localization experiments and evaluation, and can also provide training data for certain deep learning algorithms, for example the deep learning algorithms used in autonomous-vehicle navigation.
In the prior art, RGB-D training data are collected by scanning a scene with a real color camera and depth camera. For example, the RGB-D data acquisition method of Matterport uses a tripod-mounted rig containing 3 color cameras and 3 depth cameras, with the three camera groups distributed at the top, middle, and bottom. For each panorama, the rig on the tripod is rotated about the vertical axis to 6 different directions (one shot every 60 degrees); each color camera captures high-dynamic-range images in the 6 directions, and each depth camera continuously collects depth data while the rig rotates, which is integrated into a depth image and registered with each color photograph. Each resulting panoramic image is thus composed of 18 color images, with its center point at the height of the tripod. The acquisition results are illustrated in Fig. 1, where Fig. 1a shows RGB data and Fig. 1b shows depth data.
The prior-art scheme is suited to acquiring data for small scenes, such as the RGB-D data of a single room. The richness of the collected data depends on the manually set camera angles and positions; the quality of the acquired RGB data depends on the noise introduced by the optical or digital camera used to acquire it; and the quality of the acquired depth data depends on the depth algorithm and precision of the depth camera.
The deficiencies of the prior art are: the acquired RGB-D data are not rich enough; generating large amounts of RGB-D data for a large scene is time-consuming; and, limited by the capabilities of the color camera and depth camera, the quality of the acquired RGB-D data is poor.
Summary of the invention
The embodiments of the present application propose an RGB-D data generation method and device based on cloud computing, mainly to provide a scheme that can quickly generate rich, high-quality RGB-D data.
In one aspect, an embodiment of the present application provides an RGB-D data generation method based on cloud computing, characterized in that the method includes: building a virtual 3D scene model; determining each framing position of a virtual camera in the virtual 3D scene model and each viewing direction at each framing position; the virtual camera acquiring RGB data and depth data at each framing position and direction; and generating RGB-D data from the RGB data and depth data.
In another aspect, an embodiment of the present application provides an RGB-D data generation device based on cloud computing, characterized in that the device includes: a scene building module, for building a virtual 3D scene model; a framing determination module, for determining each framing position of the virtual camera in the virtual 3D scene model and each viewing direction at each framing position; a data acquisition module, for causing the virtual camera to acquire RGB data and depth data at each framing position and direction; and a data generation module, for generating RGB-D data from the RGB data and depth data.
In another aspect, an embodiment of the present application provides an electronic device, characterized in that the electronic device includes: a memory and one or more processors; and one or more modules stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing each step of the above method.
In another aspect, an embodiment of the present application provides a computer program product for use in combination with an electronic device, the computer program product including a computer program embedded in a computer-readable storage medium, the computer program including instructions for causing the electronic device to perform each step of the above method.
The embodiments of the present application have the following beneficial effects: in this application, the generation of massive RGB-D data can be completed rapidly from a virtual scene and a virtual camera, and the data are richer and of higher quality.
Description of the drawings
The drawings described here are provided for further understanding of the present application and constitute a part of this application. The illustrative embodiments of the application and their description are used to explain the application and do not constitute an improper limitation of it. In the drawings:
Fig. 1a shows a schematic diagram of the RGB data acquired by the prior art;
Fig. 1b shows a schematic diagram of the depth data acquired by the prior art;
Fig. 2 shows a flow diagram of the cloud-computing-based RGB-D data generation method in Embodiment 1 of the present application;
Fig. 3 shows a structural schematic diagram of the cloud-computing-based RGB-D data generation device in Embodiment 2 of the present application;
Fig. 4 shows a structural schematic diagram of the electronic device in Embodiment 3 of the present application.
Detailed description of embodiments
To make the technical solutions and advantages of the present application clearer, exemplary embodiments of the application are described in more detail below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the application, not an exhaustive list of all embodiments. In the absence of conflict, the embodiments in this description and the features in the embodiments may be combined with one another.
During the invention process, the inventors noticed that in existing schemes that generate RGB-D data with a color camera and a depth camera, the acquired RGB-D data are not rich enough; generating large amounts of RGB-D data for a large scene is time-consuming; and, limited by the capabilities of the color camera and depth camera, the quality of the acquired RGB-D data is poor.
Against the above deficiencies, this application provides an RGB-D data generation method based on cloud computing, in which a virtual camera acquires RGB data and depth data at each framing position and direction in a virtual 3D scene model, and RGB-D data are generated from the acquired data. In this application, the generation of massive RGB-D data can be completed rapidly from the virtual scene and virtual camera, and the data are richer and of higher quality.
The essence of the technical solutions of the embodiments is further elucidated below through specific examples.
Embodiment 1:
Fig. 2 shows the flow of the cloud-computing-based RGB-D data generation method in Embodiment 1 of the present application. As shown in Fig. 2, the method includes:
Step 101: build a virtual 3D scene model;
Step 103: determine each framing position of the virtual camera in the virtual 3D scene model and each viewing direction at each framing position;
Step 105: the virtual camera acquires RGB data and depth data at each framing position and direction;
Step 107: generate RGB-D data from the RGB data and depth data.
In step 101, a virtual 3D scene model is built. One or more 3D object models can be placed or constructed at any position in the virtual 3D scene model to form the 3D scene model. Building the scene model requires no physical objects, is not limited by space, and is not subject to external influences such as weather or the people and vehicles in the scene, so very large and complex virtual 3D scene models, such as a city model, can be built.
In step 103, each framing position of the virtual camera in the virtual 3D scene model, and each viewing direction at each framing position, are determined. The virtual camera is a camera model that can move freely in the virtual 3D scene; like a real camera it has intrinsic parameters, and setting different camera parameters gives the images and depth data acquired by the virtual camera different sizes, precisions, and so on.
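For illustration, the intrinsic parameters of such a virtual camera can be expressed as the usual 3x3 pinhole matrix K built from the focal lengths and principal point; the numeric values below are hypothetical, chosen only to show how resolution and focal length enter the parameters:

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy):
    """Build a pinhole-camera intrinsic parameter matrix K."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

# A hypothetical 640x480 virtual camera: principal point at the image
# center, focal length in pixels chosen for a wide field of view.
K = intrinsic_matrix(fx=320.0, fy=320.0, cx=320.0, cy=240.0)
```

Changing fx/fy or the image size changes the footprint and precision of every rendered image and depth map, which is what "different camera parameters" controls above.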
The framing positions and the viewing directions at each framing position can be input by the user, or generated automatically from the virtual 3D scene. For example, an operator can specify the motion track of the virtual camera in the virtual 3D scene and the one or more framing orientations needed at each position point on the track; alternatively, based on the size of the virtual 3D scene and the density of its objects, the system can choose, at a certain spacing, location points free of virtual objects as framing positions, each position having 6 mutually perpendicular viewing directions: front, back, left, right, up, and down. A large virtual 3D scene model can be given a very large number of framing positions, and each framing position can be given many viewing directions, so that massive amounts of related data can be collected.
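The automatic variant can be sketched as follows (a hedged illustration, not the patent's implementation): sample a regular grid over the scene extent at a fixed spacing, skip grid points occupied by virtual objects, and attach the 6 axis-aligned viewing directions to every remaining point:

```python
import itertools
import numpy as np

# The 6 mutually perpendicular viewing directions named in the text.
DIRECTIONS = {
    "front": (0, 0, 1), "back": (0, 0, -1),
    "left": (-1, 0, 0), "right": (1, 0, 0),
    "up": (0, 1, 0), "down": (0, -1, 0),
}

def framing_viewpoints(extent, spacing, occupied):
    """Yield (position, direction_name) for every unoccupied grid point."""
    steps = [np.arange(0.0, e + 1e-9, spacing) for e in extent]
    for x, y, z in itertools.product(*steps):
        if (x, y, z) in occupied:
            continue  # skip positions occupied by virtual objects
        for name in DIRECTIONS:
            yield (x, y, z), name

# Tiny hypothetical scene: a 2x0x2 extent, 1.0 spacing, one occupied cell.
views = list(framing_viewpoints(extent=(2.0, 0.0, 2.0), spacing=1.0,
                                occupied={(1.0, 0.0, 1.0)}))
```

Here 9 grid points minus 1 occupied point leaves 8 framing positions, each with 6 directions, i.e. 48 viewpoints; for a city-scale model the same loop yields the massive viewpoint counts the text describes.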
Because no operator needs to move a physical camera to each position and adjust its framing angle to acquire data, the whole process can be executed automatically by a program, so data for more positions and more direction angles can be obtained in a short time, and the positions and directions are set more precisely.
In step 105, the virtual camera acquires RGB data and depth data at each framing position and direction. At each configured framing position, the virtual camera adjusts its viewing direction in turn and acquires data, simultaneously collecting RGB data and depth data.
The acquired RGB data are the RGB data obtained by rendering an image of the virtual 3D scene model at the current position and in the current direction. Benefiting from recent developments in graphics rendering, the rendered RGB images are very fine and realistic, and the rendering process usually takes only 10-30 milliseconds. Meanwhile, acquiring RGB data with a virtual camera through image rendering avoids the noise introduced by the electronic and optical components of a real camera, so the RGB data are free of such interference and more accurate.
The acquired depth data are the depth data of the virtual 3D scene model at the current position and in the current direction. Since the position and direction of the virtual camera are known, and the positions of the points in the virtual 3D scene model are known, the current depth data can be computed. The depth acquisition range of the virtual camera can reach the minimum and maximum values supported by the computer, far beyond the 20-300 cm acquisition range of a real depth camera; moreover, the depth data can be interpolated and are not subject to environmental interference, so the acquired depth data have no holes.
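Since the camera pose and every scene point are known exactly, depth is computed rather than measured. A hedged sketch (conventions and names are assumptions): the depth of a world point is simply its z coordinate after transforming it into the camera frame:

```python
import numpy as np

def point_depth(p_world, R, T):
    """Depth of a known scene point: z of (R @ p_world + T).

    R and T are the world-to-camera rotation and translation, both known
    exactly for a virtual camera, so the result carries no sensor noise
    and no range limit beyond floating-point precision.
    """
    p_cam = R @ np.asarray(p_world, dtype=float) + T
    return p_cam[2]

# Hypothetical identity pose: camera at the origin looking down +z.
R = np.eye(3)
T = np.zeros(3)
d = point_depth([0.0, 0.0, 7.5], R, T)
```

A point 7.5 units in front of the camera yields depth 7.5 exactly; a real depth sensor at that distance would already be outside its working range.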
In step 107, RGB-D data are generated from the RGB data and the depth data. Associating and saving the RGB data and depth data acquired at each position and in each direction in step 105 yields the RGB-D data for that position and direction. When a large number of framing positions and viewing directions exist, a dataset containing massive RGB-D data can be generated, which can be used for three-dimensional reconstruction or three-dimensional localization experiments and evaluation, or to provide training data for certain deep learning algorithms.
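The association step can be sketched as pairing each rendered RGB frame with its pixel-aligned depth frame and the camera pose that produced both (an illustrative sketch; the sample layout is an assumption, not the patent's format):

```python
import numpy as np

def make_rgbd_sample(rgb, depth, pose):
    """Associate one RGB frame with its depth frame and camera pose."""
    assert rgb.shape[:2] == depth.shape, "frames must be pixel-aligned"
    return {"rgb": rgb, "depth": depth, "pose": pose}

# Tiny hypothetical resolution for illustration.
h, w = 4, 6
rgb = np.zeros((h, w, 3), dtype=np.uint8)       # rendered color frame
depth = np.full((h, w), 2.0, dtype=np.float32)  # computed depth frame
dataset = [make_rgbd_sample(rgb, depth, pose=("pos0", "front"))]
```

Repeating this over every framing position and direction accumulates the massive RGB-D dataset described above.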
In some embodiments, the method further includes step 1021: determining the memory space associated with the depth buffer of the virtual camera, the depth buffer being used to store the depth data acquired by the virtual camera. In step 105, depth data are acquired into the depth buffer; and in step 107, RGB-D data are generated from the RGB data and the mapped depth data, the mapped depth data being the depth data acquired into the depth buffer and mapped into the memory space associated with the depth buffer.
The depth buffer memory, or depth buffer for short, is the real-time direct image of the depth data collected as the virtual camera processes image depth coordinates; each storage unit of the depth buffer corresponds to one pixel acquired by the virtual camera, and the entire depth buffer corresponds to one frame of depth image data. An associated memory space is allocated for the depth buffer; while the virtual camera acquires depth data, the depth data are mapped from hardware into that memory space, forming the mapped depth data, so that the system can generate and store RGB-D data faster from the RGB data and the mapped depth data.
In some embodiments, the method further includes step 1022: determining the memory space associated with the frame buffer of the virtual camera, the frame buffer being used to store the RGB data acquired by the virtual camera. In step 105, RGB data are acquired into the frame buffer; and in step 107, RGB-D data are generated from the mapped RGB data and the depth data, the mapped RGB data being the RGB data acquired into the frame buffer and mapped into the memory space associated with the frame buffer.
The frame buffer memory, or frame buffer for short, is the real-time direct image of the RGB image acquired by the virtual camera; each storage unit of the frame buffer corresponds to one pixel acquired by the virtual camera, and the entire frame buffer corresponds to one frame of RGB image data. An associated memory space is allocated for the frame buffer; while the virtual camera acquires RGB data, the RGB data are mapped from hardware into that memory space, forming the mapped RGB data, so that the system can generate and store RGB-D data faster from the depth data and the mapped RGB data.
In some embodiments, the method further includes step 104: determining at least one set of illumination conditions for the virtual 3D scene model. In step 105, the virtual camera acquires RGB data and depth data at each framing position and direction under each set of illumination conditions.
A set of illumination conditions may include a combination of one or several of lighting angle, intensity, color, and light source type, and multiple different sets of illumination conditions can be configured for the same virtual 3D scene model. The order of step 104 relative to step 103 is not limited; the at least one set of illumination conditions only needs to be determined before step 105.
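Enumerating the illumination-condition sets can be as simple as taking the Cartesian product of the candidate values for each attribute (a sketch with hypothetical values):

```python
import itertools

# Hypothetical candidate values for each illumination attribute.
angles = [30, 60]              # lighting angle in degrees
intensities = [0.5, 1.0]       # relative intensity
colors = ["white", "warm"]     # light color
source_types = ["directional"] # light source type

# Each combination is one set of illumination conditions.
conditions = [
    {"angle": a, "intensity": i, "color": c, "source": s}
    for a, i, c, s in itertools.product(angles, intensities,
                                        colors, source_types)
]
```

Even these few candidate values already yield 8 illumination sets, multiplying the RGB variety collected from a single scene model.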
In step 105, when acquiring RGB data and depth data at each configured framing position and viewing direction, the virtual 3D scene model must be placed under a preset illumination condition. When multiple sets of illumination conditions have been preset, the illumination conditions can be cycled through at each viewing direction of each framing position, acquiring multiple sets of RGB data for that position and direction, each of which is then associated with the depth data of that position and direction to generate RGB-D data. Alternatively, after the data for all framing positions and viewing directions have been acquired under one set of illumination conditions, the illumination condition can be changed and all data for those positions and directions re-acquired.
With multiple sets of illumination conditions configured, richer RGB data, and in turn richer RGB-D data, can be collected from the same virtual 3D scene model.
In some embodiments, the method further includes:
Step 108: determining the semantic information of the virtual 3D scene model;
Step 109: determining, from the depth data acquired by the virtual camera and the position and direction of the virtual camera when the depth data were acquired, the semantic information of the scene model corresponding to the RGB-D data acquired by the virtual camera.
The order of step 108 relative to steps 101-107 is not limited; it can usually be completed in step 101. That is, when the virtual 3D scene model is built, which 3D object models are placed or constructed and where they are positioned is known, and the semantics of these 3D object models are also known, so the semantic information of the 3D object model to which each point in the virtual 3D scene model belongs can be determined.
Because the framing position and direction of the virtual camera when acquiring depth data in the virtual 3D scene model are known, and the intrinsic parameters of the virtual camera are also known, in step 109 the coordinate point corresponding to each pixel in the current depth data can be inferred back from the acquired depth data. Knowing which 3D object model of the virtual 3D scene model each coordinate point belongs to then determines the semantic information corresponding to the depth data acquired at the current position and direction; after the depth data are associated with the RGB data, the semantic information of the scene model corresponding to the current RGB-D data is obtained.
Take a pixel whose screen coordinates in the virtual camera are (u, v). The acquired depth data give the depth of each pixel, and the intrinsic parameters of the virtual camera are known, so the 3D position of the pixel in the camera coordinate system is known; likewise, the rotation and translation of the camera relative to the 3D object models can be determined from the framing position and direction of the virtual camera. The position of the pixel in the virtual 3D scene model can therefore be solved, and its semantic information obtained. The calculation formula is:

z_c [u, v, 1]^T = K (R [x_w, y_w, z_w]^T + T)

where z_c is the scale factor (the pixel's depth), K is the intrinsic parameter matrix of the virtual camera, and R and T are the rotation and translation of the camera relative to the virtual 3D scene model. (x_w, y_w, z_w) are the coordinates of the pixel in the virtual 3D scene model; once (x_w, y_w, z_w) is solved, the corresponding semantic information can be obtained.
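A hedged numeric sketch of this back-projection (all values hypothetical): given a pixel (u, v), its depth z_c, the intrinsics K, and the camera rotation R and translation T, invert the projection relation to recover the world point:

```python
import numpy as np

def backproject(u, v, z_c, K, R, T):
    """Solve z_c * [u, v, 1]^T = K (R p_w + T) for the world point p_w."""
    p_cam = z_c * np.linalg.inv(K) @ np.array([u, v, 1.0])
    return np.linalg.inv(R) @ (p_cam - T)

# Hypothetical camera: simple intrinsics, identity pose.
K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0, 0.0, 1.0]])
R, T = np.eye(3), np.zeros(3)

# The principal-point pixel at depth 3 maps straight ahead of the camera.
p_w = backproject(u=50.0, v=50.0, z_c=3.0, K=K, R=R, T=T)
```

Looking up which 3D object model contains p_w then yields the pixel's semantic label, exactly as described above.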
Generating the RGB-D data of a large-scale scene requires considerable computation. The related computation can be performed by more powerful cloud servers, and the generated dataset can be stored in cloud memory for each terminal to retrieve.
In this application, the generation of massive RGB-D data can be completed rapidly from a virtual scene and a virtual camera, and the data are richer and of higher quality. Mapping the depth data and/or RGB data into memory space via the depth buffer and/or frame buffer allows RGB-D data to be obtained faster. Combined with different illumination conditions, large amounts of rich RGB-D data can be further generated. At the same time, because the virtual 3D scene model is self-built, all semantic information in the scene model is known, so the semantic information corresponding to each RGB-D datum can be determined when generating the RGB-D data, yielding a dataset with richer content.
Embodiment 2:
Based on the same inventive concept, an embodiment of the present application also provides an RGB-D data generation device based on cloud computing. Since the principle by which this device solves the problem is similar to that of the cloud-computing-based RGB-D data generation method, its implementation can refer to the implementation of the method, and repeated content is not described again. As shown in Fig. 3, the cloud-computing-based RGB-D data generation device 200 includes:
A scene building module 201, for building a virtual 3D scene model;
A framing determination module 202, for determining each framing position of the virtual camera in the virtual 3D scene model and each viewing direction at each framing position;
A data acquisition module 203, for causing the virtual camera to acquire RGB data and depth data at each framing position and direction;
A data generation module 204, for generating RGB-D data from the RGB data and depth data.
In some embodiments, the device 200 further includes:
A model semantics module 205, for determining the semantic information of the virtual 3D scene model;
A semantics determination module 206, for determining, from the depth data acquired by the virtual camera and the position and direction of the virtual camera when the depth data were acquired, the semantic information of the scene model corresponding to the RGB-D data acquired by the virtual camera.
In some embodiments, the device 200 further includes: a buffer association module 207, for determining the memory space associated with the depth buffer of the virtual camera, the depth buffer being used to store the depth data acquired by the virtual camera; the acquiring of depth data includes acquiring depth data into the depth buffer.
The data generation module 204 is specifically used to generate RGB-D data from the RGB data and the mapped depth data, the mapped depth data being the depth data acquired into the depth buffer and mapped into the memory space associated with the depth buffer.
In some embodiments, the device 200 further includes: a buffer association module 207, for determining the memory space associated with the frame buffer of the virtual camera, the frame buffer being used to store the RGB data acquired by the virtual camera; the acquiring of RGB data includes acquiring RGB data into the frame buffer.
The data generation module 204 is specifically used to generate RGB-D data from the mapped RGB data and the depth data, the mapped RGB data being the RGB data acquired into the frame buffer and mapped into the memory space associated with the frame buffer.
In some embodiments, the device 200 further includes:
An illumination determination module 208, for determining at least one set of illumination conditions for the virtual 3D scene model;
The data acquisition module 203 is used to cause the virtual camera to acquire RGB data and depth data at each framing position and direction under each set of illumination conditions.
Embodiment 3:
Based on the same inventive concept, an embodiment of the present application also provides an electronic device. Since its principle is similar to the cloud-computing-based RGB-D data generation method, its implementation can refer to the implementation of the method, and repeated content is not described again. As shown in Fig. 4, the electronic device 300 includes: a memory 301 and one or more processors 302; and one or more modules stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing each step of any of the above methods.
Embodiment 4:
Based on the same inventive concept, an embodiment of the present application also provides a computer program product for use in combination with an electronic device, the computer program product including a computer program embedded in a computer-readable storage medium, the computer program including instructions for causing the electronic device to perform each step of any of the above methods.
For convenience of description, the device described above is divided by function into various modules. Of course, when implementing the present application, the functions of the modules or units may be realized in one or more pieces of software or hardware.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical memory) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, which realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, the instructions executed on the computer or other programmable device providing steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the preferred embodiments of the application have been described, once persons skilled in the art learn the basic creative concept, additional changes and modifications may be made to these embodiments. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of the application.
Claims (12)
1. An RGB-D data generation method based on cloud computing, characterized in that the method includes:
building a virtual 3D scene model;
determining each framing position of a virtual camera in the virtual 3D scene model and each viewing direction at each framing position;
the virtual camera acquiring RGB data and depth data at each framing position and direction;
generating RGB-D data from the RGB data and depth data.
2. The method of claim 1, characterized in that it further includes:
determining the semantic information of the virtual 3D scene model;
determining, from the depth data acquired by the virtual camera and the position and direction of the virtual camera when the depth data were acquired, the semantic information of the scene model corresponding to the RGB-D data acquired by the virtual camera.
3. The method of claim 1 or 2, characterized by further comprising:
determining a memory space associated with a depth buffer of the virtual camera, the depth buffer being used to store the depth data acquired by the virtual camera;
wherein acquiring depth data comprises acquiring depth data into the depth buffer;
and wherein generating RGB-D data from the RGB data and the depth data comprises:
generating RGB-D data from the RGB data and mapping data of the depth data, the mapping data of the depth data being the data obtained by mapping the depth data acquired into the depth buffer to the memory space associated with the depth buffer.
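Claim 3 maps depth-buffer contents into host memory but names no graphics API. Assuming an OpenGL-style perspective projection, the mapped values would be normalised to [0, 1] and would typically need linearising back to metric eye-space depth before being stored as the D channel; the formula below is the standard perspective linearisation, shown here only as an illustration of what such a mapping step might involve.

```python
import numpy as np

def linearize_depth(zbuf, near, far):
    """Convert normalised depth-buffer values in [0, 1] (standard perspective
    projection) back to metric eye-space depth. near/far are the clip planes."""
    ndc = 2.0 * zbuf - 1.0  # window-space [0, 1] -> NDC [-1, 1]
    return (2.0 * near * far) / (far + near - ndc * (far - near))

near, far = 0.1, 100.0
zbuf = np.array([0.0, 0.5, 1.0])
metric = linearize_depth(zbuf, near, far)
# zbuf == 0 recovers the near plane, zbuf == 1 recovers the far plane
```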
4. The method of claim 1 or 2, characterized by further comprising:
determining a memory space associated with a frame buffer of the virtual camera, the frame buffer being used to store the RGB data acquired by the virtual camera;
wherein acquiring RGB data comprises acquiring RGB data into the frame buffer;
and wherein generating RGB-D data from the RGB data and the depth data comprises:
generating RGB-D data from mapping data of the RGB data and the depth data, the mapping data of the RGB data being the data obtained by mapping the RGB data acquired into the frame buffer to the memory space associated with the frame buffer.
5. The method of claim 1 or 2, characterized by further comprising:
determining at least one set of illumination conditions for the virtual 3D scene model;
wherein acquiring RGB data and depth data with the virtual camera at each viewpoint position and direction comprises:
acquiring RGB data and depth data at each viewpoint position and direction under each set of illumination conditions.
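Claim 5 multiplies the dataset: one RGB-D capture per (viewpoint, illumination) pair, with no extra scene-modelling work. A minimal sketch of that capture plan, with made-up poses and lighting set-ups:

```python
from itertools import product

poses = [{"pos": (0, 0, 0), "dir": (0, 0, 1)},
         {"pos": (1, 0, 0), "dir": (0, 0, 1)}]
lights = [{"intensity": 0.3}, {"intensity": 1.0}, {"intensity": 3.0}]

# Cartesian product: every viewpoint is captured under every lighting set-up.
capture_plan = list(product(poses, lights))
print(len(capture_plan))  # 2 viewpoints x 3 lighting set-ups -> 6 captures
```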
6. An RGB-D data generation apparatus based on cloud computing, characterized in that the apparatus comprises:
a scene establishing module, configured to establish a virtual 3D scene model;
a viewpoint determining module, configured to determine each viewpoint position of a virtual camera in the virtual 3D scene model, and each viewing direction at each viewpoint position;
a data acquisition module, configured to cause the virtual camera to acquire RGB data and depth data at each viewpoint position and direction;
a data generation module, configured to generate RGB-D data from the RGB data and the depth data.
7. The apparatus of claim 6, characterized by further comprising:
a model semantics module, configured to determine semantic information of the virtual 3D scene model;
a semantics determining module, configured to determine, from the depth data acquired by the virtual camera and the position and direction of the virtual camera when acquiring the depth data, the semantic information of the scene model corresponding to the RGB-D data acquired by the virtual camera.
8. The apparatus of claim 6 or 7, characterized by further comprising:
a buffer association module, configured to determine a memory space associated with a depth buffer of the virtual camera, the depth buffer being used to store the depth data acquired by the virtual camera;
wherein acquiring depth data comprises acquiring depth data into the depth buffer;
and the data generation module is specifically configured to generate RGB-D data from the RGB data and mapping data of the depth data, the mapping data of the depth data being the data obtained by mapping the depth data acquired into the depth buffer to the memory space associated with the depth buffer.
9. The apparatus of claim 6 or 7, characterized by further comprising:
a buffer association module, configured to determine a memory space associated with a frame buffer of the virtual camera, the frame buffer being used to store the RGB data acquired by the virtual camera;
wherein acquiring RGB data comprises acquiring RGB data into the frame buffer;
and the data generation module is specifically configured to generate RGB-D data from mapping data of the RGB data and the depth data, the mapping data of the RGB data being the data obtained by mapping the RGB data acquired into the frame buffer to the memory space associated with the frame buffer.
10. The apparatus of claim 6 or 7, characterized by further comprising:
an illumination determining module, configured to determine at least one set of illumination conditions for the virtual 3D scene model;
wherein the data acquisition module is configured to cause the virtual camera to acquire RGB data and depth data at each viewpoint position and direction under each set of illumination conditions.
11. An electronic device, characterized in that the electronic device comprises:
a memory and one or more processors; and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, the one or more modules comprising instructions for performing each step of the method of any one of claims 1 to 5.
12. A computer program product for use with an electronic device, the computer program product comprising a computer program embedded in a computer-readable storage medium, the computer program comprising instructions for causing the electronic device to perform each step of the method of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810041680.XA CN108399634B (en) | 2018-01-16 | 2018-01-16 | RGB-D data generation method and device based on cloud computing |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810041680.XA CN108399634B (en) | 2018-01-16 | 2018-01-16 | RGB-D data generation method and device based on cloud computing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108399634A true CN108399634A (en) | 2018-08-14 |
CN108399634B CN108399634B (en) | 2020-10-16 |
Family
ID=63094933
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810041680.XA Active CN108399634B (en) | 2018-01-16 | 2018-01-16 | RGB-D data generation method and device based on cloud computing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108399634B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103226838A (en) * | 2013-04-10 | 2013-07-31 | 福州林景行信息技术有限公司 | Real-time spatial positioning method for mobile monitoring target in geographical scene |
CN103500467A (en) * | 2013-10-21 | 2014-01-08 | 深圳市易尚展示股份有限公司 | Constructive method of image-based three-dimensional model |
WO2015199470A1 (en) * | 2014-06-25 | 2015-12-30 | 한국과학기술원 | Apparatus and method for estimating hand position utilizing head mounted color depth camera, and bare hand interaction system using same |
CN104504671A (en) * | 2014-12-12 | 2015-04-08 | 浙江大学 | Method for generating virtual-real fusion image for stereo display |
CN107481307A (en) * | 2017-07-05 | 2017-12-15 | 国网山东省电力公司泰安供电公司 | A kind of method of Fast rendering three-dimensional scenic |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109522840A (en) * | 2018-11-16 | 2019-03-26 | 孙睿 | A kind of expressway vehicle density monitoring calculation system and method |
CN109522840B (en) * | 2018-11-16 | 2023-05-30 | 孙睿 | Expressway vehicle flow density monitoring and calculating system and method |
CN113658318A (en) * | 2020-04-28 | 2021-11-16 | 阿里巴巴集团控股有限公司 | Data processing method and system, training data generation method and electronic equipment |
WO2021258994A1 (en) * | 2020-06-24 | 2021-12-30 | 腾讯科技(深圳)有限公司 | Method and apparatus for displaying virtual scene, and device and storage medium |
CN112308910A (en) * | 2020-10-10 | 2021-02-02 | 达闼机器人有限公司 | Data generation method and device and storage medium |
WO2022073415A1 (en) * | 2020-10-10 | 2022-04-14 | 达闼机器人有限公司 | Data generation method and apparatus, and storage medium |
CN112308910B (en) * | 2020-10-10 | 2024-04-05 | 达闼机器人股份有限公司 | Data generation method, device and storage medium |
CN112802183A (en) * | 2021-01-20 | 2021-05-14 | 深圳市日出印像数字科技有限公司 | Method and device for reconstructing three-dimensional virtual scene and electronic equipment |
CN113648654A (en) * | 2021-09-03 | 2021-11-16 | 网易(杭州)网络有限公司 | Game picture processing method, device, equipment, storage medium and program product |
Also Published As
Publication number | Publication date |
---|---|
CN108399634B (en) | 2020-10-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108399634A (en) | RGB-D data generation method and device based on cloud computing | |
CN107690672B (en) | Training data generation method and device and image semantic segmentation method thereof | |
Martull et al. | Realistic CG stereo image dataset with ground truth disparity maps | |
CN110874864A (en) | Method, device, electronic equipment and system for obtaining three-dimensional model of object | |
CN104330074B (en) | Intelligent surveying and mapping platform and realizing method thereof | |
US6930685B1 (en) | Image processing method and apparatus | |
CN108401461A (en) | Three-dimensional mapping method, device and system, cloud platform, electronic equipment and computer program product | |
CN108537879A (en) | Three-dimensional model reconstruction system and method |
CN103530907B (en) | Image-based rendering method for complex three-dimensional models |
CN109870118B (en) | Point cloud collection method for green plant time sequence model | |
CN109801365A (en) | Three-dimensional modeling data acquisition device and three-dimensional modeling method thereof |
CN111275015A (en) | Unmanned aerial vehicle-based power line inspection electric tower detection and identification method and system | |
CN108665541A (en) | Laser-sensor-based map generation method and device, and robot |
CN109934935A (en) | A kind of clothes are exposed the false with human body image, match the method and system of migration | |
CN113763231A (en) | Model generation method, image perspective determination device, image perspective determination equipment and medium | |
CN116506993A (en) | Light control method and storage medium | |
CN112734824A (en) | Three-dimensional reconstruction method based on generalized luminosity stereo model | |
CN116958396A (en) | Image relighting method and device and readable storage medium | |
CN108655571A (en) | Numerically controlled laser engraving machine, control system, control method, and computer |
CN116612256B (en) | NeRF-based real-time remote three-dimensional live-action model browsing method | |
Kunert et al. | An efficient diminished reality approach using real-time surface reconstruction | |
CN106157321A (en) | Method for measuring true point light source position based on planar high dynamic range images |
CN108182727B (en) | Phase unwrapping method based on multi-viewpoint geometric consistency | |
CN108346183A (en) | Method and system for AR reference origin positioning |
CN104915980A (en) | Moving object multi-view light and shadow synthesizing method based on sparse light field elements |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||