CN109166170A - Method and apparatus for rendering augmented reality scene - Google Patents
- Publication number
- CN109166170A (application number CN201810955286.7A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/506—Illumination models
Abstract
Embodiments of the present application disclose a method and apparatus for rendering an augmented reality scene. One specific embodiment of the method includes: obtaining at least one image containing a target scene; inputting the at least one image into a pre-trained light source parameter prediction model to obtain light source parameters of a light source in the target scene, where the light source parameter prediction model is used to characterize the correspondence between at least one image containing a scene and the light source parameters of the light source in the scene; and, based on the obtained light source parameters, rendering, using augmented reality technology, the augmented reality scene obtained after adding a target virtual object to the target scene. This embodiment improves the degree of fusion between virtual objects and the real scene.
Description
Technical field
Embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for rendering an augmented reality scene.
Background
Augmented reality (AR) technology locates the real space mainly through cameras and sensors, places virtual objects in the three-dimensional space, and then renders the three-dimensional space after the virtual objects have been placed. Existing methods for rendering lighting effects in an AR scene mainly include: rendering the AR scene with user-defined lighting information; and using light images of the real world to make virtual objects appear more realistic through environment mapping.
Summary of the invention
Embodiments of the present application propose a method and apparatus for rendering an augmented reality scene.
In a first aspect, an embodiment of the present application provides a method for rendering an augmented reality scene, including: obtaining at least one image containing a target scene; inputting the at least one image into a pre-trained light source parameter prediction model to obtain light source parameters of a light source in the target scene, where the light source parameter prediction model is used to characterize the correspondence between at least one image containing a scene and the light source parameters of the light source in the scene; and, based on the obtained light source parameters, rendering, using augmented reality technology, the augmented reality scene obtained after adding a target virtual object to the target scene.
In some embodiments, the light source parameter prediction model includes a light source type identification model and at least one light source parameter prediction submodel, each light source parameter prediction submodel corresponding to a light source type. The light source type identification model is used to characterize the correspondence between at least one image containing a scene and the light source type of the light source in the scene, and each light source parameter prediction submodel is used to characterize the correspondence between at least one image containing a scene and the light source parameters of the light source in the scene.
In some embodiments, inputting the at least one image into the pre-trained light source parameter prediction model to obtain the light source parameters of the light source in the target scene includes: inputting the at least one image into the light source type identification model to obtain the light source type of the light source in the target scene; and inputting the at least one image into the light source parameter prediction submodel corresponding to the obtained light source type to obtain the light source parameters of the light source in the target scene.
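The two-stage flow described in this embodiment can be sketched as follows. This is a minimal illustration only: the classifier and the per-type submodels here are hypothetical stand-in callables (a brightness threshold and two constant-output functions), not the trained networks of the application.

```python
# Stand-in for the light source type identification model: pick an
# illustrative type label from the images' mean brightness.
def classify_light_type(images):
    mean_brightness = sum(sum(img) / len(img) for img in images) / len(images)
    return "indoor_light" if mean_brightness < 0.5 else "outdoor_daylight"

# Stand-ins for the per-type light source parameter prediction submodels.
SUBMODELS = {
    "indoor_light": lambda imgs: {"intensity": 0.4, "color": (1.0, 0.9, 0.8)},
    "outdoor_daylight": lambda imgs: {"intensity": 1.0, "color": (1.0, 1.0, 0.95)},
}

def predict_light_params(images):
    """Two-stage inference: classify the light source type, then route
    the images to the submodel corresponding to that type."""
    light_type = classify_light_type(images)
    params = SUBMODELS[light_type](images)
    return light_type, params
```

The key design point shown here is the dispatch: the first stage only selects which specialized submodel handles the second stage.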
In some embodiments, a light source parameter prediction submodel is obtained through training as follows: obtaining a first training sample set, where a first training sample includes at least one first sample image and annotation information characterizing the light source parameters of the light source in the scene contained in the at least one first sample image, the light source type of the light source in that scene being a preset light source type; and training, with the at least one first sample image of a first training sample in the first training sample set as input and the annotation information corresponding to the input at least one first sample image as expected output, to obtain the light source parameter prediction submodel corresponding to the preset light source type.
In some embodiments, the at least one first sample image is at least one consecutive image frame extracted from a sample video, and the light source type of the light source in the scene contained in the sample video is the preset light source type.
In some embodiments, the light source type includes at least one of the following: indoor daylight, indoor artificial light, outdoor daylight, and outdoor artificial light.
In some embodiments, the light source parameter prediction model is obtained through training as follows: obtaining a second training sample set, where a second training sample includes at least one second sample image and annotation information characterizing the light source parameters of the light source in the scene contained in the at least one second sample image; and training, with the at least one second sample image of a second training sample in the second training sample set as input and the annotation information corresponding to the input at least one second sample image as expected output, to obtain the light source parameter prediction model.
In a second aspect, an embodiment of the present application provides an apparatus for rendering an augmented reality scene, including: an obtaining unit configured to obtain at least one image containing a target scene; an input unit configured to input the at least one image into a pre-trained light source parameter prediction model to obtain light source parameters of a light source in the target scene, where the light source parameter prediction model is used to characterize the correspondence between at least one image containing a scene and the light source parameters of the light source in the scene; and a rendering unit configured to render, based on the obtained light source parameters and using augmented reality technology, the augmented reality scene obtained after adding a target virtual object to the target scene.
In some embodiments, the light source parameter prediction model includes a light source type identification model and at least one light source parameter prediction submodel, each light source parameter prediction submodel corresponding to a light source type. The light source type identification model is used to characterize the correspondence between at least one image containing a scene and the light source type of the light source in the scene, and each light source parameter prediction submodel is used to characterize the correspondence between at least one image containing a scene and the light source parameters of the light source in the scene.
In some embodiments, the input unit is further configured to input the at least one image into the pre-trained light source parameter prediction model and obtain the light source parameters of the light source in the target scene as follows: inputting the at least one image into the light source type identification model to obtain the light source type of the light source in the target scene; and inputting the at least one image into the light source parameter prediction submodel corresponding to the obtained light source type to obtain the light source parameters of the light source in the target scene.
In some embodiments, a light source parameter prediction submodel is obtained through training as follows: obtaining a first training sample set, where a first training sample includes at least one first sample image and annotation information characterizing the light source parameters of the light source in the scene contained in the at least one first sample image, the light source type of the light source in that scene being a preset light source type; and training, with the at least one first sample image of a first training sample in the first training sample set as input and the annotation information corresponding to the input at least one first sample image as expected output, to obtain the light source parameter prediction submodel corresponding to the preset light source type.
In some embodiments, the at least one first sample image is at least one consecutive image frame extracted from a sample video, and the light source type of the light source in the scene contained in the sample video is the preset light source type.
In some embodiments, the light source type includes at least one of the following: indoor daylight, indoor artificial light, outdoor daylight, and outdoor artificial light.
In some embodiments, the light source parameter prediction model is obtained through training as follows: obtaining a second training sample set, where a second training sample includes at least one second sample image and annotation information characterizing the light source parameters of the light source in the scene contained in the at least one second sample image; and training, with the at least one second sample image of a second training sample in the second training sample set as input and the annotation information corresponding to the input at least one second sample image as expected output, to obtain the light source parameter prediction model.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage apparatus for storing one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any embodiment of the method for rendering an augmented reality scene.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements the method of any embodiment of the method for rendering an augmented reality scene.
The method and apparatus for rendering an augmented reality scene provided by the embodiments of the present application input the obtained at least one image of the target scene into a pre-trained light source parameter prediction model to obtain the light source parameters of the light source in the target scene. Then, based on the obtained light source parameters, the augmented reality scene obtained after adding the target virtual object to the target scene can be rendered using augmented reality technology. The light source parameters of the light source in the target scene can thus be determined accurately, improving the degree of fusion between virtual objects and the real scene.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent by reading the detailed description of non-restrictive embodiments made with reference to the following drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for rendering an augmented reality scene according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for rendering an augmented reality scene according to the present application;
Fig. 4 is a flowchart of another embodiment of the method for rendering an augmented reality scene according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the apparatus for rendering an augmented reality scene according to the present application;
Fig. 6 is a structural schematic diagram of a computer system adapted to implement an electronic device of the embodiments of the present application.
Detailed description
The present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are used only to explain the related invention, and do not limit the invention. It should also be noted that, for convenience of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the drawings in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which the method for rendering an augmented reality scene or the apparatus for rendering an augmented reality scene of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links or fiber optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104, to receive or send messages (for example, to send at least one image containing a target scene). Various communication client applications may be installed on the terminal devices 101, 102, 103, such as photography and video applications, image processing applications, augmented reality applications, and three-dimensional animation rendering applications.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a display screen and supporting information exchange, including but not limited to smartphones, tablet computers, and laptop portable computers. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above. They may be implemented as multiple pieces of software or software modules (for example, multiple pieces of software or software modules for providing distributed services), or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services, for example, an image processing server that processes the at least one image containing the target scene uploaded by the terminal devices 101, 102, 103. The image processing server may analyze and otherwise process the obtained at least one image, and feed the processing result (for example, the rendered augmented reality scene) back to the terminal devices. For example, the server 105 may input the obtained at least one image containing the target scene into a pre-trained light source parameter prediction model to obtain the light source parameters of the light source in the target scene; then, based on the light source parameters, it may render, using augmented reality technology, the augmented reality scene obtained after adding the target virtual object to the target scene.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, multiple pieces of software or software modules for providing distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be noted that the method for rendering an augmented reality scene provided by the embodiments of the present application is generally executed by the server 105, and correspondingly, the apparatus for rendering an augmented reality scene is generally disposed in the server 105.
It should be pointed out that the server 105 may also directly store the at least one image containing the target scene locally, and the server 105 may directly obtain the local at least one image containing the target scene and perform image processing. In this case, the terminal devices 101, 102, 103 and the network 104 may be absent from the exemplary system architecture 100.
It should also be noted that image processing applications and augmented reality applications may be installed in the terminal devices 101, 102, 103, and the terminal devices 101, 102, 103 may also perform light source parameter analysis and augmented reality scene rendering on the at least one image based on these applications. In this case, the method for rendering an augmented reality scene may be executed by the terminal devices 101, 102, 103, and correspondingly, the apparatus for rendering an augmented reality scene may also be disposed in the terminal devices 101, 102, 103. In this case, the server 105 and the network 104 may be absent from the exemplary system architecture 100.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for rendering an augmented reality scene according to the present application is shown. The method for rendering an augmented reality scene includes the following steps:
Step 201: obtain at least one image containing a target scene.
In this embodiment, the executing body of the method for rendering an augmented reality scene (for example, the server 105 shown in Fig. 1) may obtain at least one image containing a target scene. A scene may refer to a specific scene in daily life and usually contains specific objects, for example, an office scene composed of a desk, a computer, and a telephone. The target scene may be a preset real-life scene, or a real scene specified by a user. The executing body may add a virtual object to the target scene to obtain an augmented reality scene. The at least one image may be at least one image of the target scene captured in chronological order, for example, a group of consecutive image frames extracted from a video containing the target scene.
Step 202: input the at least one image into a pre-trained light source parameter prediction model to obtain light source parameters of a light source in the target scene.
In this embodiment, the executing body may input the at least one image obtained in step 201 into a pre-trained light source parameter prediction model to obtain the light source parameters of the light source in the target scene. The light source parameter prediction model may be used to characterize the correspondence between at least one image containing a scene and the light source parameters of the light source in the scene. A light source parameter prediction model capable of characterizing this correspondence may be trained in various ways.
As an example, the light source parameter prediction model may include a feature extraction part and a correspondence table. The feature extraction part may be used to extract features from the at least one image to generate a feature vector; for example, the feature extraction part may be a convolutional neural network or a deep neural network. The correspondence table may be pre-established by technicians based on statistics of a large number of feature vectors and light source parameters, and stores the correspondences between multiple feature vectors and light source parameters. In this way, the light source parameter prediction model may first use the feature extraction part to extract the features of the at least one image obtained in step 201 to generate a target feature vector. Then, the target feature vector is compared with the multiple feature vectors in the correspondence table one by one; if a feature vector in the correspondence table is identical or similar to the target feature vector, the light source parameters corresponding to that feature vector in the correspondence table are taken as the light source parameters of the light source in the target scene indicated by the at least one image obtained in step 201.
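The table-lookup step described above can be sketched as follows. The table entries, the use of cosine similarity, and the similarity threshold are illustrative assumptions; in practice the target feature vector would come from the trained feature extraction network.

```python
import math

# Hypothetical correspondence table: (feature vector, light source parameters).
CORRESPONDENCE_TABLE = [
    ([0.9, 0.1, 0.3], {"intensity": 1.0, "color": (1.0, 1.0, 0.95)}),
    ([0.2, 0.8, 0.5], {"intensity": 0.4, "color": (1.0, 0.9, 0.8)}),
]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def look_up_light_params(target_vector, threshold=0.95):
    """Return the parameters of the table entry most similar to the target
    feature vector, or None when no entry is similar enough."""
    best_params, best_sim = None, threshold
    for vector, params in CORRESPONDENCE_TABLE:
        sim = cosine_similarity(target_vector, vector)
        if sim >= best_sim:
            best_params, best_sim = params, sim
    return best_params
```

An identical vector matches with similarity 1.0; a vector far from every entry falls below the threshold and yields no parameters.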
Here, the light source parameters may include but are not limited to at least one of the following: the position information of the light source, the incident angle of the light source, the light source color, and the light source intensity. The light source intensity, also called luminous intensity, refers to the intensity of the light emitted by the light source in a given direction.
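One possible container for the parameters listed above is sketched below. The field names, units, and sample values are illustrative assumptions, not part of the application.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LightSourceParams:
    position: Tuple[float, float, float]  # light position in world space
    incident_angle: float                 # incident angle, in degrees
    color: Tuple[float, float, float]     # RGB color, each channel in [0, 1]
    intensity: float                      # luminous intensity (e.g. candela)

# Example instance with made-up values.
params = LightSourceParams(
    position=(1.0, 2.5, 0.0),
    incident_angle=45.0,
    color=(1.0, 0.9, 0.8),
    intensity=120.0,
)
```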
In some optional implementations of this embodiment, the light source parameter prediction model may be obtained through the training of the following first training step:
First, the executing body of the first training step may obtain a second training sample set locally, or remotely from another electronic device connected via a network to the executing body of the first training step. Each second training sample may include at least one second sample image and annotation information characterizing the light source parameters of the light source in the scene contained in the at least one second sample image. For example, the light source parameters of the light source in the scene contained in a second sample image may be calibrated manually.
It should be noted that the executing body of the first training step may be the same as or different from the executing body of the method for rendering an augmented reality scene. If they are the same, the executing body of the first training step may store the model parameters of the trained light source parameter prediction model locally after training. If they are different, the executing body of the first training step may send the model parameters of the trained light source parameter prediction model to the executing body of the method for rendering an augmented reality scene after training.
Then, the executing body of the first training step may input the at least one second sample image of a second training sample in the obtained second training sample set into an initial light source parameter prediction model to obtain the light source parameters of the light source in the scene contained in the second sample image, take the annotation information in the second training sample as the expected output of the initial light source parameter prediction model, and train using a machine learning method to obtain the light source parameter prediction model. Specifically, the difference between the obtained light source parameters and the annotation information in the second training sample may first be calculated with a preset loss function; for example, the L2 norm may be used as the loss function to calculate this difference. Then, the initial light source parameter prediction model may be adjusted based on the calculated difference, and the training is terminated when a preset training termination condition is met. For example, the preset training termination condition may include but is not limited to at least one of the following: the training time exceeds a preset duration; the number of training iterations exceeds a preset number; the calculated difference is less than a preset difference threshold. It should be noted that the initial light source parameter prediction model may be a neural network, such as a convolutional neural network or a deep neural network.
Here, various implementations may be used to adjust the model parameters of the initial light source parameter prediction model based on the difference between the generated light source parameters and the annotation information in the second training sample. For example, the BP (Back Propagation) algorithm or the SGD (Stochastic Gradient Descent) algorithm may be used to adjust the model parameters of the initial light source parameter prediction model.
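The training loop described above (L2 loss minimized by gradient descent) can be sketched in a deliberately tiny form. The "model" here is a single linear weight per output and each "sample image" is reduced to one scalar feature — strong simplifying assumptions made for illustration; a real implementation would train a convolutional or deep neural network.

```python
def l2_loss(predicted, target):
    """Squared L2 distance between predicted and annotated parameters."""
    return sum((p - t) ** 2 for p, t in zip(predicted, target))

def train_submodel(samples, lr=0.05, epochs=200):
    """samples: list of (scalar feature, annotated light source parameters).
    Plain SGD on the L2 loss, stopping after a preset number of epochs
    (one of the termination conditions named in the text)."""
    n_out = len(samples[0][1])
    weights = [0.0] * n_out
    for _ in range(epochs):
        for feature, annotation in samples:
            predicted = [w * feature for w in weights]
            # gradient of the L2 loss with respect to each weight
            grads = [2 * (p - t) * feature for p, t in zip(predicted, annotation)]
            weights = [w - lr * g for w, g in zip(weights, grads)]
    return weights

trained = train_submodel([(1.0, [2.0, 4.0])])
```

With a single sample whose feature is 1.0, the weights converge toward the annotated parameter values.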
Step 203: based on the obtained light source parameters, render, using augmented reality technology, the augmented reality scene obtained after adding the target virtual object to the target scene.
In this embodiment, the executing body may render, based on the obtained light source parameters and using augmented reality technology, the augmented reality scene obtained after adding the target virtual object to the target scene. Augmented reality is a technology that calculates the position and angle of a camera image in real time and adds a corresponding image; the goal of this technology is to overlay the virtual world on the real world on a screen and allow interaction. The target virtual object may include a preset virtual object, for example, a cartoon character specified by the user.
Here, rendering refers to the process of converting a three-dimensional scene into a two-dimensional image. In practical applications, a game engine or an underlying graphics API (Application Programming Interface) may generally be used directly for augmented reality scene rendering. As an example, the obtained light source parameters may be used as input parameters, and a game engine (for example, Unity 3D, a multi-platform comprehensive game development tool for creating interactive content such as three-dimensional video games, architectural visualizations, and real-time three-dimensional animations) may be used to perform illumination rendering on the augmented reality scene obtained after adding the target virtual object to the target scene.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for rendering an augmented reality scene according to this embodiment. In the application scenario of Fig. 3, the executing body 301 for rendering an augmented reality scene may first obtain at least one image 302 containing a target scene; the target scene may be an office scene specified by the user and composed of a desk, a computer, and a telephone. Then, the executing body 301 may input the at least one image 302 containing the target scene into a pre-trained light source parameter prediction model 303 to obtain the light source parameters 304 of the light source in the target scene; as an example, the light source parameters 304 may include the position information of the light source, the light source color, and the light source intensity. Finally, the executing body 301 may use the obtained light source parameters 304 of the light source in the target scene as input parameters, and use a preset game engine to perform illumination rendering on the augmented reality scene obtained after adding the target virtual object (for example, the cartoon character "Mickey Mouse") to the target scene, thereby obtaining the rendered augmented reality scene 305.
The method provided by the above embodiment of the present application can accurately determine the light source parameters of the light source in the target scene, improving the degree of fusion between virtual objects and the real scene.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for rendering an augmented reality scene is shown. The flow 400 of the method for rendering an augmented reality scene includes the following steps:
Step 401: obtain at least one image containing a target scene.
In this embodiment, the operation of step 401 is substantially identical to the operation of step 201, and details are not repeated here.
Step 402: input the at least one image into a light source type identification model to obtain the light source type of the light source in the target scene.
In this embodiment, the light source parameter prediction model may include a light source type identification model. The light source type identification model may be used to characterize the correspondence between at least one image containing a scene and the light source type of the light source in the scene.
In this embodiment, the executing body may input the at least one image obtained in step 401 into a pre-trained light source type identification model to obtain the light source type of the light source in the target scene. A light source type identification model capable of characterizing the correspondence between at least one image containing a scene and the light source type of the light source in the scene may be trained in various ways.
As an example, the light source type identification model may include a feature extraction part and a correspondence table. The feature extraction part may be used to extract features from the at least one image to generate a feature vector; for example, the feature extraction part may be a convolutional neural network or a deep neural network. The correspondence table may be pre-established by technicians based on statistics of a large number of feature vectors and light source types, and stores the correspondences between multiple feature vectors and light source types. In this way, the light source type identification model may first use the feature extraction part to extract the features of the at least one image obtained in step 401 to generate a target feature vector. Then, the target feature vector is compared with the multiple feature vectors in the correspondence table one by one; if a feature vector in the correspondence table is identical or similar to the target feature vector, the light source type corresponding to that feature vector in the correspondence table is taken as the light source type of the light source in the target scene indicated by the at least one image obtained in step 401.
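The type lookup differs from the parameter lookup in that the table maps feature vectors to discrete type labels. A minimal sketch, with illustrative table entries and a nearest-entry rule in place of an explicit similarity threshold:

```python
import math

# Hypothetical correspondence table: (feature vector, light source type label).
TYPE_TABLE = [
    ([0.9, 0.2], "outdoor daylight"),
    ([0.1, 0.7], "indoor light"),
]

def identify_light_type(target_vector):
    """Return the type label of the table entry most similar to the
    target feature vector (cosine similarity)."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)
    return max(TYPE_TABLE, key=lambda entry: cosine(target_vector, entry[0]))[1]
```

The returned label is what would select the corresponding light source parameter prediction submodel in the two-stage flow of this embodiment.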
As another example, the light source type identification model may be obtained through the training of the following second training step:
First, the executing body of the second training step may obtain a third training sample set locally, or remotely from another electronic device connected via a network to the executing body of the second training step. Each third training sample may include at least one third sample image and annotation information characterizing the light source type of the light source in the scene contained in the at least one third sample image. For example, the light source type of the light source in the scene contained in a third sample image may be calibrated manually.
Later, the third training in the third training sample set that the executing subject of the second training step can will acquire
At least one third sample image in sample inputs initial light source type identification model, obtains the third sample image and is wrapped
The light source type of light source in the scene contained is known using the markup information in the third training sample as the initial light source type
The desired output of other model obtains light source type identification model using machine learning method training.It specifically, can be first with
Preset loss function calculates the difference between the markup information in obtained light source type and the third training sample, example
Such as, the markup information in obtained light source type and the third training sample can be calculated as loss function using L2 norm
Between difference.It is then possible to adjust initial light source type identification model, and pre- meeting based on resulting difference is calculated
If training termination condition in the case where, terminate training.For example, the training termination condition here preset at can include but is not limited to
At least one of below: the training time is more than preset duration;Frequency of training is more than preset times;Resulting difference is calculated less than default
Discrepancy threshold.It should be noted that above-mentioned initial light source type identification model can be neural network, such as convolutional Neural net
Network, deep neural network etc..
Here, various implementations may be used to adjust the model parameters of the initial light source type identification model based on the difference between the generated light source type and the annotation information in the third training sample. For example, the BP (back-propagation) algorithm or the SGD (stochastic gradient descent) algorithm may be used to adjust the model parameters of the initial light source type identification model.
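The loss-then-adjust loop described above can be sketched in miniature. A toy linear model stands in for the neural network, and the per-sample gradient update plays the role of the SGD adjustment; the function names, learning rate, and termination thresholds are all illustrative assumptions.

```python
# Minimal sketch of the training loop: compute an L2 loss against the
# annotation, adjust parameters, stop on a preset termination condition.
def l2_loss(predicted, target):
    return sum((p - t) ** 2 for p, t in zip(predicted, target))

def train(samples, weights, lr=0.01, max_iters=1000, loss_threshold=1e-4):
    """samples: list of (feature_vector, annotation_vector) pairs.
    Adjusts `weights` of the toy model p_i = w_i * f_i by SGD-style
    per-sample updates until an iteration cap or loss threshold is hit."""
    for _ in range(max_iters):                     # cap on iterations
        total_loss = 0.0
        for features, annotation in samples:
            predicted = [w * f for w, f in zip(weights, features)]
            total_loss += l2_loss(predicted, annotation)
            # d(L2)/dw_i = 2 * (p_i - t_i) * f_i
            for i, (p, t, f) in enumerate(zip(predicted, annotation, features)):
                weights[i] -= lr * 2 * (p - t) * f
        if total_loss < loss_threshold:            # difference-below-threshold condition
            break
    return weights

w = train([([1.0, 2.0], [0.5, 1.0])], [0.0, 0.0])
```

With the single sample above, the weights converge toward `[0.5, 0.5]`, since the annotation equals the features scaled by those weights.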
In this embodiment, the light source type may include, but is not limited to, at least one of the following: ambient light, directional light, point light, and spotlight. Ambient light refers to the light illuminating a given environment as a whole. Directional light (Directional Light) is a set of parallel rays that do not attenuate, with an effect similar to sunlight. A point light is a light source that emits uniformly from a single point into the surrounding space. A spotlight is light focused by means such as a condensing lens or a reflector.

In some optional implementations of this embodiment, the light source type may include at least one of the following: indoor daylight, indoor artificial light, outdoor daylight, and outdoor artificial light. The artificial light may further include, but is not limited to, at least one of the following: fluorescent lamps, LED (Light-Emitting Diode) lamps, and incandescent lamps.
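One way to picture the four light source types listed above is by the parameter set each typically carries in a renderer. The field names below are assumptions made for illustration, not parameters specified by this application.

```python
from dataclasses import dataclass

# Each type pairs a color/intensity with the geometry it needs.
@dataclass
class AmbientLight:            # uniform illumination of the environment
    color: tuple               # (r, g, b)
    intensity: float

@dataclass
class DirectionalLight:        # parallel, non-attenuating rays (sunlight-like)
    color: tuple
    intensity: float
    direction: tuple           # unit vector of the rays

@dataclass
class PointLight:              # emits uniformly from a single point
    color: tuple
    intensity: float
    position: tuple

@dataclass
class SpotLight:               # focused by a lens or reflector
    color: tuple
    intensity: float
    position: tuple
    direction: tuple
    cone_angle: float          # beam spread, in degrees

sun = DirectionalLight(color=(1.0, 0.95, 0.9), intensity=1.2,
                       direction=(0.0, -1.0, 0.3))
```

A prediction submodel for a given type would then output exactly the fields of that type's parameter set.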
Step 403: input the at least one image into the light source parameters prediction submodel corresponding to the obtained light source type, and obtain the light source parameters of the light source in the target scene.

In this embodiment, the light source parameters prediction model may include at least one light source parameters prediction submodel, where each light source parameters prediction submodel may correspond to one light source type. A light source parameters prediction submodel may be used to characterize the correspondence between at least one image containing a scene and the light source parameters of the light source in that scene.

In general, the number of light source parameters prediction submodels in the at least one light source parameters prediction submodel depends on the number of light source types.
In this embodiment, the execution subject may input the at least one image acquired in step 401 into the light source parameters prediction submodel corresponding to the light source type obtained in step 402 to obtain the light source parameters of the light source in the target scene. The light source parameters prediction submodel, which characterizes the correspondence between at least one image containing a scene and the light source parameters of the light source in that scene, may be trained in a variety of ways.

As an example, a light source parameters prediction submodel among the at least one light source parameters prediction submodel may include a feature extraction component and a correspondence table. The feature extraction component may be used to extract features from at least one image (an image whose contained scene has a light source with parameters of the type corresponding to this submodel) and generate a feature vector; for example, the feature extraction component may be a convolutional neural network, a deep neural network, or the like. The correspondence table may be pre-established by a technician based on statistics over a large number of feature vectors and light source parameters, and stores the correspondences between multiple feature vectors and light source parameters. In this way, the light source parameters prediction submodel may first use the feature extraction component to extract features from the at least one image corresponding to the light source type obtained in step 402 to generate a target feature vector. The target feature vector is then compared in turn with the feature vectors in the correspondence table; if a feature vector in the table is identical or similar to the target feature vector, the light source parameters corresponding to that feature vector in the table are taken as the light source parameters of the light source in the target scene indicated by the at least one image corresponding to the light source type obtained in step 402.
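The routing from step 402's type to step 403's per-type submodel can be sketched with stand-in callables. Nothing here is the patent's trained model; the labels, the returned parameter dictionaries, and the dictionary-based dispatch are assumptions for illustration.

```python
def identify_type(images):
    # Stand-in for the light source type identification model (step 402).
    return "directional"

SUBMODELS = {
    # One parameter prediction submodel per light source type; the number
    # of entries tracks the number of light source types.
    "directional": lambda images: {"direction": (0.0, -1.0, 0.3), "intensity": 1.2},
    "point":       lambda images: {"position": (0.0, 2.0, 0.0), "intensity": 0.8},
}

def predict_light_source(images):
    light_type = identify_type(images)      # step 402: classify the type
    submodel = SUBMODELS[light_type]        # select the matching submodel
    return light_type, submodel(images)     # step 403: predict its parameters

light_type, params = predict_light_source(["frame_0.png"])
```

Dispatching by type lets each submodel specialize in one illumination regime, which is the accuracy benefit this embodiment claims over a single undifferentiated model.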
In some optional implementations of this embodiment, a light source parameters prediction submodel among the at least one light source parameters prediction submodel may be obtained through the following third training step.

First, the execution subject of the third training step may acquire a first training sample set, either locally or remotely from another electronic device connected over a network to the execution subject of the third training step. Each first training sample may include at least one first sample image and annotation information characterizing the light source parameters of the light source in the scene contained in the at least one first sample image. The light source type of the light source in the scene contained in the at least one first sample image is a preset light source type, where the preset light source type is the light source type corresponding to the light source parameters prediction submodel. The light source parameters of the light source in the scene contained in a first sample image may be calibrated manually.
It should be noted that the execution subject of the third training step may be the same as or different from the execution subject of the method for rendering an augmented reality scene. If they are the same, the execution subject of the third training step may store the model parameters of the trained light source parameters prediction submodel locally after training. If they are different, the execution subject of the third training step may send the model parameters of the trained light source parameters prediction submodel to the execution subject of the method for rendering an augmented reality scene after training.
Then, the execution subject of the third training step may input the at least one first sample image in a first training sample of the acquired first training sample set into an initial light source parameters prediction submodel to obtain the light source parameters of the light source in the scene contained in that sample image, take the annotation information in the first training sample as the desired output of the initial light source parameters prediction submodel, and train the light source parameters prediction submodel using a machine learning method. Specifically, a preset loss function may first be used to compute the difference between the obtained light source parameters and the annotation information in the first training sample; for example, the L2 norm may be used as the loss function to compute this difference. The initial light source parameters prediction submodel may then be adjusted based on the computed difference, and the training terminates once a preset training termination condition is met. For example, the preset training termination condition may include, but is not limited to, at least one of the following: the training time exceeds a preset duration; the number of training iterations exceeds a preset count; the computed difference is less than a preset difference threshold. It should be noted that the initial light source parameters prediction submodel may be a neural network, such as a convolutional neural network or a deep neural network.
Here, various implementations may be used to adjust the model parameters of the initial light source parameters prediction submodel based on the difference between the generated light source parameters and the annotation information in the first training sample. For example, the BP algorithm or the SGD algorithm may be used to adjust the model parameters of the initial light source parameters prediction submodel.
In some optional implementations of this embodiment, the at least one first sample image may be at least one consecutive image frame extracted from a sample video. Here, the light source type of the light source in the scene contained in the sample video is typically the preset light source type, i.e., the light source type corresponding to the light source parameters prediction submodel obtained by training.
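Extracting consecutive frames from a sample video for such training samples might look like the sketch below. The decoded frames are represented as a plain list to keep the example self-contained (with a real video file one might decode frames via a library such as OpenCV instead); the function name and range check are assumptions.

```python
def take_consecutive_frames(video_frames, start, count):
    """Return `count` consecutive frames beginning at index `start`,
    to serve as the first sample images of one training sample."""
    if start < 0 or start + count > len(video_frames):
        raise ValueError("requested range falls outside the video")
    return video_frames[start:start + count]

# Stand-in for a decoded 10-frame sample video.
frames = take_consecutive_frames([f"frame_{i}" for i in range(10)], 3, 4)
# frames == ["frame_3", "frame_4", "frame_5", "frame_6"]
```

Consecutive frames share one illumination condition, so a single manually calibrated annotation can label the whole group.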
Step 404: based on the obtained light source parameters, use augmented reality technology to render the augmented reality scene obtained by adding the target virtual object to the target scene.

In this embodiment, the operation of step 404 is essentially the same as the operation of step 203 and is not described again here.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the process 400 of the method for rendering an augmented reality scene in this embodiment adds the steps of determining the light source type of the light source in the target scene contained in the at least one image, and then inputting the at least one image into the light source parameters prediction submodel corresponding to the determined light source type to obtain the light source parameters of the light source in the target scene. The scheme described in this embodiment can thus input the at least one image containing the target scene into the light source parameters prediction submodel corresponding to the light source type of the light source in the target scene to obtain the light source parameters of the light source in the target scene, thereby further improving the accuracy of the determined light source parameters.
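Process 400 as a whole (acquire images, identify the type, predict the parameters, render) can be sketched end to end. Every function body below is a placeholder standing in for the patent's trained models and AR renderer; the returned values are invented for illustration.

```python
def acquire_images(target_scene):
    return [f"{target_scene}_frame_{i}.png" for i in range(3)]    # step 401

def identify_light_type(images):
    return "point"                                                # step 402

def predict_params(light_type, images):
    # Per-type submodel would run here; a fixed result stands in.
    return {"position": (0.5, 2.0, 0.5), "intensity": 0.9}        # step 403

def render_ar(scene, virtual_object, light_params):
    # Step 404: light the virtual object with the predicted source so it
    # blends with the real scene's illumination.
    return (f"{virtual_object} lit with intensity "
            f"{light_params['intensity']} in {scene}")

images = acquire_images("living_room")
light_type = identify_light_type(images)
params = predict_params(light_type, images)
result = render_ar("living_room", "virtual_lamp", params)
```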
With further reference to Fig. 5, as an implementation of the methods shown in the figures above, the present application provides an embodiment of an apparatus for rendering an augmented reality scene. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for rendering an augmented reality scene in this embodiment includes an acquiring unit 501, an input unit 502, and a rendering unit 503. The acquiring unit 501 is configured to acquire at least one image containing a target scene. The input unit 502 is configured to input the at least one image into a pre-trained light source parameters prediction model to obtain the light source parameters of the light source in the target scene, where the light source parameters prediction model is used to characterize the correspondence between at least one image containing a scene and the light source parameters of the light source in that scene. The rendering unit 503 is configured to render, based on the obtained light source parameters and using augmented reality technology, the augmented reality scene obtained by adding the target virtual object to the target scene.
In this embodiment, the specific processing of the acquiring unit 501, the input unit 502, and the rendering unit 503 of the apparatus 500 for rendering an augmented reality scene may refer to step 201, step 202, and step 203 in the embodiment corresponding to Fig. 2.
In some optional implementations of this embodiment, the light source parameters prediction model may include a light source type identification model, where the light source type identification model may be used to characterize the correspondence between at least one image containing a scene and the light source type of the light source in that scene. The light source parameters prediction model may further include at least one light source parameters prediction submodel, where each light source parameters prediction submodel may correspond to one light source type and may be used to characterize the correspondence between at least one image containing a scene and the light source parameters of the light source in that scene.
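The three-unit structure of apparatus 500 can be mirrored with plain classes. The class and method names, and the wiring between units, are assumptions made for illustration; the prediction model is injected as a callable so the sketch stays self-contained.

```python
class AcquiringUnit:                                   # unit 501
    def acquire(self, target_scene):
        return [f"{target_scene}.png"]

class InputUnit:                                       # unit 502
    def __init__(self, prediction_model):
        self.prediction_model = prediction_model

    def predict(self, images):
        return self.prediction_model(images)

class RenderingUnit:                                   # unit 503
    def render(self, scene, virtual_object, params):
        return f"{virtual_object}@{scene}:{params['intensity']}"

class Apparatus500:
    """Chains the three units: acquire -> predict -> render."""
    def __init__(self, prediction_model):
        self.acquiring = AcquiringUnit()
        self.input = InputUnit(prediction_model)
        self.rendering = RenderingUnit()

    def run(self, scene, virtual_object):
        images = self.acquiring.acquire(scene)
        params = self.input.predict(images)
        return self.rendering.render(scene, virtual_object, params)

out = Apparatus500(lambda imgs: {"intensity": 1.0}).run("desk", "cube")
```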
In some optional implementations of this embodiment, the input unit 502 may be further configured to input the at least one image into the pre-trained light source parameters prediction model and obtain the light source parameters of the light source in the target scene as follows. First, the input unit 502 may input the acquired at least one image into a pre-trained light source type identification model to obtain the light source type of the light source in the target scene. The light source type may include, but is not limited to, at least one of the following: ambient light, directional light, point light, and spotlight. Ambient light refers to the light illuminating a given environment as a whole. Directional light is a set of parallel rays that do not attenuate, with an effect similar to sunlight. A point light is a light source that emits uniformly from a single point into the surrounding space. A spotlight is light focused by means such as a condensing lens or a reflector. Then, the input unit 502 may input the acquired at least one image into the light source parameters prediction submodel corresponding to the obtained light source type to obtain the light source parameters of the light source in the target scene.
In some optional implementations of this embodiment, a light source parameters prediction submodel among the at least one light source parameters prediction submodel may be obtained through the following third training step.

First, the execution subject of the third training step may acquire a first training sample set, either locally or remotely from another electronic device connected over a network to the execution subject of the third training step. Each first training sample may include at least one first sample image and annotation information characterizing the light source parameters of the light source in the scene contained in the at least one first sample image. The light source type of the light source in the scene contained in the at least one first sample image is a preset light source type, where the preset light source type is the light source type corresponding to the light source parameters prediction submodel. The light source parameters of the light source in the scene contained in a first sample image may be calibrated manually.

It should be noted that the execution subject of the third training step may be the same as or different from the execution subject of the method for rendering an augmented reality scene. If they are the same, the execution subject of the third training step may store the model parameters of the trained light source parameters prediction submodel locally after training. If they are different, the execution subject of the third training step may send the model parameters of the trained light source parameters prediction submodel to the execution subject of the method for rendering an augmented reality scene after training.

Then, the execution subject of the third training step may input the at least one first sample image in a first training sample of the acquired first training sample set into an initial light source parameters prediction submodel to obtain the light source parameters of the light source in the scene contained in that sample image, take the annotation information in the first training sample as the desired output of the initial light source parameters prediction submodel, and train the light source parameters prediction submodel using a machine learning method. Specifically, a preset loss function may first be used to compute the difference between the obtained light source parameters and the annotation information in the first training sample; for example, the L2 norm may be used as the loss function to compute this difference. The initial light source parameters prediction submodel may then be adjusted based on the computed difference, and the training terminates once a preset training termination condition is met. For example, the preset training termination condition may include, but is not limited to, at least one of the following: the training time exceeds a preset duration; the number of training iterations exceeds a preset count; the computed difference is less than a preset difference threshold. It should be noted that the initial light source parameters prediction submodel may be a neural network, such as a convolutional neural network or a deep neural network.

Here, various implementations may be used to adjust the model parameters of the initial light source parameters prediction submodel based on the difference between the generated light source parameters and the annotation information in the first training sample. For example, the BP algorithm or the SGD algorithm may be used to adjust the model parameters of the initial light source parameters prediction submodel.

In some optional implementations of this embodiment, the at least one first sample image may be at least one consecutive image frame extracted from a sample video. Here, the light source type of the light source in the scene contained in the sample video is typically the preset light source type, i.e., the light source type corresponding to the light source parameters prediction submodel obtained by training.
In some optional implementations of this embodiment, the light source type may include at least one of the following: indoor daylight, indoor artificial light, outdoor daylight, and outdoor artificial light. The artificial light may further include, but is not limited to, at least one of the following: fluorescent lamps, LED lamps, and incandescent lamps.
In some optional implementations of this embodiment, the light source parameters prediction model may be obtained through the following first training step.

First, the execution subject of the first training step may acquire a second training sample set, either locally or remotely from another electronic device connected over a network to the execution subject of the first training step. Each second training sample may include at least one second sample image and annotation information characterizing the light source parameters of the light source in the scene contained in the at least one second sample image. For example, the light source parameters of the light source in the scene contained in a second sample image may be calibrated manually.

It should be noted that the execution subject of the first training step may be the same as or different from the execution subject of the method for rendering an augmented reality scene. If they are the same, the execution subject of the first training step may store the model parameters of the trained light source parameters prediction model locally after training. If they are different, the execution subject of the first training step may send the model parameters of the trained light source parameters prediction model to the execution subject of the method for rendering an augmented reality scene after training.

Then, the execution subject of the first training step may input the at least one second sample image in a second training sample of the acquired second training sample set into an initial light source parameters prediction model to obtain the light source parameters of the light source in the scene contained in that sample image, take the annotation information in the second training sample as the desired output of the initial light source parameters prediction model, and train the light source parameters prediction model using a machine learning method. Specifically, a preset loss function may first be used to compute the difference between the obtained light source parameters and the annotation information in the second training sample; for example, the L2 norm may be used as the loss function to compute this difference. The initial light source parameters prediction model may then be adjusted based on the computed difference, and the training terminates once a preset training termination condition is met. For example, the preset training termination condition may include, but is not limited to, at least one of the following: the training time exceeds a preset duration; the number of training iterations exceeds a preset count; the computed difference is less than a preset difference threshold. It should be noted that the initial light source parameters prediction model may be a neural network, such as a convolutional neural network or a deep neural network.

Here, various implementations may be used to adjust the model parameters of the initial light source parameters prediction model based on the difference between the generated light source parameters and the annotation information in the second training sample. For example, the BP algorithm or the SGD algorithm may be used to adjust the model parameters of the initial light source parameters prediction model.
Referring now to Fig. 6, a structural schematic diagram of a computer system 600 of an electronic device (for example, the server 105 in Fig. 1) suitable for implementing embodiments of the present application is shown. The electronic device shown in Fig. 6 is only an example and should not impose any limitation on the functionality or scope of use of the embodiments of the present application.

As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. Various programs and data required for the operation of the system 600 are also stored in the RAM 603. The CPU 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.

The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communications section 609 including a network interface card such as a LAN card or a modem. The communications section 609 performs communication processing via a network such as the Internet. A driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory, is mounted on the driver 610 as needed, so that a computer program read therefrom can be installed into the storage section 608 as required.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communications section 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-mentioned functions defined in the method of the present application are performed. It should be noted that the computer-readable medium of the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by, or in conjunction with, an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a computer-readable medium can send, propagate, or transmit a program for use by, or in conjunction with, an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless means, electric wires, optical cables, RF, or any suitable combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the boxes may occur in an order different from that indicated in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should further be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented in software or in hardware. The described units may also be provided in a processor; for example, a processor may be described as including an acquiring unit, an input unit, and a rendering unit. The names of these units do not, under certain circumstances, limit the units themselves; for example, the acquiring unit may also be described as "a unit that acquires at least one image containing a target scene".
As another aspect, the present application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire at least one image containing a target scene; input the at least one image into a pre-trained light source parameters prediction model to obtain the light source parameters of the light source in the target scene, where the light source parameters prediction model is used to characterize the correspondence between at least one image containing a scene and the light source parameters of the light source in that scene; and, based on the obtained light source parameters, render, using augmented reality technology, the augmented reality scene obtained by adding the target virtual object to the target scene.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features; it should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by mutually replacing the above features with technical features having similar functions disclosed in (but not limited to) the present application.
Claims (16)
1. A method for rendering an augmented reality scene, comprising:
obtaining at least one image containing a target scene;
inputting the at least one image into a pre-trained light source parameter prediction model to obtain light source parameters of a light source in the target scene, wherein the light source parameter prediction model characterizes the correspondence between at least one image containing a scene and the light source parameters of a light source in the scene; and
based on the obtained light source parameters, rendering, with augmented reality technology, the augmented reality scene obtained by adding a target virtual object to the target scene.
2. according to the method described in claim 1, wherein, the light source parameters prediction model include light source type identification model and
At least one light source parameters predicts submodel, and light source parameters predict that submodel is corresponding with light source type, and the light source type is known
Other model is used to characterize the corresponding relationship at least one image comprising scene and scene between the light source type of light source, light source
Parameter prediction submodel is used to characterize corresponding between at least one image and the light source parameters of light source in scene comprising scene
Relationship.
3. The method according to claim 2, wherein inputting the at least one image into the pre-trained light source parameter prediction model to obtain the light source parameters of the light source in the target scene comprises:
inputting the at least one image into the light source type identification model to obtain the light source type of the light source in the target scene; and
inputting the at least one image into the light source parameter prediction submodel corresponding to the obtained light source type to obtain the light source parameters of the light source in the target scene.
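The two-stage inference in claim 3 (classify the light source type first, then dispatch to the submodel trained for that type) can be sketched as below. The function and type names, and the dictionary-of-submodels dispatch, are assumptions for illustration; the lambdas stand in for trained models.

```python
from typing import Callable, Dict, Sequence

def predict_params(images: Sequence,
                   type_model: Callable[[Sequence], str],
                   submodels: Dict[str, Callable[[Sequence], dict]]) -> dict:
    """Two-stage prediction per claim 3: first identify the light source
    type, then run the parameter submodel trained for that type."""
    light_type = type_model(images)
    if light_type not in submodels:
        raise KeyError(f"no submodel for light type {light_type!r}")
    return submodels[light_type](images)

# Dummy stand-ins for a trained classifier and per-type submodels.
type_model = lambda imgs: "outdoor_daylight"
submodels = {
    "outdoor_daylight": lambda imgs: {"intensity": 1.0, "elevation_deg": 45.0},
    "indoor_lighting": lambda imgs: {"intensity": 0.4, "n_lamps": 2},
}
params = predict_params(["frame"], type_model, submodels)
```

Splitting the model per light source type lets each submodel output a parameterization natural to that type (e.g. sun elevation outdoors, lamp count indoors), rather than forcing one output format over all lighting conditions.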
4. The method according to claim 3, wherein a light source parameter prediction submodel is trained as follows:
obtaining a first training sample set, wherein a first training sample comprises at least one first sample image and annotation information characterizing the light source parameters of the light source in the scene contained in the at least one first sample image, and the light source type of the light source in the scene contained in the at least one first sample image is a preset light source type; and
training, with the at least one first sample image of a first training sample in the first training sample set as input and the annotation information corresponding to the input at least one first sample image as desired output, to obtain the light source parameter prediction submodel corresponding to the preset light source type.
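The supervised training described in claim 4 (sample images as input, annotated light source parameters as desired output) can be illustrated with a deliberately minimal sketch. A real submodel would be a deep network over raw images; here, as a stated simplification, each sample is reduced to a single scalar feature (e.g. mean image brightness) and the model is a linear fit to an annotated intensity, trained by stochastic gradient descent on squared error.

```python
def train_submodel(samples, lr=0.5, epochs=1000):
    """Fit intensity = w * feature + b to annotated samples by SGD on
    squared error. Purely illustrative of input -> desired-output training;
    not the network architecture the patent leaves unspecified."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = (w * x + b) - y        # prediction minus annotation
            w -= lr * err * x            # gradient step on the weight
            b -= lr * err                # gradient step on the bias
    return lambda x: w * x + b

# Hypothetical "outdoor daylight" samples: (mean brightness, labeled intensity).
samples = [(0.2, 0.3), (0.5, 0.6), (0.8, 0.9)]
model = train_submodel(samples)
```

Because the sample set is restricted to one preset light source type, the fitted model only has to capture the parameter distribution of that type, which is the point of training one submodel per type.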
5. The method according to claim 4, wherein the at least one first sample image is at least one consecutive image frame extracted from a sample video, and the light source type of the light source in the scene contained in the sample video is the preset light source type.
6. The method according to claim 2, wherein the light source type includes at least one of the following: indoor daylight, indoor lighting, outdoor daylight, outdoor lighting.
7. The method according to any one of claims 1-6, wherein the light source parameter prediction model is trained as follows:
obtaining a second training sample set, wherein a second training sample comprises at least one second sample image and annotation information characterizing the light source parameters of the light source in the scene contained in the at least one second sample image; and
training, with the at least one second sample image of a second training sample in the second training sample set as input and the annotation information corresponding to the input at least one second sample image as desired output, to obtain the light source parameter prediction model.
8. An apparatus for rendering an augmented reality scene, comprising:
an acquiring unit, configured to obtain at least one image containing a target scene;
an input unit, configured to input the at least one image into a pre-trained light source parameter prediction model to obtain light source parameters of a light source in the target scene, wherein the light source parameter prediction model characterizes the correspondence between at least one image containing a scene and the light source parameters of a light source in the scene; and
a rendering unit, configured to render, based on the obtained light source parameters and with augmented reality technology, the augmented reality scene obtained by adding a target virtual object to the target scene.
9. The apparatus according to claim 8, wherein the light source parameter prediction model comprises a light source type identification model and at least one light source parameter prediction submodel, each light source parameter prediction submodel corresponding to a light source type; the light source type identification model characterizes the correspondence between at least one image containing a scene and the light source type of a light source in the scene, and a light source parameter prediction submodel characterizes the correspondence between at least one image containing a scene and the light source parameters of a light source in the scene.
10. The apparatus according to claim 9, wherein the input unit is further configured to input the at least one image into the pre-trained light source parameter prediction model and obtain the light source parameters of the light source in the target scene as follows:
inputting the at least one image into the light source type identification model to obtain the light source type of the light source in the target scene; and
inputting the at least one image into the light source parameter prediction submodel corresponding to the obtained light source type to obtain the light source parameters of the light source in the target scene.
11. The apparatus according to claim 10, wherein a light source parameter prediction submodel is trained as follows:
obtaining a first training sample set, wherein a first training sample comprises at least one first sample image and annotation information characterizing the light source parameters of the light source in the scene contained in the at least one first sample image, and the light source type of the light source in the scene contained in the at least one first sample image is a preset light source type; and
training, with the at least one first sample image of a first training sample in the first training sample set as input and the annotation information corresponding to the input at least one first sample image as desired output, to obtain the light source parameter prediction submodel corresponding to the preset light source type.
12. The apparatus according to claim 11, wherein the at least one first sample image is at least one consecutive image frame extracted from a sample video, and the light source type of the light source in the scene contained in the sample video is the preset light source type.
13. The apparatus according to claim 9, wherein the light source type includes at least one of the following: indoor daylight, indoor lighting, outdoor daylight, outdoor lighting.
14. The apparatus according to any one of claims 8-13, wherein the light source parameter prediction model is trained as follows:
obtaining a second training sample set, wherein a second training sample comprises at least one second sample image and annotation information characterizing the light source parameters of the light source in the scene contained in the at least one second sample image; and
training, with the at least one second sample image of a second training sample in the second training sample set as input and the annotation information corresponding to the input at least one second sample image as desired output, to obtain the light source parameter prediction model.
15. An electronic device, comprising:
one or more processors; and
a storage device storing one or more programs thereon,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1-7.
16. A computer-readable medium storing a computer program thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810955286.7A CN109166170A (en) | 2018-08-21 | 2018-08-21 | Method and apparatus for rendering augmented reality scene |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109166170A true CN109166170A (en) | 2019-01-08 |
Family
ID=64896310
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810955286.7A Pending CN109166170A (en) | 2018-08-21 | 2018-08-21 | Method and apparatus for rendering augmented reality scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109166170A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN102930358A (en) * | 2012-11-28 | 2013-02-13 | Jiangxi Jiujiang Power Supply Company | Neural network prediction method for generated output of photovoltaic power station |
CN106028016A (en) * | 2016-06-20 | 2016-10-12 | Lenovo (Beijing) Co., Ltd. | Information processing method and electronic device |
CN107464244A (en) * | 2017-03-09 | 2017-12-12 | SYSU-CMU Shunde International Joint Research Institute | Image illumination estimation method based on neural network |
CN107665508A (en) * | 2016-07-29 | 2018-02-06 | Chengdu Idealsee Technology Co., Ltd. | Method and system for realizing augmented reality |
CN108305328A (en) * | 2018-02-08 | 2018-07-20 | NetEase (Hangzhou) Network Co., Ltd. | Virtual object rendering method, system, medium and computing device |
Non-Patent Citations (1)
Title |
---|
PKPM Software Institute, China Academy of Building Research: "APM 3D Architectural Design Software Operation Guide and Examples", 31 March 2006, China Building Materials Industry Press * |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN110648387B (en) * | 2019-03-04 | 2024-01-05 | Perfect World (Beijing) Software Technology Development Co., Ltd. | Link management method and device used between light source and model in 3D scene |
CN110648387A (en) * | 2019-03-04 | 2020-01-03 | Perfect World (Beijing) Software Technology Development Co., Ltd. | Method and device for managing link between light source and model in 3D scene |
CN109919244A (en) * | 2019-03-18 | 2019-06-21 | Beijing ByteDance Network Technology Co., Ltd. | Method and apparatus for generating scene recognition model |
CN111814812A (en) * | 2019-04-09 | 2020-10-23 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Modeling method, modeling device, storage medium, electronic device and scene recognition method |
CN110288692A (en) * | 2019-05-17 | 2019-09-27 | Tencent Technology (Shenzhen) Co., Ltd. | Illumination rendering method and device, storage medium and electronic device |
US11915364B2 | 2019-05-17 | 2024-02-27 | Tencent Technology (Shenzhen) Company Limited | Illumination rendering method and apparatus, storage medium, and electronic device |
US11600040B2 | 2019-05-17 | 2023-03-07 | Tencent Technology (Shenzhen) Company Limited | Illumination rendering method and apparatus, storage medium, and electronic device |
CN112150563A (en) * | 2019-06-28 | 2020-12-29 | Zhejiang Uniview Technologies Co., Ltd. | Light source color determining method and device, storage medium and electronic equipment |
CN112150563B (en) * | 2019-06-28 | 2024-03-26 | Zhejiang Uniview Technologies Co., Ltd. | Method and device for determining light source color, storage medium and electronic equipment |
CN110310224A (en) * | 2019-07-04 | 2019-10-08 | Beijing ByteDance Network Technology Co., Ltd. | Lighting effect rendering method and device |
CN110428388A (en) * | 2019-07-11 | 2019-11-08 | Alibaba Group Holding Ltd. | Image data generation method and device |
CN110490960A (en) * | 2019-07-11 | 2019-11-22 | Alibaba Group Holding Ltd. | Composite image generation method and device |
CN110490960B (en) * | 2019-07-11 | 2023-04-07 | Advanced New Technologies Co., Ltd. | Synthetic image generation method and device |
CN110428388B (en) * | 2019-07-11 | 2023-08-08 | Advanced New Technologies Co., Ltd. | Image data generation method and device |
CN111144491A (en) * | 2019-12-26 | 2020-05-12 | Nanjing Kuangyun Technology Co., Ltd. | Image processing method, device and electronic system |
CN111325984A (en) * | 2020-03-18 | 2020-06-23 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Sample data acquisition method and device and electronic equipment |
CN111553972A (en) * | 2020-04-27 | 2020-08-18 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method, apparatus, device and storage medium for rendering augmented reality data |
CN111553972B (en) * | 2020-04-27 | 2023-06-30 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method, apparatus, device and storage medium for rendering augmented reality data |
CN112631415A (en) * | 2020-12-31 | 2021-04-09 | Oppo (Chongqing) Intelligent Technology Co., Ltd. | CPU frequency adjusting method, device, electronic equipment and storage medium |
WO2022156150A1 (en) * | 2021-01-19 | 2022-07-28 | Zhejiang SenseTime Technology Development Co., Ltd. | Image processing method and apparatus, electronic device, storage medium, and computer program |
CN117082359A (en) * | 2023-10-16 | 2023-11-17 | Honor Device Co., Ltd. | Image processing method and related equipment |
CN117082359B (en) * | 2023-10-16 | 2024-04-19 | Honor Device Co., Ltd. | Image processing method and related equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109166170A (en) | Method and apparatus for rendering augmented reality scene | |
CN109191514A (en) | Method and apparatus for generating depth detection model | |
CN108830235A (en) | Method and apparatus for generating information | |
CN108492364A (en) | Method and apparatus for generating an image generation model | |
CN109410253B (en) | Method, apparatus, electronic device and computer-readable medium for generating information | |
CN110113538A (en) | Intelligent capture apparatus, intelligent control method and device | |
CN108280413A (en) | Face identification method and device | |
CN110516678A (en) | Image processing method and device | |
CN109115221A (en) | Indoor positioning and navigation method and apparatus, computer-readable medium and electronic device | |
CN109767485A (en) | Image processing method and device | |
CN108509892A (en) | Method and apparatus for generating near-infrared image | |
CN109255337A (en) | Face key point detection method and apparatus | |
CN108986049A (en) | Method and apparatus for handling image | |
US11443450B2 (en) | Analyzing screen coverage of a target object | |
CN110033423A (en) | Method and apparatus for handling image | |
CN108960110A (en) | Method and apparatus for generating information | |
CN108648061A (en) | image generating method and device | |
CN108510466A (en) | Method and apparatus for verifying face | |
CN108648226B (en) | Method and apparatus for generating information | |
CN110310299A (en) | Method and apparatus for training light stream network and handling image | |
CN109495767A (en) | Method and apparatus for output information | |
US10079966B2 (en) | Systems and techniques for capturing images for use in determining reflectance properties of physical objects | |
CN108510556A (en) | Method and apparatus for handling image | |
CN110110696A (en) | Method and apparatus for handling information | |
CN108985178A (en) | Method and apparatus for generating information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190108 |