CN107871339A - Rendering method and device for the color effect of virtual objects in video - Google Patents


Info

Publication number
CN107871339A
CN107871339A (application number CN201711090151.0A)
Authority
CN
China
Prior art keywords
video
virtual objects
scene
video scene
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711090151.0A
Other languages
Chinese (zh)
Other versions
CN107871339B (en)
Inventor
Hugh Ian Roy (休·伊恩·罗伊)
Li Jianyi (李建亿)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pacific Future Technology (shenzhen) Co Ltd
Original Assignee
Pacific Future Technology (shenzhen) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pacific Future Technology (shenzhen) Co Ltd filed Critical Pacific Future Technology (shenzhen) Co Ltd
Priority to CN201711090151.0A priority Critical patent/CN107871339B/en
Publication of CN107871339A publication Critical patent/CN107871339A/en
Application granted granted Critical
Publication of CN107871339B publication Critical patent/CN107871339B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present invention provide a rendering method and device for the color effect of virtual objects in video, belonging to the field of augmented reality. The method includes: identifying the scene identifier of the currently playing video; looking up the virtual object and baked map corresponding to the scene identifier, wherein the baked map is generated in advance according to attribute information of the video scene; and rendering the virtual object according to the baked map. Embodiments of the present invention fuse the color effect of the virtual object with the video scene, giving the user a stronger sense of realism; rendering by means of baked maps improves rendering efficiency and saves CPU resources.

Description

Rendering method and device for the color effect of virtual objects in video
Technical field
The present invention relates to the field of augmented reality, and in particular to a rendering method and device for the color effect of virtual objects in video.
Background technology
Augmented reality (AR) is a technology that superimposes computer-generated virtual objects onto a real scene by means of hardware and software. A user can perceive the presence of virtual objects in the real world through an AR device. For example, when a user wears a head-mounted AR device, the camera on the device captures data of the real environment, and virtual effects generated by a computer are fused with that real-environment data. Application scenarios are varied; for instance, in one's own home, an AR helmet can be worn to blend a virtual decoration effect with the real domestic environment. In fact, the above AR helmet can share a similar structural design with the VR helmets common on the market: with a smartphone and special lenses, the device is a VR device when playing a fully virtual picture, and an AR device when the smartphone's camera is used for virtual-real fusion; the difference lies only in the preset software modules of the smartphone.
With the rapid development of video technology, video-based virtual-real fusion scenes and their lighting-effect generation have become a development trend in augmented reality. However, an AR device is necessary equipment for obtaining video information, and existing AR devices have the following software and hardware defects:
Because virtual objects are all generated in advance by a computer, information about the real environment in the video cannot be obtained, so the light-and-shadow effect of the virtual object cannot blend with and match the real environment in the video. This easily gives the user a sense of falseness and greatly reduces the realism of the virtual object's lighting effect.
With existing AR helmets, installing and removing the mobile phone is inconvenient, and the phone's surface is easily scratched during installation and removal. The clamping plate presses against the phone's battery cover for long periods, which is unfavorable for heat dissipation. For phones of different screen sizes and thicknesses, a complicated structure is needed for adaptive adjustment, yet with such a structure the clamping force cannot be adjusted, which is again unfavorable for heat dissipation. Shaking and wobbling easily occur during use, affecting the user's sense of immersion and possibly even causing discomfort such as dizziness.
Summary of the invention
Embodiments of the present invention provide a rendering method and device for the color effect of virtual objects in video, so as to solve at least one of the above problems in the related art.
In one aspect, an embodiment of the present invention provides a rendering method for the color effect of virtual objects in video, including:
identifying the scene identifier of the currently playing video;
looking up the virtual object and baked map corresponding to the scene identifier, wherein the baked map is generated in advance according to attribute information of the video scene; and
rendering the virtual object according to the baked map.
Optionally, the method further includes: identifying the video scenes contained in the video, and recording the scene identifier of each video scene.
Optionally, the method further includes: obtaining the virtual object corresponding to the video scene; analyzing attribute information in the video scene, and bake-rendering the virtual object according to the attribute information to generate the baked map corresponding to the video scene.
Optionally, the attribute information includes light source information and color information, and analyzing the attribute information in the video scene and bake-rendering the virtual object according to the attribute information includes: determining the light source information corresponding to the video scene; and determining, according to the light source information, the color information corresponding to the virtual object in the video scene.
Optionally, determining, according to the light source information, the color information corresponding to the virtual object in the video scene includes: determining, according to the light source information and the position of the virtual object in the video scene, the target object in the video scene that influences the color effect of the virtual object; obtaining the color information of each pixel on the target object; and determining, according to that color information, the color information corresponding to the virtual object in the video scene.
Optionally, the method is applied to an AR helmet, the AR helmet including a clamping part, a lens part, and a wearing part.
The clamping part includes a base, a substrate, and an inner frame; the substrate and the inner frame are mounted on the base, with the inner frame arranged on the side close to the lens part and the substrate arranged on the side far from the lens part. A clamping device is provided on the substrate and includes a mounting hole, a mounting cover, a first bolt, a guide sleeve, and a guide pin, the mounting cover, first bolt, guide sleeve, and guide pin being arranged in the mounting hole. The mounting hole includes an adjacent first section and second section, the inner diameter of the first section being smaller than that of the second section. The mounting cover is arranged at the outer end of the second section, and an adjusting ring is installed at the end of the second section close to the first section. The inner end of the guide sleeve is provided with a limiting flange that fits the adjusting ring and limits the travel of the guide sleeve. The mounting cover is provided with a shaft hole, through which the first bolt is installed on the mounting cover; the outer end of the first bolt is connected with a first knob, and the inner end of the first bolt is threadedly connected with the inner end of the guide sleeve inside the mounting hole. The outer end of the guide sleeve is provided with a pressing end for pressing against the mobile phone, and the outer wall of the guide sleeve is provided, along its horizontal direction, with a groove adapted to the guide pin; one end of the guide pin is fixed on the inner wall of the mounting hole and the other end is arranged in the groove.
A mobile phone is installed on the lens part. The phone obtains video information of the real scene through its own camera and plays the video, identifies the scene identifier of the currently playing video, looks up the virtual object and baked map corresponding to the scene identifier, the baked map being generated in advance according to attribute information of the video scene, and renders the virtual object according to the baked map.
Optionally, the clamping part of the AR helmet is slidably fitted with the lens part. The lens part is provided with a mounting plate on which the clamping part is installed, the mounting plate is provided with a plurality of rollers at uniform intervals along its width, and the clamping part has a locking mechanism for locking the guide sleeve and the rollers.
Optionally, the locking mechanism of the AR helmet includes a return spring, together with a sleeve and a threaded sleeve arranged symmetrically about the guide sleeve and below it. The tops of the inner ends of the sleeve and the threaded sleeve have a first locking portion adapted to the outer wall of the guide sleeve, and the bottoms of their inner ends have a second locking portion adapted to the size of the rollers. The inner end of the sleeve is provided with a first spring groove and the inner end of the threaded sleeve with a second spring groove; one end of the return spring is installed in the first spring groove and the other end in the second spring groove. A second bolt is installed in the sleeve and the threaded sleeve, the sleeve and the threaded sleeve being connected by the second bolt and a locking nut adapted to it, and at least one end of the second bolt is provided with a second knob.
Optionally, a plurality of support bars extend from the pressing end of the AR helmet, the end of each support bar being provided with a support point that contacts the phone's back shell. A mini fan provided with a touch switch is installed on the support bar. The support bar is provided with at least one through hole, in which an actuator made of shape-memory alloy is arranged; one end of the actuator is connected with the touch switch and the other end abuts the phone's battery cover. When the temperature of the battery cover reaches a warning value, the actuator takes its martensite form and turns the mini fan on through the touch switch; when the temperature of the battery cover is below the warning value, the actuator takes its austenite form and the mini fan is turned off.
The substrate is provided with a groove adapted to the first knob, and the first knob is located in the groove.
In another aspect, an embodiment of the present invention provides a rendering device for the color effect of virtual objects in video, including:
an identification module, configured to identify the scene identifier of the currently playing video;
a lookup module, configured to look up the virtual object and baked map corresponding to the scene identifier, wherein the baked map is generated in advance according to attribute information of the video scene; and
a rendering module, configured to render the virtual object according to the baked map.
Optionally, the device further includes: a recording module, configured to identify the video scenes contained in the video and record the scene identifier of each video scene.
Optionally, the device further includes: an obtaining module, configured to obtain the virtual object corresponding to the video scene; and an analysis module, configured to analyze attribute information in the video scene and bake-render the virtual object according to the attribute information, generating the baked map corresponding to the video scene.
Optionally, the attribute information includes light source information and color information, and the analysis module is configured to determine the light source information corresponding to the video scene and to determine, according to the light source information, the color information corresponding to the virtual object in the video scene.
Optionally, the analysis module is further configured to determine, according to the light source information and the position of the virtual object in the video scene, the target object in the video scene that influences the color effect of the virtual object; obtain the color information of each pixel on the target object; and determine, according to that color information, the color information corresponding to the virtual object in the video scene.
In another aspect, an embodiment of the present invention provides an electronic device, including: at least one processor; and a memory communicatively connected with the at least one processor; wherein
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor so that the at least one processor can perform the rendering method for the color effect of virtual objects in video of any of the above embodiments of the present invention.
It can be seen from the above technical solutions that the rendering method, device, and electronic device for the color effect of virtual objects in video provided by the embodiments of the present invention identify the scene identifier of the currently playing video; look up the virtual object and baked map corresponding to the scene identifier, the baked map being generated in advance according to attribute information of the video scene; and render the virtual object according to the baked map. The embodiments of the present invention fuse the color effect of the virtual object with the video scene, giving the user a stronger sense of realism; rendering by means of baked maps improves rendering efficiency and saves CPU resources. Moreover, through careful design, the mechanical structure of the AR helmet on which the method is based allows the mobile phone to be inserted and removed more easily and to dissipate heat better, and shaking and wobbling are less likely during use, enhancing the user's immersion and sense of realism.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some of the embodiments described in the embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them.
Fig. 1 is a flow chart of a rendering method for the color effect of virtual objects in video provided by an embodiment of the present invention;
Fig. 2 is a flow chart of a rendering method for the color effect of virtual objects in video provided by an embodiment of the present invention;
Fig. 3 is a structural diagram of a rendering device for the color effect of virtual objects in video provided by an embodiment of the present invention;
Fig. 4 is a structural diagram of a rendering device for the color effect of virtual objects in video provided by an embodiment of the present invention;
Fig. 5 is a hardware structure diagram of an electronic device for performing the rendering method for the color effect of virtual objects in video provided by the method embodiments of the present invention;
Fig. 6 is a structural schematic diagram of an AR helmet provided by an embodiment of the present invention;
Fig. 7 is a structural schematic diagram of the clamping device of the AR helmet provided by an embodiment of the present invention;
Fig. 8 is a structural schematic diagram of the locking mechanism of the AR helmet provided by an embodiment of the present invention;
Fig. 9 is a structural schematic diagram of the support bar of the AR helmet provided by an embodiment of the present invention.
Embodiment
In order that those skilled in the art may better understand the technical solutions in the embodiments of the present invention, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention shall fall within the protection scope of the embodiments of the present invention.
The executing agent of the embodiments of the present invention is an electronic device, including but not limited to a mobile phone, a tablet computer, a head-mounted AR device, or AR glasses. To better explain the subsequent embodiments, the application scenario of the present invention is first described. When a user watches a video file with the electronic device, on the basis of presenting the real content of the video file, computer-generated virtual objects are also presented to the user; the virtual objects and the real content coexist in the same video frame, presenting the user, in both perception and experience, with an augmented reality environment in which virtual objects and real content blend together.
Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. Where no conflict arises, the following embodiments and the features in them can be combined with each other.
Fig. 1 is a flow chart of the rendering method for the color effect of virtual objects in video provided by an embodiment of the present invention. As shown in Fig. 1, the rendering method specifically includes:
S101: identify the scene identifier of the currently playing video.
The rendering method for the color effect of virtual objects in video proposed by the embodiment of the present invention is applied in augmented reality scenarios: while a video file is playing, the color effect of the virtual objects appearing in the current scene of the video can be rendered. The virtual objects are simulated by an augmented reality electronic device, and through this device the user can experience the augmented reality effect corresponding to the video.
A so-called video scene generally refers to the video content continuously captured by a single camera shot: it is continuous, and its content remains substantially the same. While a virtual object is within one video scene, because the video content is substantially the same, the color effect of the virtual object across the scene's multiple video frames will not change substantially either. Therefore, the color effect of the virtual object under a video scene can be generated from the attribute information of that scene, avoiding a frame-by-frame determination of color information and improving efficiency. Specifically, the attribute information can include the light source information and color information of the video scene.
Before this step, the video scenes contained in the video need to be identified, and the scene identifier of each video scene recorded.
Specifically, the video scenes contained in the video can be obtained by comparing video frames a preset number of frames apart. As an optional implementation of this embodiment, take the first video frame as the head frame of the first scene and, starting from it, select pairs of frames at a preset interval in turn. Use a feature point extraction algorithm to obtain the feature points on the different objects contained in each frame. A feature point can be a pixel with a certain characteristic in the image, such as a corner or crossing point on an edge, or a pixel whose neighborhood has a certain statistical property; feature point extraction algorithms include, for example, SIFT and SURF, and each feature point carries a multi-dimensional feature vector characterizing its properties. Compare the Euclidean distance between the feature vectors of feature points in the two frames against a preset threshold: if the distance is below the threshold, the two feature points match; otherwise they do not. When the two video frames do not match, the first of the two frames can be taken as the tail frame of the first video scene and the second as the head frame of the second video scene. Then, taking the head frame of the second scene as a new starting point, continue feature point matching between frames to determine the tail frame of the second scene and the head frame of the third scene. All frames from the head frame of the second scene to its tail frame, both inclusive, are the video frames corresponding to the second scene. After the comparison is completed in this way, the video scenes contained in the video are determined.
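The scene-splitting procedure above can be sketched as follows. This is purely an illustrative aid, not part of the claimed method: descriptors are plain vectors, and the distance threshold and minimum match ratio are assumed values.

```python
import math

def features_match(vec_a, vec_b, threshold):
    """Two feature points match when the Euclidean distance between
    their feature vectors is below a preset threshold."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(vec_a, vec_b)))
    return dist < threshold

def split_scenes(frame_descriptors, threshold=0.5, min_match_ratio=0.6):
    """Group sampled frames into scenes: when too few feature points of a
    frame match the previously sampled frame, a new scene starts there.
    frame_descriptors: one list of feature vectors per sampled frame.
    Returns (head_frame_index, tail_frame_index) pairs, one per scene."""
    scenes, start = [], 0
    for i in range(1, len(frame_descriptors)):
        prev, cur = frame_descriptors[i - 1], frame_descriptors[i]
        matched = sum(
            1 for f in cur if any(features_match(f, g, threshold) for g in prev)
        )
        if not cur or matched / len(cur) < min_match_ratio:
            scenes.append((start, i - 1))  # close the current scene at its tail frame
            start = i                      # this frame heads the next scene
    scenes.append((start, len(frame_descriptors) - 1))
    return scenes
```

In practice the descriptors would come from SIFT or SURF as described above, and the returned indices would map back to real frame numbers through the sampling interval.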
After the video scenes contained in the video are identified, a scene identifier can be determined and recorded for each video scene, serving as the unique identifier of its corresponding scene. In this step, a video scene recognition model can be trained in advance, and the currently playing video input into the model to obtain the corresponding video scene and its scene identifier.
S102: look up the virtual object and baked map corresponding to the scene identifier.
The baked map is generated in advance according to the attribute information of the video scene. Before this step, the baked map of the virtual object in each video scene needs to be generated. When a light source shines into a video scene, highlights (including reflected color) and shadows form on the surface of the virtual object; through baking, these highlights and shadows can be rendered into the form of a texture map. During video playback, the baked map is applied directly to the corresponding virtual object, without computing the lighting information in the video scene in real time to obtain the virtual object's color effect. A baked map is thus a texture map of the color effect produced on the virtual object by the objects in the video scene when the virtual object receives illumination. Its generation includes: obtaining the virtual object corresponding to the video scene; analyzing the attribute information in the video scene; and bake-rendering the virtual object according to the attribute information, generating the baked map corresponding to the video scene.
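As a minimal sketch of what "baking" precomputes, the following function derives the lit color of a single texel with a simple Lambertian model. The patent does not specify a shading model, so the formula and all names here are illustrative assumptions.

```python
def bake_texel(albedo, light_color, intensity, n_dot_l):
    """Precompute one texel of a baked map: the surface's base color
    modulated by the light's color, its intensity, and the incidence
    angle. albedo and light_color are (R, G, B) tuples in 0..255;
    n_dot_l is the cosine between the surface normal and the light
    direction (clamped to 0 for back-facing light)."""
    shade = max(n_dot_l, 0.0) * intensity
    return tuple(
        min(255, round(a * lc / 255 * shade))
        for a, lc in zip(albedo, light_color)
    )
```

A full bake would evaluate this once per texel of the virtual object's unwrapped surface, writing the results into the texture that step S103 later applies; playback then needs only a texture lookup instead of a lighting computation.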
Specifically, a virtual object is an object that the user can see superimposed on the video scene through the electronic device. It can include material pictures (such as images of real people, animals, or articles), special effects (such as haze, steam, or motion-trail effects), and natural phenomena (such as rain, snow, rainbows, or sun halos), or it can replace some person, animal, article, or piece of information in the video scene. Virtual objects can be static or dynamic; the present invention does not limit this. A virtual object corresponding to a video scene can be one whose own features match the scene, or one embodied by the scene together with its surroundings. Optionally, the virtual objects corresponding to different video scenes, and their positions within those scenes, can be preset.
It should be noted that the attribute information includes but is not limited to the light source information and color information of the video scene. In this step, the light source information corresponding to the video scene is determined first; the color information corresponding to the virtual object in the video scene is then determined according to the light source information; finally the virtual object is bake-rendered according to the light source information and color information.
Specifically, the light source information includes illumination intensity, light source position, and so on. By analyzing the pictures of the video scene, the place, geographical position, current season, and time of the scene can be obtained, and the objects appearing in the scene can be analyzed to determine the corresponding place. For example, if the place of the video scene is determined to be indoors, the target object serving as the light source is searched for in the scene (such as a lamp, the screen in a cinema, or an object glowing in the dark); the position of this target object in the scene is the light source position, the illumination intensity of the light source is determined from the brightness of the scene, and the light source position and illumination intensity together serve as the lighting information. If the place of the video scene is determined to be outdoors, the lighting information of the outdoor environment is determined by the illumination parameters of the sun. Besides regional differences, the sun's trajectory, and hence its angle and height relative to the ground, differs across seasons and across moments of the day, so the sun's illumination parameters need to be determined from latitude information and current time information.
Optionally, the geographical position corresponding to the video scene is determined from the objects appearing in it (for example, if Beijing Railway Station appears in the scene, the real scene is determined to be in Beijing), as is the current season (for example, inferred from the scenery and/or the clothing of the people in the scene); the longitude and latitude of the scene's location are determined from the geographical position. Furthermore, the current time can be determined from the brightness of the scene's pictures, so that the sun's illumination parameters (elevation angle and azimuth) can be calculated. There are many methods for computing the elevation angle and azimuth, and the present invention does not limit them. Specifically, the sun's light source information can include illumination intensity, sun position, and so on.
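One of the many elevation-angle computations left open above is the classical spherical-astronomy formula, sketched below. The declination and hour angle are assumed to have already been derived from the season and time of day recovered from the scene.

```python
import math

def solar_elevation(latitude_deg, declination_deg, hour_angle_deg):
    """Solar elevation h from the standard relation
    sin(h) = sin(lat)sin(dec) + cos(lat)cos(dec)cos(HA),
    where the hour angle HA is 0 at solar noon and grows by
    15 degrees per hour. Returns degrees above the horizon."""
    lat, dec, ha = (math.radians(v) for v in
                    (latitude_deg, declination_deg, hour_angle_deg))
    sin_h = (math.sin(lat) * math.sin(dec)
             + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(sin_h))
```

For example, at the equator at an equinox (latitude 0, declination 0) the sun stands directly overhead at solar noon (elevation 90 degrees) and on the horizon six hours later (hour angle 90, elevation 0).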
As an optional implementation of this embodiment, determining, according to the light source information, the color information corresponding to the virtual object in the video scene includes: determining, according to the light source information and the position of the virtual object in the video scene, the target object in the video scene that influences the color effect of the virtual object; obtaining the color information of each pixel on the target object; and determining, according to that color information, the color information corresponding to the virtual object in the video scene.
Specifically, having obtained the position of the light source in the video scene, the incident direction of the light source relative to each object in the scene can be determined. When the light source shines on such an object along the incident direction, highlights and shadows form on its surface. Once the position of the virtual object is determined, if the reflected light of some object passes exactly through the virtual object, that is, indirect illumination forms on the virtual object's surface, then the color of that object influences the color effect of the virtual object's surface. For example, if a red billboard forms a highlight on the virtual object's surface, the surface takes on a reddish reflected color; that object is therefore taken as the target object influencing the virtual object's color effect.
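One way to make the "reflected light passes through the virtual object" test concrete is a mirror-reflection check, sketched below. This is an illustrative geometric simplification, not the patent's method: the perfect-mirror assumption, the names, and the angular tolerance are all assumed.

```python
import math

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _norm(v):
    n = math.sqrt(_dot(v, v))
    return tuple(x / n for x in v)

def reflect(incident, normal):
    """Mirror reflection r = d - 2 (d . n) n, with n a unit normal."""
    d = _dot(incident, normal)
    return tuple(i - 2 * d * n for i, n in zip(incident, normal))

def influences_virtual_object(light_pos, obj_pos, obj_normal, virt_pos,
                              cos_tol=0.95):
    """True when light bouncing off the object at obj_pos travels, within
    an angular tolerance, toward the virtual object, so the object's
    color contributes indirect illumination to the virtual object."""
    incident = _norm(_sub(obj_pos, light_pos))
    reflected = reflect(incident, obj_normal)
    to_virtual = _norm(_sub(virt_pos, obj_pos))
    return _dot(reflected, to_virtual) >= cos_tol
```

For instance, light falling straight down onto a horizontal surface reflects straight back up, so only a virtual object above the surface would be flagged as receiving its indirect illumination.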
After the target object is determined, if its color is a pure color, that is, the color information of every pixel on the target object is identical, the color information of any one pixel can directly be determined as the color corresponding to the virtual object in the video scene. Optionally, the color information can be an RGB value or a gray value; the present invention does not limit this.
If the color of the target object is not pure, that is, the color information differs across its pixels, the color information of the pixels can be weighted, and the color corresponding to the weighted value determined as the color corresponding to the virtual object in the video scene. Specifically, when the color information is an RGB value, the RGB value of each pixel on the target object is obtained; when the color information is a gray value, the gray value of each pixel is obtained. The color parameters of the pixels are then weighted, and the resulting weighted value is the color information corresponding to the virtual object in the video scene.
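The weighting step can be sketched as a weighted average over the target object's pixels. Uniform weights are an assumption here, since the patent does not fix a particular weighting scheme.

```python
def weighted_color(pixels, weights=None):
    """Weighted average of per-pixel color values. pixels is a list of
    equal-length tuples (RGB triples, or gray values as 1-tuples);
    weights defaults to uniform. Returns the averaged tuple, rounded
    per channel."""
    if weights is None:
        weights = [1.0] * len(pixels)
    total = sum(weights)
    channels = len(pixels[0])
    return tuple(
        round(sum(w * p[c] for p, w in zip(pixels, weights)) / total)
        for c in range(channels)
    )
```

For example, averaging a pure red and a pure blue pixel with equal weights yields a purple tone, which would then be used as the indirect color cast on the virtual object.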
After the lighting information and color information of the video scene have been obtained, the position and intensity of the reflected light that the target object casts onto the virtual object's surface can be derived from the scene's lighting information. The reflected-light positions are tallied to determine the position and extent, on the virtual object's surface, of the indirect illumination formed by the target object. That position is used as the color-rendering position, its extent as the color-rendering area, and the reflected-light intensity as the color-rendering intensity. According to this color-rendering information, the color is painted onto the virtual object, completing the bake render of the virtual object and generating the baked map corresponding to the video scene.
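The bake step can be illustrated as blending the target object's reflected color into the virtual object's texture over the color-rendering area. The linear blend below is an assumption for illustration only; the patent specifies the inputs (position, area, intensity) but not a shading model:

```python
def bake_indirect_color(texture, region, color, intensity):
    """Blend a reflected color into a baked texture over a region.

    texture  : 2D list of (r, g, b) texel tuples (the object's base map).
    region   : iterable of (x, y) texel coordinates, i.e. the color-rendering
               area derived from the tallied reflected-light positions.
    color    : (r, g, b) reflected color of the target object.
    intensity: blend factor in [0, 1], the color-rendering intensity.
    """
    for x, y in region:
        r, g, b = texture[y][x]
        # Linear interpolation toward the reflected color (assumed model).
        texture[y][x] = (
            round(r + (color[0] - r) * intensity),
            round(g + (color[1] - g) * intensity),
            round(b + (color[2] - b) * intensity),
        )
    return texture
```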
S103: render the virtual object according to the baked map.

After the baked map of the virtual object corresponding to the scene identifier has been found through step S102, the baked map can be attached to the virtual object, completing the rendering of the virtual object. Because the baked map already contains the virtual object's color effect, the lighting information of each video scene need not be computed in real time during playback, which greatly improves rendering efficiency and saves CPU resources.
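The lookup-and-attach behavior of steps S102 and S103 can be sketched as follows. The store layout and the `texture` field are illustrative assumptions, since the patent describes the behavior rather than an API:

```python
class BakedMapRenderer:
    """Look up the pre-generated baked map by scene identifier and apply it.

    bake_store maps scene_id -> (virtual_object, baked_map); both the store
    layout and the attach mechanism are assumed for illustration.
    """

    def __init__(self, bake_store):
        self.bake_store = bake_store

    def render(self, scene_id):
        entry = self.bake_store.get(scene_id)
        if entry is None:
            return None  # no virtual object registered for this scene
        virtual_object, baked_map = entry
        # Attaching the baked map stands in for per-frame lighting work.
        virtual_object["texture"] = baked_map
        return virtual_object
```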
The embodiment of the present invention identifies the scene identifier of the currently playing video; looks up a virtual object and a baked map corresponding to the scene identifier, the baked map having been generated in advance from the attribute information of the video scene; and renders the virtual object according to the baked map. The embodiment thus fuses the virtual object's color effect with the video scene, giving the user a stronger sense of realism, and rendering via a baked map improves rendering efficiency and saves CPU resources.
Fig. 2 is a flowchart of the method for rendering a color effect of a virtual object in a video according to an embodiment of the present invention. As shown in Fig. 2, this embodiment is a specific implementation of the embodiment shown in Fig. 1, so the concrete implementation and beneficial effects of the steps of the Fig. 1 embodiment are not repeated here. The method specifically includes:
S201: identify the video scenes contained in the video, and record the scene identifier of each video scene.

S202: obtain the virtual object corresponding to the video scene.

S203: analyze the attribute information of the video scene, bake-render the virtual object according to the attribute information, and generate the baked map corresponding to the video scene.
It should be noted that the attribute information includes, but is not limited to, the light source information and color information of the video scene. In this step, the light source information corresponding to the video scene is determined first; the virtual object's corresponding color information in the video scene is then determined from the light source information; finally, the virtual object is bake-rendered according to the light source information and color information.
S204: identify the scene identifier of the currently playing video.

S205: look up the virtual object and baked map corresponding to the scene identifier.

The baked map is generated in advance from the attribute information of the video scene.

S206: render the virtual object according to the baked map.
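Steps S201 through S206 split naturally into an offline bake phase and a playback phase. The sketch below illustrates that split; the helper callbacks (`get_virtual_object`, `bake`) and the data layout are assumed names, not part of the patent text:

```python
def run_pipeline(scenes, get_virtual_object, bake):
    """Offline phase (S201-S203) followed by a playback-phase renderer.

    scenes            : list of dicts with "id" and "attributes" keys.
    get_virtual_object: callback scene -> virtual object (assumed).
    bake              : callback (object, attributes) -> baked map (assumed).
    Returns a function that performs S204-S206 for a given scene id.
    """
    store = {}
    for scene in scenes:                          # S201: record scene ids
        obj = get_virtual_object(scene)           # S202: obtain object
        baked = bake(obj, scene["attributes"])    # S203: pre-bake the map
        store[scene["id"]] = (obj, baked)

    def render_current(scene_id):
        obj, baked = store[scene_id]              # S205: look up by id
        obj["texture"] = baked                    # S206: attach baked map
        return obj

    return render_current
```

Because all lighting work happens in the offline loop, the playback function does no per-frame lighting computation, matching the efficiency claim of the embodiment.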
The embodiment of the present invention identifies the scene identifier of the currently playing video; looks up a virtual object and a baked map corresponding to the scene identifier, the baked map having been generated in advance from the attribute information of the video scene; and renders the virtual object according to the baked map. The embodiment thus fuses the virtual object's color effect with the video scene, giving the user a stronger sense of realism, and rendering via a baked map improves rendering efficiency and saves CPU resources.
Fig. 3 is a structural diagram of the apparatus for rendering a color effect of a virtual object in a video according to an embodiment of the present invention. As shown in Fig. 3, the apparatus specifically includes an identification module 1000, a lookup module 2000, and a rendering module 3000.

The identification module 1000 is configured to identify the scene identifier of the currently playing video. The lookup module 2000 is configured to look up the virtual object and baked map corresponding to the scene identifier, the baked map having been generated in advance from the attribute information of the video scene. The rendering module 3000 is configured to render the virtual object according to the baked map.
The rendering apparatus provided by this embodiment of the present invention is specifically configured to perform the method of the embodiment shown in Fig. 1; its implementation principles, methods, functions, and uses are similar to those of the Fig. 1 embodiment and are not repeated here.
Fig. 4 is a structural diagram of the apparatus for rendering a color effect of a virtual object in a video according to an embodiment of the present invention. As shown in Fig. 4, the apparatus specifically includes an identification module 1000, a lookup module 2000, and a rendering module 3000.

The identification module 1000 is configured to identify the scene identifier of the currently playing video. The lookup module 2000 is configured to look up the virtual object and baked map corresponding to the scene identifier, the baked map having been generated in advance from the attribute information of the video scene. The rendering module 3000 is configured to render the virtual object according to the baked map.
Optionally, the apparatus further includes a recording module 4000.

The recording module 4000 is configured to identify the video scenes contained in the video and record the scene identifier of each video scene.
Optionally, the apparatus further includes an acquisition module 5000 and an analysis module 6000.

The acquisition module 5000 is configured to obtain the virtual object corresponding to the video scene. The analysis module 6000 is configured to analyze the attribute information of the video scene, bake-render the virtual object according to the attribute information, and generate the baked map corresponding to the video scene.
Optionally, the attribute information includes light source information and color information, and the analysis module 6000 is configured to determine the light source information corresponding to the video scene, and to determine, according to the light source information, the virtual object's corresponding color information in the video scene.
Optionally, the analysis module 6000 is further configured to determine, according to the light source information and the position of the virtual object in the video scene, the target object in the video scene that influences the virtual object's color effect; to obtain the color information of each pixel on the target object; and to determine, according to that color information, the virtual object's corresponding color information in the video scene.
The rendering apparatus provided by this embodiment of the present invention is specifically configured to perform the methods of the embodiments shown in Fig. 1 and/or Fig. 2; its implementation principles, methods, functions, and uses are similar to those of the embodiments shown in Fig. 1 and/or Fig. 2 and are not repeated here.

The rendering apparatus of the above embodiments of the present invention may be arranged in the above electronic device independently, as one of its software or hardware functional units, or may be integrated in the processor as one of its functional modules, to perform the method for rendering a color effect of a virtual object in a video of the embodiments of the present invention.
Fig. 5 is a hardware structural diagram of an electronic device for performing the method for rendering a color effect of a virtual object in a video provided by the method embodiments of the present invention. As shown in Fig. 5, the electronic device includes:

one or more processors 5100 and a memory 5200, one processor 5100 being taken as the example in Fig. 5.

The device performing the rendering method may further include an input means 5300 and an output means 5400.

The processor 5100, the memory 5200, the input means 5300, and the output means 5400 may be connected by a bus or in other ways; connection by a bus is taken as the example in Fig. 5.
As a non-volatile computer-readable storage medium, the memory 5200 may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for rendering a color effect of a virtual object in a video of the embodiments of the present invention. By running the non-volatile software programs, instructions, and modules stored in the memory 5200, the processor 5100 executes the various functional applications and data processing of the server, thereby implementing the method for rendering a color effect of a virtual object in a video.
The memory 5200 may include a program storage area and a data storage area: the program storage area may store the operating system and the application program required by at least one function, and the data storage area may store data created through use of the rendering apparatus provided by the embodiments of the present invention, and the like. In addition, the memory 5200 may include high-speed random-access memory and may also include non-volatile memory, for example at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state memory device. In some embodiments, the memory 5200 optionally includes memory arranged remotely relative to the processor; such remote memory may be connected to the rendering apparatus via a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The input means 5300 may receive input numeric or character information and generate key-signal inputs related to user settings and function control of the rendering apparatus. The input means 5300 may include devices such as a press module.

The one or more modules are stored in the memory 5200 and, when executed by the one or more processors 5100, perform the method for rendering a color effect of a virtual object in a video.
The electronic device of the embodiments of the present invention exists in a variety of forms, including but not limited to:

(1) Mobile communication devices: devices characterized by mobile communication functions, with voice and data communication as the main goal. This class includes smart phones (e.g., iPhone), multimedia phones, feature phones, and low-end phones.

(2) Ultra-mobile personal computer devices: devices belonging to the category of personal computers that have computing and processing functions and generally also have mobile Internet access. This class includes PDA, MID, and UMPC devices, such as the iPad.

(3) Portable entertainment devices: devices that can display and play multimedia content. This class includes audio and video players (e.g., iPod), handheld devices, e-book readers, smart toys, and portable in-vehicle navigation devices.

(4) Servers: devices that provide computing services. A server comprises a processor, a hard disk, memory, a system bus, and the like; its architecture is similar to that of a general-purpose computer, but because it must provide highly reliable services, the requirements on processing capability, stability, reliability, security, scalability, manageability, and the like are higher.

(5) Other electronic apparatuses with data interaction functions.
The apparatus embodiments described above are merely illustrative. The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules: they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment's solution. Those of ordinary skill in the art can understand and implement this without creative effort.
Embodiments of the present invention provide a non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by an electronic device, cause the electronic device to perform the method for rendering a color effect of a virtual object in a video of any of the above method embodiments.

Embodiments of the present invention provide a computer program product, the computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by an electronic device, cause the electronic device to perform the method for rendering a color effect of a virtual object in a video of any of the above method embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware. Based on this understanding, the essence of the above technical solutions, or the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, where the computer-readable recording medium includes any mechanism that stores or transmits information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium includes read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash media, and electrical, optical, acoustic, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals). The computer software product includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform the methods described in the various embodiments or in certain parts of an embodiment.
In another embodiment, Fig. 6 provides an AR helmet as an execution device of the above method for rendering a color effect of a virtual object in a video. The AR helmet includes a clamping part 1, a lens part 2, and a head-wearing part 3. The clamping part 1 includes a base 101, a substrate 102, and an inner frame 103; the substrate 102 and the inner frame 103 are vertically mounted on the base 101. The substrate 102 is a plate-shaped structure, and the inner frame 103 is a frame structure adapted to the lens part. The substrate 102 and the inner frame 103 are located at the front and rear of the base 101: the inner frame 103 is arranged on the side close to the lens part 2, the substrate 102 is arranged on the side away from the lens part 2, and an electronic device such as a mobile phone sits between the substrate 102 and the inner frame 103.
Referring to Figs. 7 and 8, a further improvement of this embodiment is as follows. A clamping mechanism 4 for clamping a mobile phone is provided on the substrate 102. The clamping mechanism 4 includes a mounting hole 401, a mounting cover 402, a first bolt 403, a guide sleeve 404, and a guide pin 405. The mounting hole 401 has a first end away from the inner frame 103 and a second end close to the inner frame. Specifically, the mounting hole 401 includes an adjacent first section and second section; the inner diameter of the first section is smaller than that of the second section; the cover 402 is mounted on the outer end of the second section; an adjusting ring 407 is provided at the end of the second section close to the first section; and the inner end of the guide sleeve 404 is provided with a limiting flange 408 adapted to the adjusting ring 407 and limiting the travel of the guide sleeve.

The mounting cover 402 is mounted at the first end and is provided with a shaft hole 4021; the first bolt 403 passes through the shaft hole 4021 and is mounted on the mounting cover 402. The outer end of the first bolt 403 is fixedly connected with a first knob 406, and the inner end of the first bolt 403 is threadedly connected with the inner end of the guide sleeve 404 in the mounting hole 401. The outer end of the guide sleeve 404 is provided with a pressing end 4041 for pressing against the mobile phone; the outer wall of the guide sleeve 404 is provided, along its horizontal direction, with a groove (not shown) adapted to the guide pin 405; one end of the guide pin 405 is mounted on the inner wall of the mounting hole 401, and the other end is arranged in the groove. When the user turns the first knob 406, the first screw rod 403 is driven to rotate, giving the guide sleeve 404 a tendency to rotate and displace forward or backward; because of the guide pin, the guide sleeve can only displace forward or backward, so the pressing end 4041 presses the mobile phone against the inner frame 103. This arrangement not only achieves slow advance of the pressing end with adjustable pressing force, but also avoids damage to the phone's battery cover. Fixing the phone by the point structure of the supporting end holds it better than the clamping plates or face shells of the prior art and does not affect the phone's heat dissipation, and the structure is highly adaptable, suiting phones of various screen sizes and thicknesses.
It has been found that some mobile phones do not provide, in an AR scene, functions for switching the playing program or adjusting the volume, so when such operations are needed most users can only take the phone out of the clamping mechanism to switch playback and adjust the sound and picture. The applicant therefore designed the clamping part 1 and the lens part 2 as a sliding fit. Specifically, a mounting plate 201 is provided on the lens part 2, the clamping part 1 is mounted on the mounting plate 201, and the mounting plate 201 is provided with a plurality of rollers 2011 at uniform intervals along its width direction. More advantageously, with the clamping part in a sliding fit with the lens part, the phone can be slid out when it needs to be operated and the clamping part pushed back into place for viewing after the operation is completed, which is simple and convenient.
Referring to Fig. 8, in this embodiment a locking mechanism 104 capable of locking the guide sleeve and the rollers is further provided on the clamping part 1. The locking mechanism 104 not only prevents the first bolt from backing off, but also locks the sliding fit between the clamping part and the lens part 2. Specifically, the locking mechanism 104 of this embodiment includes a return spring 1041, and a sleeve 1042 and a threaded sleeve 1043 arranged symmetrically about the guide sleeve 404 and below it. The upper parts of the inner ends of the sleeve 1042 and the threaded sleeve 1043 have first locking portions 1044 adapted to the outer wall dimensions of the guide sleeve, and the lower parts of their inner ends have second locking portions 1045 adapted to the dimensions of the rollers 2011. The inner end of the sleeve 1042 is provided with a first spring groove 1046, and the inner end of the threaded sleeve 1043 has a second spring groove 1047; one end of the return spring 1041 is mounted in the first spring groove 1046 and the other end in the second spring groove 1047. A second bolt 1048 is mounted in the sleeve 1042 and the threaded sleeve 1043; the sleeve 1042 and the threaded sleeve 1043 are connected by the second bolt 1048 and a locking nut 1049 adapted to it, and at least one end of the second bolt 1048 is provided with a second knob 1050. The locking mechanism 104 can thus both fix the guide sleeve 404 and lock the sliding fit between the clamping part 1 and the lens part 2, achieving multiple functions with one structure and simplifying the design.
In addition, the applicant further found that most existing AR helmets have no phone heat-dissipation structure, or achieve phone cooling through complex structures such as temperature sensors and controllers, which are not only complicated and costly but also greatly increase the volume of the AR helmet, preventing a lightweight design. The applicant therefore improves on this basis. Referring to Fig. 9, in this embodiment a plurality of support rods 5 parallel to the phone's battery cover extend from the pressing end 4041, and the end of each support rod 5 is provided with a supporting point 501 that contacts the battery cover. A miniature fan 6 is mounted on the support rod 5 and is provided with a touch switch (not shown). The support rod 5 is provided with at least one through hole 502, in which an actuator 503 made of a shape-memory alloy is arranged; one end of the actuator 503 is connected to the touch switch and the other end abuts the battery cover. When the battery-cover temperature reaches a warning value, the actuator 503 assumes its martensite form and turns on the miniature fan through the touch switch; when the battery-cover temperature is below the warning value, the actuator 503 assumes its austenite form and the miniature fan is turned off. Using the shape change of a shape-memory alloy with temperature to switch the miniature fan on and off is not only more precise, benefiting phone cooling and avoiding damage to the phone, but also requires no control structure, simplifying the cooling arrangement and reducing production cost and installation space.
Furthermore, a groove adapted to the first knob may be provided on the substrate 102, with the first knob 406 located in the groove. Recessing the knob into the groove keeps the outer surface of the substrate planar and simplifies the appearance.

A smartphone is mounted on the lens part of the above AR helmet. The smartphone obtains video information of a real scene through its own camera and plays the video, identifies the scene identifier of the currently playing video, and looks up the virtual object and baked map corresponding to the scene identifier, the baked map having been generated in advance from the attribute information of the video scene; the virtual object is then rendered according to the baked map.
Finally, it should be noted that the above embodiments are merely intended to illustrate, not to limit, the technical solutions of the embodiments of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and that such modifications or replacements do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. A method for rendering a color effect of a virtual object in a video, characterized by comprising:
    identifying a scene identifier of a currently playing video;
    looking up a virtual object and a baked map corresponding to the scene identifier, wherein the baked map is generated in advance from attribute information of the video scene; and
    rendering the virtual object according to the baked map.
  2. The method according to claim 1, characterized in that before identifying the scene identifier of the currently playing video, the method further comprises:
    identifying video scenes contained in a video, and recording a scene identifier of each video scene.
  3. The method according to claim 1 or 2, characterized in that the method further comprises:
    obtaining a virtual object corresponding to the video scene; and
    analyzing the attribute information of the video scene, bake-rendering the virtual object according to the attribute information, and generating the baked map corresponding to the video scene.
  4. The method according to claim 3, characterized in that the attribute information comprises light source information and color information, and analyzing the attribute information of the video scene and bake-rendering the virtual object according to the attribute information comprises:
    determining light source information corresponding to the video scene; and
    determining, according to the light source information, color information of the virtual object corresponding to the video scene.
  5. The method according to claim 4, characterized in that determining, according to the light source information, the color information of the virtual object corresponding to the video scene comprises:
    determining, according to the light source information and a position of the virtual object in the video scene, a target object in the video scene that influences the color effect of the virtual object; and
    obtaining color information of each pixel on the target object, and determining, according to that color information, the color information of the virtual object corresponding to the video scene.
  6. The method according to any one of claims 1-5, characterized in that:
    the method is applied to an AR helmet, the AR helmet comprising a clamping part, a lens part, and a head-wearing part;
    the clamping part comprises a base, a substrate, and an inner frame, the substrate and the inner frame being mounted on the base, the inner frame being arranged on a side close to the lens part and the substrate being arranged on a side away from the lens part; a clamping mechanism is provided on the substrate, the clamping mechanism comprising a mounting hole, a mounting cover, a first bolt, a guide sleeve, and a guide pin, the mounting cover, the first bolt, the guide sleeve, and the guide pin being arranged in the mounting hole; the mounting hole comprises an adjacent first section and second section, an inner diameter of the first section being smaller than an inner diameter of the second section; the mounting cover is mounted on an outer end of the second section; an adjusting ring is provided at an end of the second section close to the first section; an inner end of the guide sleeve is provided with a limiting flange adapted to the adjusting ring and limiting the travel of the guide sleeve; the mounting cover is provided with a shaft hole, the first bolt passing through the shaft hole and being mounted on the mounting cover; an outer end of the first bolt is fixedly connected with a first knob; an inner end of the first bolt is threadedly connected with the inner end of the guide sleeve in the mounting hole; an outer end of the guide sleeve is provided with a pressing end for pressing against a mobile phone; an outer wall of the guide sleeve is provided, along its horizontal direction, with a groove adapted to the guide pin, one end of the guide pin being mounted on an inner wall of the mounting hole and the other end being arranged in the groove;
    wherein a mobile phone is mounted on the lens part; the mobile phone obtains video information of a real scene through its own camera and plays the video, identifies the scene identifier of the currently playing video, and looks up the virtual object and baked map corresponding to the scene identifier, wherein the baked map is generated in advance from the attribute information of the video scene, and renders the virtual object according to the baked map.
  7. The method according to claim 6, characterized in that the clamping part of the AR helmet is in a sliding fit with the lens part; the lens part is provided with a mounting plate, the clamping part is mounted on the mounting plate, the mounting plate is provided with a plurality of rollers at uniform intervals along its width direction, and the clamping part has a locking mechanism for locking the guide sleeve and the rollers.
  8. The method according to claim 7, characterized in that the locking mechanism of the AR helmet comprises a return spring, and a sleeve and a threaded sleeve arranged symmetrically about the guide sleeve and below it; upper parts of the inner ends of the sleeve and the threaded sleeve have first locking portions adapted to the outer wall dimensions of the guide sleeve, and lower parts of their inner ends have second locking portions adapted to the dimensions of the rollers; the inner end of the sleeve is provided with a first spring groove, and the inner end of the threaded sleeve has a second spring groove, one end of the return spring being mounted in the first spring groove and the other end in the second spring groove; a second bolt is mounted in the sleeve and the threaded sleeve, the sleeve and the threaded sleeve being connected by the second bolt and a locking nut adapted to the second bolt, and at least one end of the second bolt is provided with a second knob.
  9. The method according to claim 6, wherein a plurality of support rods extend from the pressing end of the AR headset; the end of each support rod is provided with a support point connected to the battery cover of the mobile phone; a miniature fan is arranged on the support rod and provided with a touch switch; the support rod is provided with at least one through hole in which an actuator made of shape-memory alloy is mounted, one end of the actuator being connected to the touch switch and the other end abutting the battery cover; when the temperature of the battery cover reaches a warning value, the actuator assumes its martensite form and turns on the miniature fan via the touch switch, and when the temperature of the battery cover is below the warning value, the actuator assumes its austenite form and the miniature fan is turned off;
    the base plate is provided with a groove matching the first screwing member, the first screwing member being located in the groove.
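The thermal behavior claimed above (fan on at or above the warning temperature, off below it) can be sketched as a simple control function. This is an illustrative model only, not code from the patent: the threshold value and all names are hypothetical, and the phase assignments follow the claim's wording rather than any metallurgical data.

```python
WARNING_TEMP_C = 45.0  # hypothetical warning value for the battery cover


def actuator_phase(cover_temp_c, warning_temp_c=WARNING_TEMP_C):
    """Phase of the shape-memory-alloy actuator, as stated in the claim:
    martensite once the cover reaches the warning value, austenite below it."""
    return "martensite" if cover_temp_c >= warning_temp_c else "austenite"


def fan_on(cover_temp_c, warning_temp_c=WARNING_TEMP_C):
    # In its martensite form the actuator presses the touch switch,
    # turning the miniature fan on; in austenite form the fan stays off.
    return actuator_phase(cover_temp_c, warning_temp_c) == "martensite"
```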
  10. A rendering apparatus for the color effect of a virtual object in a video, comprising:
    an identification module for identifying the scene identifier of the currently playing video;
    a search module for finding the virtual object and the baked map corresponding to the scene identifier, wherein the baked map is generated in advance according to the attribute information of the video scene;
    a rendering module for rendering the virtual object according to the baked map.
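The three modules of claim 10 can be sketched as one class with a method per module. This is a hypothetical illustration of the claimed structure, not an implementation from the patent: the asset-store mapping, the `scene_id` key, and all names are assumptions.

```python
class VirtualObjectRenderer:
    """Sketch of the claimed device: identification, search, and rendering modules."""

    def __init__(self, asset_store):
        # asset_store maps a scene identifier to a (virtual_object, baked_map)
        # pair prepared in advance from the video scene's attribute information.
        self._assets = asset_store

    def identify_scene(self, frame_meta):
        # Identification module: read the scene identifier of the playing video.
        return frame_meta["scene_id"]

    def lookup(self, scene_id):
        # Search module: find the virtual object and its pre-baked map.
        return self._assets[scene_id]

    def render(self, frame_meta):
        # Rendering module: apply the baked map to the virtual object.
        obj, baked_map = self.lookup(self.identify_scene(frame_meta))
        return {"object": obj, "applied_map": baked_map}
```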
CN201711090151.0A 2017-11-08 2017-11-08 Rendering method and device for color effect of virtual object in video Active CN107871339B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711090151.0A CN107871339B (en) 2017-11-08 2017-11-08 Rendering method and device for color effect of virtual object in video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711090151.0A CN107871339B (en) 2017-11-08 2017-11-08 Rendering method and device for color effect of virtual object in video

Publications (2)

Publication Number Publication Date
CN107871339A true CN107871339A (en) 2018-04-03
CN107871339B CN107871339B (en) 2019-12-24

Family

ID=61752673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711090151.0A Active CN107871339B (en) 2017-11-08 2017-11-08 Rendering method and device for color effect of virtual object in video

Country Status (1)

Country Link
CN (1) CN107871339B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110166760A (en) * 2019-05-27 2019-08-23 浙江开奇科技有限公司 Image treatment method and terminal device based on panoramic video image
CN110354500A (en) * 2019-07-15 2019-10-22 网易(杭州)网络有限公司 Effect processing method, device, equipment and storage medium
CN110460892A (en) * 2018-05-08 2019-11-15 日本聚逸株式会社 Dynamic image dissemination system, dynamic image distribution method and recording medium
CN110852143A (en) * 2018-08-21 2020-02-28 脸谱公司 Interactive text effects in augmented reality environments
CN111311757A (en) * 2020-02-14 2020-06-19 惠州Tcl移动通信有限公司 Scene synthesis method and device, storage medium and mobile terminal
CN111340684A (en) * 2020-02-12 2020-06-26 网易(杭州)网络有限公司 Method and device for processing graphics in game
CN111866489A (en) * 2019-04-29 2020-10-30 浙江开奇科技有限公司 Method for realizing immersive panoramic teaching
CN111932641A (en) * 2020-09-27 2020-11-13 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
CN112449210A (en) * 2019-08-28 2021-03-05 北京字节跳动网络技术有限公司 Sound processing method, sound processing device, electronic equipment and computer readable storage medium
CN113110731A (en) * 2019-12-25 2021-07-13 华为技术有限公司 Method and device for generating media content
CN116245998A (en) * 2023-05-09 2023-06-09 北京百度网讯科技有限公司 Rendering map generation method and device, and model training method and device
WO2024067159A1 (en) * 2022-09-28 2024-04-04 北京字跳网络技术有限公司 Video generation method and apparatus, electronic device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101710429A (en) * 2009-10-12 2010-05-19 湖南大学 Illumination algorithm of augmented reality system based on dynamic light map
CN204203552U (en) * 2014-10-31 2015-03-11 成都理想境界科技有限公司 With mobile terminal with the use of headset equipment
CN105405168A (en) * 2015-11-19 2016-03-16 青岛黑晶信息技术有限公司 Method and apparatus for implementing three-dimensional augmented reality
US20170045746A1 (en) * 2013-05-17 2017-02-16 Castar, Inc. Virtual reality attachment for a head mounted display
CN206301087U (en) * 2016-12-30 2017-07-04 广州邦士度眼镜有限公司 A kind of new AR intelligent glasses
CN107134005A (en) * 2017-05-04 2017-09-05 网易(杭州)网络有限公司 Illumination adaptation method, device, storage medium, processor and terminal


Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110460892A (en) * 2018-05-08 2019-11-15 日本聚逸株式会社 Dynamic image dissemination system, dynamic image distribution method and recording medium
CN110460892B (en) * 2018-05-08 2022-06-14 日本聚逸株式会社 Moving image distribution system, moving image distribution method, and recording medium
CN110852143A (en) * 2018-08-21 2020-02-28 脸谱公司 Interactive text effects in augmented reality environments
CN110852143B (en) * 2018-08-21 2024-04-09 元平台公司 Interactive text effects in an augmented reality environment
CN111866489A (en) * 2019-04-29 2020-10-30 浙江开奇科技有限公司 Method for realizing immersive panoramic teaching
CN110166760A (en) * 2019-05-27 2019-08-23 浙江开奇科技有限公司 Image treatment method and terminal device based on panoramic video image
CN110354500A (en) * 2019-07-15 2019-10-22 网易(杭州)网络有限公司 Effect processing method, device, equipment and storage medium
CN112449210A (en) * 2019-08-28 2021-03-05 北京字节跳动网络技术有限公司 Sound processing method, sound processing device, electronic equipment and computer readable storage medium
CN113110731B (en) * 2019-12-25 2023-07-14 华为技术有限公司 Method and device for generating media content
CN113110731A (en) * 2019-12-25 2021-07-13 华为技术有限公司 Method and device for generating media content
CN111340684A (en) * 2020-02-12 2020-06-26 网易(杭州)网络有限公司 Method and device for processing graphics in game
CN111340684B (en) * 2020-02-12 2024-03-01 网易(杭州)网络有限公司 Method and device for processing graphics in game
CN111311757B (en) * 2020-02-14 2023-07-18 惠州Tcl移动通信有限公司 Scene synthesis method and device, storage medium and mobile terminal
CN111311757A (en) * 2020-02-14 2020-06-19 惠州Tcl移动通信有限公司 Scene synthesis method and device, storage medium and mobile terminal
CN111932641B (en) * 2020-09-27 2021-05-14 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
WO2022062577A1 (en) * 2020-09-27 2022-03-31 北京达佳互联信息技术有限公司 Image processing method and apparatus
CN111932641A (en) * 2020-09-27 2020-11-13 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
US11610364B2 (en) 2020-09-27 2023-03-21 Beijing Dajia Internet Information Technology Co., Ltd. Method, device, and storage medium for applying lighting to a rendered object in a scene
WO2024067159A1 (en) * 2022-09-28 2024-04-04 北京字跳网络技术有限公司 Video generation method and apparatus, electronic device, and storage medium
CN116245998A (en) * 2023-05-09 2023-06-09 北京百度网讯科技有限公司 Rendering map generation method and device, and model training method and device
CN116245998B (en) * 2023-05-09 2023-08-29 北京百度网讯科技有限公司 Rendering map generation method and device, and model training method and device

Also Published As

Publication number Publication date
CN107871339B (en) 2019-12-24

Similar Documents

Publication Publication Date Title
CN107871339A (en) Rendering method and device for the color effect of a virtual object in a video
CN107845132A (en) Rendering method and device for the color effect of a virtual object
CN107749075A (en) Generation method and device for the shadow effect of a virtual object in a video
US10979640B2 (en) Estimating HDR lighting conditions from a single LDR digital image
CN107749076A (en) Method and apparatus for generating realistic illumination in an augmented reality scene
Hold-Geoffroy et al. Deep outdoor illumination estimation
US10937216B2 (en) Intelligent camera
US10957026B1 (en) Learning from estimated high-dynamic range all weather lighting parameters
CN107705353A (en) Rendering method and device for the shadow effect of virtual objects in augmented reality
CN114125310B (en) Photographing method, terminal device and cloud server
CN110168616A (en) Superposition contrast control in augmented reality display
US10672104B2 (en) Method and apparatus for generating an extrapolated image based on object detection
CN103262126A (en) Image processor, lighting processor and method therefor
CN109064544A (en) Light and shadow processing method, apparatus and electronic device for virtual objects in panoramic video
CN110288534A (en) Image processing method, device, electronic equipment and storage medium
CN108377398A (en) Infrared-based AR imaging method, system and electronic device
CN109360222A (en) Image segmentation method, device and storage medium
WO2023116396A1 (en) Rendering display method and apparatus, computer device, and storage medium
US20160150143A1 (en) Systems and methods for estimating sky light probes for outdoor images
CN109118571A (en) Method, apparatus and electronic device for rendering virtual objects based on lighting information
CN110266926A (en) Image processing method, device, mobile terminal and storage medium
CN110266955A (en) Image processing method, device, electronic equipment and storage medium
CN106101574B (en) Control method, device and mobile terminal for augmented reality images
Salamon et al. Computational light painting using a virtual exposure
CN112995635B (en) Image white balance processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant