CN107871339A - Rendering method and device for color effect of virtual object in video - Google Patents

Info

Publication number
CN107871339A
Authority
CN
China
Prior art keywords
video
scene
virtual object
video scene
sleeve
Legal status
Granted
Application number
CN201711090151.0A
Other languages
Chinese (zh)
Other versions
CN107871339B (en)
Inventor
Hugh Ian Roy
Li Jianyi
Current Assignee
Pacific Future Technology Hangzhou Co., Ltd.
Original Assignee
Pacific Future Technology (Shenzhen) Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Pacific Future Technology (Shenzhen) Co., Ltd.
Priority to CN201711090151.0A
Publication of CN107871339A
Application granted
Publication of CN107871339B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the present invention provides a rendering method and device for the color effect of a virtual object in a video, belonging to the field of augmented reality. The method includes: identifying the scene identifier of a currently played video; searching for the virtual object and the baking map corresponding to the scene identifier, wherein the baking map is generated in advance according to attribute information of the video scene; and rendering the virtual object according to the baking map. The embodiment of the present invention realizes the fusion of the virtual object's color effect with the video scene, giving the user a stronger sense of reality, and renders by means of baking maps, thereby improving rendering efficiency and saving CPU resources.

Description

Rendering method and device for color effect of virtual object in video
Technical Field
The invention relates to the technical field of augmented reality, in particular to a method and a device for rendering a color effect of a virtual object in a video.
Background
Augmented Reality (AR) is a technology in which a computer-generated virtual object is superimposed on a real scene by means of hardware and software. With an AR device, a user can perceive the presence of a virtual object in the real world. For example, when the user wears a head-mounted AR device, real environment data are collected through a camera device in the equipment, and the virtual effects generated by a computer are then fused with the real environment data. Application scenarios are diverse; for instance, in the user's home, a head-mounted AR helmet can fuse virtual decoration effects with the real home environment. In fact, an AR helmet may adopt a design structure similar to that of a common VR helmet on the market; when a smartphone is used together with a specially made lens to play a fully virtual picture, the AR helmet functions as a VR device.
With the rapid development of video technology, video-based virtual-real fusion scenes and the associated illumination-effect generation technology have become the development trend of augmented reality. The AR device is the necessary equipment for acquiring video information, yet existing AR devices have the following software and hardware defects:
because the virtual object is generated in advance by a computer and cannot acquire information about the real environment in the video, its light and shadow effects cannot be fused and matched with that environment. This easily gives the user an unreal impression and greatly reduces the realism of the virtual object's lighting effect.
In existing AR helmets, installing and removing the mobile phone is inconvenient, and the phone's surface is easily scratched during installation and removal. The clamping plates press against the phone's rear shell for long periods, which is unfavorable for heat dissipation. Phones of different screen sizes and thicknesses require complicated structures for adaptive adjustment; such structures cannot adjust the clamping force either, are likewise bad for heat dissipation, and are prone to shaking and wobbling during use. This weakens the user's sense of immersion and may even cause discomfort such as dizziness.
Disclosure of Invention
Embodiments of the present invention provide a method and a device for rendering the color effect of a virtual object in a video, so as to solve at least one of the problems in the related art.
An embodiment of the present invention provides a method for rendering a color effect of a virtual object in a video, including:
identifying a scene identifier of a currently played video;
searching a virtual object and a baking map corresponding to the scene identifier, wherein the baking map is generated in advance according to the attribute information of the video scene;
rendering the virtual object according to the baking map.
Optionally, the method further comprises: identifying a video scene included in the video, and recording a scene identification of the video scene.
Optionally, the method further comprises: acquiring a virtual object corresponding to the video scene; analyzing attribute information in the video scene, baking and rendering the virtual object according to the attribute information, and generating a baking map corresponding to the video scene.
Optionally, the attribute information includes light source information and color information, the analyzing the attribute information in the video scene, and performing baking rendering on the virtual object according to the attribute information includes: determining light source information corresponding to the video scene; and determining corresponding color information of the virtual object in the video scene according to the light source information.
Optionally, the determining, according to the light source information, color information corresponding to the virtual object in the video scene includes: determining a target object influencing the color effect of the virtual object in the video scene according to the light source information and the position of the virtual object in the video scene; and acquiring color information of each pixel point on the target object, and determining the corresponding color information of the virtual object in the video scene according to the color information.
Optionally, the method is applied to an AR helmet comprising a clamping portion, a lens portion and a head-mounted portion,
the clamping portion comprises a base, a base plate and an inner frame, the base plate and the inner frame are both arranged on the base, the inner frame is arranged on the side close to the lens portion, the base plate is arranged on the side far away from the lens portion, a clamping device is arranged on the base plate, the clamping device comprises an installation hole as well as an installation cover, a first bolt, a guide sleeve and a guide pin which are arranged in the installation hole, the installation hole comprises a first section and a second section which are adjacent, the inner diameter of the first section is smaller than that of the second section, an end cover is arranged on the outer end of the second section, an adjusting ring is arranged at the end part of the second section close to the first section, a limit flange which cooperates with the adjusting ring and limits the moving stroke of the guide sleeve is arranged at the inner end of the guide sleeve, a shaft hole is arranged on the installation cover, the first bolt is installed on the installation cover through the shaft hole, the outer end part of the first bolt is connected with a first screwing piece, the inner end part of the first bolt is in threaded connection with the inner end part of the guide sleeve installed in the installation hole, the outer end part of the guide sleeve is provided with a pressing end for pressing a mobile phone, the outer wall of the guide sleeve is provided with a groove matched with the guide pin along the horizontal direction, one end of the guide pin is installed on the inner wall of the installation hole, and the other end of the guide pin is installed in the groove;
the mobile phone acquires video information of a real scene through a camera device carried by the mobile phone, plays the video, identifies a scene identifier of the currently played video, searches a virtual object and a baking chartlet corresponding to the scene identifier, wherein the baking chartlet is generated in advance according to attribute information of the video scene, and renders the virtual object according to the baking chartlet.
Optionally, in the AR helmet, the clamping portion is in sliding fit with the lens portion; the lens portion is provided with a mounting plate, the clamping portion is mounted on the mounting plate, the mounting plate is provided with a plurality of rollers at uniform intervals along its width direction, and the clamping portion has a locking structure for locking the guide sleeve and the rollers.
Optionally, the locking structure of the AR helmet comprises a return spring, together with a sleeve and a threaded sleeve which are bilaterally symmetric about the guide sleeve and arranged below it. The upper parts of the inner ends of the sleeve and the threaded sleeve have first locking portions matched in size with the outer wall of the lower part of the guide sleeve, and the lower parts of their inner ends have second locking portions matched in size with the rollers. The inner end of the sleeve is provided with a first spring groove and the inner end of the threaded sleeve with a second spring groove; one end of the return spring is mounted in the first spring groove and the other end in the second spring groove. A second bolt is mounted inside the sleeve and the threaded sleeve, the sleeve and the threaded sleeve are connected by the second bolt and a locking nut matched with it, and at least one end of the second bolt is provided with a second screwing piece.
Optionally, a plurality of support bars extend from the pressing end of the AR helmet, the end of each support bar is provided with a support point connected with the rear shell of the mobile phone, a micro fan is installed on the support bar, the micro fan is provided with a touch switch, the support bar is provided with at least one through hole, a driving piece made of shape memory alloy is installed in the through hole, one end of the driving piece is connected with the touch switch and the other end abuts against the rear shell of the mobile phone, the driving piece is in a martensite state when the temperature of the rear shell reaches an early warning value and the micro fan is turned on through the touch switch, and the driving piece is in an austenite state when the temperature of the rear shell is lower than the early warning value and the micro fan is turned off through the touch switch;
the base plate is provided with a groove matched with the first screwing piece, and the first screwing piece is located in the groove.
Another aspect of the embodiments of the present invention provides a device for rendering a color effect of a virtual object in a video, including:
the identification module is used for identifying the scene identification of the currently played video;
the searching module is used for searching a virtual object and a baking map corresponding to the scene identifier, wherein the baking map is generated in advance according to the attribute information of the video scene;
and the rendering module is used for rendering the virtual object according to the baking map.
Optionally, the apparatus further comprises: and the recording module is used for identifying the video scenes included in the video and recording the scene identifiers of the video scenes.
Optionally, the apparatus further comprises: the acquisition module is used for acquiring a virtual object corresponding to the video scene; and the analysis module is used for analyzing the attribute information in the video scene, baking and rendering the virtual object according to the attribute information and generating a baking map corresponding to the video scene.
Optionally, the attribute information includes light source information and color information, and the analysis module is configured to determine light source information corresponding to the video scene; and determining corresponding color information of the virtual object in the video scene according to the light source information.
Optionally, the analysis module is further configured to determine, according to the light source information and the position of the virtual object in the video scene, a target object that affects a color effect of the virtual object in the video scene; and acquiring color information of each pixel point on the target object, and determining the corresponding color information of the virtual object in the video scene according to the color information.
Another aspect of an embodiment of the present invention provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method for rendering color effects of virtual objects in video according to any of the embodiments of the invention.
According to the technical scheme above, the rendering method, rendering device and electronic equipment for the color effect of a virtual object in a video provided by the embodiments of the present invention identify the scene identifier of the currently played video; search for the virtual object and the baking map corresponding to the scene identifier, wherein the baking map is generated in advance according to attribute information of the video scene; and render the virtual object according to the baking map. The embodiments of the present invention realize the fusion of the virtual object's color effect with the video scene, give the user a stronger sense of reality, and render by means of baking maps, thereby improving rendering efficiency and saving CPU resources. Meanwhile, the mechanical structure of the AR helmet implementing the method is well designed, so that the mobile phone can be put in and taken out more easily and dissipates heat better, shaking and wobbling are less likely during use, and the user's immersion and sense of reality are enhanced.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; a person skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a flowchart of a method for rendering color effects of virtual objects in a video according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for rendering color effects of virtual objects in a video according to an embodiment of the present invention;
FIG. 3 is a block diagram of an apparatus for rendering color effects of virtual objects in a video according to an embodiment of the present invention;
FIG. 4 is a block diagram of an apparatus for rendering color effects of virtual objects in a video according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a hardware structure of an electronic device for executing a method for rendering color effects of virtual objects in a video according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an AR helmet according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a clamping device of an AR helmet according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a locking structure of an AR helmet according to an embodiment of the present invention;
fig. 9 is a schematic structural view of a support bar of an AR helmet according to an embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the embodiments of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention shall fall within the scope of the protection of the embodiments of the present invention.
The execution subject of the embodiments of the present invention is an electronic device, including but not limited to a mobile phone, a tablet computer, a head-mounted AR (augmented reality) device and AR glasses. To better explain the following embodiments, the application scenario of the present invention is described first. When a user watches a video file with the electronic device, a computer-generated virtual object is presented in addition to the real content of the video file; the virtual object and the real content coexist in the same video frame, and in terms of perception and experience an augmented reality environment integrating the virtual object with the real content is presented to the user.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Fig. 1 is a flowchart of a rendering method for color effects of virtual objects in a video according to an embodiment of the present invention. As shown in fig. 1, a method for rendering a color effect of a virtual object in a video according to an embodiment of the present invention specifically includes:
s101, identifying a scene identification of a currently played video.
The method for rendering the color effect of a virtual object in a video provided by the embodiment of the present invention is applied to an augmented reality scenario: when a video file is played in this scenario, the color effect of the virtual object in the video can be rendered. The virtual object is produced by simulation on the augmented reality electronic device, and the user can experience the augmented reality effect corresponding to the video by means of that device.
A video scene generally means the video content captured by a single shot: it is continuous, and its content remains substantially the same throughout. While a virtual object stays within one video scene, its color effect over the multiple video frames of that scene is essentially unchanged, because the video content is approximately the same. Therefore, the color effect of the virtual object in a video scene can be generated once from the attribute information of that scene, avoiding a frame-by-frame determination of color information and improving efficiency. In particular, the attribute information may include the light source information and color information of the video scene.
Before this step, it is necessary to identify the video scenes included in the video and record the scene identifiers of the video scenes.
Specifically, the video scenes included in the video can be obtained by comparing video frames a preset interval apart. As an optional implementation of this embodiment, the first frame of the video is taken as the first frame of the first scene. Starting from this frame, pairs of frames a preset interval apart are selected in turn, and a feature point extraction algorithm (for example SIFT or SURF) is applied to each video frame to obtain feature points on the different objects in the frame. A feature point may be a pixel with a distinctive property, such as a corner point or an intersection point on an edge in the image, or a pixel whose neighborhood has certain statistical features; each feature point carries a multi-dimensional feature vector describing these properties. For each pair of feature points from the two video frames, the Euclidean distance between their feature vectors is compared with a preset threshold: if the distance is smaller than the threshold, the two feature points match; otherwise they do not. When the two video frames do not match, the earlier of the two frames is taken as the last frame of the first video scene, and the later one as the first frame of the second video scene. The first frame of the second scene then becomes the new starting point, and feature point matching between frames continues, determining the last frame of the second scene and the first frame of the third scene, and so on. All frames from the first frame of the second scene to its last frame (both inclusive) are the video frames corresponding to the second scene. After the comparison is completed in this way, the video scenes included in the video are determined; a minimal sketch follows.
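As an illustration only, this scene-boundary comparison might be sketched in Python with OpenCV's SIFT implementation; the sampling interval, distance threshold and match-ratio criterion below are illustrative assumptions, since the patent leaves these values open.

```python
import cv2

def frames_match(frame_a, frame_b, dist_thresh=250.0, min_ratio=0.3):
    """Return True if the two frames likely belong to the same video scene."""
    sift = cv2.SIFT_create()
    _, desc_a = sift.detectAndCompute(cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY), None)
    _, desc_b = sift.detectAndCompute(cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY), None)
    if desc_a is None or desc_b is None:
        return False
    # Nearest-neighbour matching; two feature points match when the Euclidean
    # distance between their feature vectors is below the preset threshold.
    matches = cv2.BFMatcher(cv2.NORM_L2).match(desc_a, desc_b)
    good = [m for m in matches if m.distance < dist_thresh]
    return len(good) >= min_ratio * len(desc_a)

def segment_scenes(frames, interval=10):
    """Split frames into scenes; returns (first_index, last_index) pairs."""
    scenes, start, i = [], 0, 0
    while i + interval < len(frames):
        if not frames_match(frames[i], frames[i + interval]):
            scenes.append((start, i))   # earlier frame closes the current scene
            start = i + interval        # later frame opens the next scene
        i += interval
    scenes.append((start, len(frames) - 1))
    return scenes
```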
After video scenes included in the video are identified, a scene identifier can be determined for each video scene and recorded, and the scene identifier is used as a unique identifier of the corresponding video scene. In this step, a video scene recognition model may be trained in advance, and a currently played video is input into the model to obtain a corresponding video scene and a scene identifier of the video scene.
S102, searching a virtual object and a baking map corresponding to the scene identification.
The baking map is generated in advance according to the attribute information of the video scene. Before this step, a baking map of the virtual object in each video scene must be generated. When a light source illuminates the video scene, reflected light (including a reflected color) and shadows form on the surface of the virtual object; through baking, these can be rendered into the form of a map. When the video is played, the baking map is simply applied to the corresponding virtual object, and the color effect of the virtual object is obtained without computing the illumination information of the video scene in real time during playback. The baking map is a texture map of the color effect produced on the virtual object by the objects in the video scene when the virtual object is illuminated. Its generation may include: acquiring the virtual object corresponding to the video scene; analyzing the attribute information in the video scene, performing baked rendering on the virtual object according to the attribute information, and generating the baking map corresponding to the video scene.
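The offline baking pass described here can be summarized in a short sketch. The helper callables (`virtual_object_for`, `analyze_attributes`, `bake`) are hypothetical stand-ins for the preset object lookup, the attribute analysis and the baking renderer; the patent does not prescribe their interfaces.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List, Tuple

@dataclass
class VideoScene:
    scene_id: str
    frames: List[Any]          # the video frames belonging to this scene

# scene_id -> (virtual object, baking map); filled once, before playback
baked_maps: Dict[str, Tuple[Any, Any]] = {}

def prepare_baking_maps(scenes: List[VideoScene],
                        virtual_object_for: Callable,
                        analyze_attributes: Callable,
                        bake: Callable) -> None:
    for scene in scenes:
        obj = virtual_object_for(scene.scene_id)          # preset per scene
        light_info, color_info = analyze_attributes(scene, obj)
        # Bake once per scene instead of computing illumination per frame.
        baked_maps[scene.scene_id] = (obj, bake(obj, light_info, color_info))
```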
Specifically, the virtual object is an object superimposed in the video scene and viewable by the user through the electronic device. It may include virtual content such as physical images (e.g., images of physical objects such as people, animals or articles), special effects (e.g., smoke, steam or motion-trajectory effects) and natural phenomena (e.g., rain, snow, a rainbow or a sun halo), and it may also replace a certain part of a person, animal, article, piece of information and the like in the video scene. The virtual object may be static or dynamic, which is not limited here. The virtual object corresponding to a video scene may be one matched with the characteristics of the video scene itself, or one brought out by the combination of the video scene and the surrounding scenes. Optionally, the virtual objects corresponding to different video scenes, and their positions in the current video scene, may be preset.
It should be noted that the attribute information includes, but is not limited to, light source information and color information of the video scene. In this step, light source information corresponding to the video scene is determined, color information corresponding to the virtual object in the video scene is determined according to the light source information, and finally, baking rendering is performed on the virtual object according to the light source information and the color information.
Specifically, the light source information includes the illumination intensity, the light source position, and so on. By analyzing the video scene picture, the setting, geographic location, current season, time and the like of the video scene are obtained, and the objects appearing in the scene can be analyzed to determine the setting. For example, if the video scene is determined to be indoors, a target object serving as the light source in the scene is searched for (e.g., a lamp, the screen in a movie theater, or a glowing object in darkness); the position of that object in the video scene is the light source position, the illumination intensity of the light source is then determined from the brightness of the video scene, and the light source position and illumination intensity together form the illumination information. If the video scene is determined to be outdoors, the illumination information of the outdoor environment is determined by the illumination parameters of the sun. These must be determined from latitude information and the current time, because sunlight differs not only from region to region but also in the angle and height of the sun's path relative to the ground in different seasons and at different moments of the day.
Optionally, the geographic location is determined from objects appearing in the video scene (for example, if Beijing railway station appears, the real scene is determined to be in Beijing), and the current season is inferred (for example, from the scenery and/or people's clothing in the scene); the longitude and latitude of the location are then determined from the geographic location. In addition, the current time can be estimated from the brightness of the video scene picture, so that the illumination parameters of the sun (the elevation angle and the azimuth angle) can be calculated. There are multiple calculation methods for the elevation and azimuth angles, and the invention is not limited to any of them; one common choice is sketched below. Specifically, the light source information of the sun may include the illumination intensity, the position of the sunlight, and so on.
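As one common choice (an assumption, since the patent does not fix a method), the elevation and azimuth can be computed from the latitude, day of year and solar time using the standard declination and hour-angle formulas:

```python
import math

def sun_position(latitude_deg, day_of_year, solar_hour):
    """Return (elevation, azimuth) of the sun in degrees; azimuth from north."""
    lat = math.radians(latitude_deg)
    # Cooper's approximation of the solar declination for the given day
    decl = math.radians(23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0)))
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))   # 15 deg per hour from noon
    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    elev = math.asin(sin_elev)
    cos_azim = ((math.sin(decl) - sin_elev * math.sin(lat))
                / (math.cos(elev) * math.cos(lat)))
    azim = math.acos(max(-1.0, min(1.0, cos_azim)))
    if solar_hour > 12.0:                                   # afternoon half of the day
        azim = 2.0 * math.pi - azim
    return math.degrees(elev), math.degrees(azim)
```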
As an optional implementation manner of this embodiment, determining, according to the light source information, color information corresponding to the virtual object in the video scene includes: determining a target object influencing the color effect of the virtual object in the video scene according to the light source information and the position of the virtual object in the video scene; and acquiring color information of each pixel point on the target object, and determining the corresponding color information of the virtual object in the video scene according to the color information.
Specifically, after the position of the light source in the video scene is obtained, the incident direction of the light relative to each object in the scene can be determined. When light strikes an object along that incident direction, reflected light and a shadow form on the object's surface. Once the position of the virtual object is determined, if the reflected light of some object passes right through the position of the virtual object, i.e., forms indirect illumination on its surface, the color of that object will affect the color effect of the virtual object's surface. For example, if a red billboard casts reflected light onto the virtual object, the surface of the virtual object takes on the color effect of reflected red light. Such an object is therefore taken as a target object affecting the color effect of the virtual object.
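A minimal sketch of this target-object test: reflect the incident light direction about the surface normal of a candidate object and check whether the reflected ray points toward the virtual object. The vector representation and the angular tolerance are assumptions made for illustration.

```python
import math
import numpy as np

def is_target_object(light_pos, surface_point, surface_normal, object_pos,
                     tol_deg=15.0):
    """True if light reflected at surface_point can reach the virtual object."""
    n = surface_normal / np.linalg.norm(surface_normal)
    incident = surface_point - light_pos
    incident = incident / np.linalg.norm(incident)
    reflected = incident - 2.0 * np.dot(incident, n) * n    # mirror reflection
    to_object = object_pos - surface_point
    to_object = to_object / np.linalg.norm(to_object)
    # The object is a target when the reflected ray falls within a small cone
    # around the direction from the surface point to the virtual object.
    return float(np.dot(reflected, to_object)) > math.cos(math.radians(tol_deg))
```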
After the target object is determined, if the color of the target object is a pure color, that is, the color information of each pixel point on the target object is the same, the color corresponding to the virtual object in the video scene can be directly determined according to the color information of any pixel point. Optionally, the color information may be an RGB value or a gray value, and the present invention is not limited herein.
If the color of the target object is not a pure color, that is, the color information differs between the pixel points on the target object, a weighted calculation can be performed over the color information of the pixel points, and the color corresponding to the weighted value is determined as the color of the virtual object in the video scene. Specifically, when the color information is an RGB value, the RGB values of the pixel points on the target object are obtained; when the color information is a gray value, the gray values of the pixel points are obtained. The color parameters of the pixel points are then weighted to obtain a weighted value, which is the color information of the virtual object in the video scene.
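A minimal sketch of this weighted computation, assuming the target object's pixels are given as an (N, 3) RGB array. Uniform weights are used by default, since the patent leaves the weighting scheme open; for a pure color, any single pixel gives the same result.

```python
import numpy as np

def reflection_color(target_pixels_rgb, weights=None):
    """Weighted mean color of the target object's pixels (RGB, shape (N, 3))."""
    pixels = np.asarray(target_pixels_rgb, dtype=np.float64)
    if weights is None:
        weights = np.ones(len(pixels))      # uniform weighting as a default
    weights = np.asarray(weights, dtype=np.float64)
    return (pixels * weights[:, None]).sum(axis=0) / weights.sum()
```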
After the illumination information and color information of the video scene are obtained, the position and intensity of the light reflected from the target object onto the surface of the virtual object can be derived from the illumination information. By aggregating the reflected-light positions, the position and extent of the indirect reflection formed by the target object on the surface of the virtual object are determined: the position is used as the color rendering position, the extent as the color rendering area, and the intensity of the reflected light as the color rendering intensity. The color is then applied to the virtual object according to this color rendering information, completing the baked rendering of the virtual object and generating the baking map corresponding to the video scene.
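A minimal sketch of this baking step, blending the reflected color into the baking map over the color rendering area with the color rendering intensity; the circular area and the linear blend are illustrative simplifications, not choices fixed by the patent.

```python
import numpy as np

def bake_reflection(texture, center_uv, radius_px, color_rgb, intensity):
    """texture: (H, W, 3) float array in [0, 1], modified in place."""
    h, w, _ = texture.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = int(center_uv[1] * h), int(center_uv[0] * w)      # rendering position
    mask = (ys - cy) ** 2 + (xs - cx) ** 2 <= radius_px ** 2   # rendering area
    k = float(np.clip(intensity, 0.0, 1.0))                    # rendering intensity
    texture[mask] = (1.0 - k) * texture[mask] + k * np.asarray(color_rgb)
    return texture
```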
S103, rendering the virtual object according to the baking map.
After the baking map of the virtual object corresponding to the scene identifier is found in step S102, the baking map can be attached to the virtual object, completing its rendering. Because the baking map already contains the color effect of the virtual object, the illumination information of each video scene does not need to be computed in real time during playback, which greatly improves rendering efficiency and saves CPU resources.
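Put together, the playback path of steps S101 to S103 reduces to a dictionary lookup plus a texture assignment. `identify_scene` and the renderer interface below are hypothetical placeholders for the scene recognition model and compositing stage.

```python
def render_frame(frame, baked_maps, identify_scene, renderer):
    scene_id = identify_scene(frame)          # S101: scene identifier
    entry = baked_maps.get(scene_id)          # S102: virtual object + baking map
    if entry is None:
        return frame                          # no virtual content for this scene
    obj, baking_map = entry
    obj.set_texture(baking_map)               # S103: no real-time lighting pass
    return renderer.composite(frame, obj)
```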
The embodiment of the present invention identifies the scene identifier of the currently played video; searches for the virtual object and the baking map corresponding to the scene identifier, wherein the baking map is generated in advance according to the attribute information of the video scene; and renders the virtual object according to the baking map. The embodiment realizes the fusion of the virtual object's color effect with the video scene, gives the user a stronger sense of reality, and renders by means of baking maps, thereby improving rendering efficiency and saving CPU resources.
Fig. 2 is a flowchart of a rendering method for color effects of virtual objects in a video according to an embodiment of the present invention. As shown in fig. 2, this embodiment is a specific implementation scheme of the embodiment shown in fig. 1, and therefore details of specific implementation methods and beneficial effects of each step in the embodiment shown in fig. 1 are not repeated, and the method for rendering color effects of virtual objects in a video provided in the embodiment of the present invention specifically includes:
s201, identifying a video scene included in the video, and recording a scene identifier of the video scene.
S202, acquiring a virtual object corresponding to the video scene.
And S203, analyzing the attribute information in the video scene, baking and rendering the virtual object according to the attribute information, and generating a baking map corresponding to the video scene.
It should be noted that the attribute information includes, but is not limited to, light source information and color information of the video scene. In this step, light source information corresponding to the video scene is determined, color information corresponding to the virtual object in the video scene is determined according to the light source information, and finally, baking rendering is performed on the virtual object according to the light source information and the color information.
And S204, identifying the scene identification of the current playing video.
S205, searching a virtual object and a baking map corresponding to the scene identification.
Wherein the baking map is generated in advance according to the attribute information of the video scene.
S206, rendering the virtual object according to the baking map.
The embodiment of the present invention identifies the scene identifier of the currently played video; searches for the virtual object and the baking map corresponding to the scene identifier, wherein the baking map is generated in advance according to the attribute information of the video scene; and renders the virtual object according to the baking map. The embodiment realizes the fusion of the virtual object's color effect with the video scene, gives the user a stronger sense of reality, and renders by means of baking maps, thereby improving rendering efficiency and saving CPU resources.
Fig. 3 is a structural diagram of a rendering apparatus for color effects of virtual objects in a video according to an embodiment of the present invention. As shown in fig. 3, the apparatus specifically includes: an identification module 1000, a lookup module 2000, and a rendering module 3000.
The identifying module 1000 is configured to identify a scene identifier of a currently played video; the searching module 2000 is configured to search for a virtual object and a baked map corresponding to the scene identifier, where the baked map is generated in advance according to attribute information of the video scene; the rendering module 3000 is configured to render the virtual object according to the baking map.
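A minimal sketch of how the three modules of fig. 3 might be wired together; the class and call signatures are illustrative, not part of the patent.

```python
class RenderingApparatus:
    def __init__(self, identification_module, lookup_module, rendering_module):
        self.identify = identification_module    # identification module 1000
        self.lookup = lookup_module              # searching module 2000
        self.render = rendering_module           # rendering module 3000

    def process(self, video_frame):
        scene_id = self.identify(video_frame)
        virtual_object, baking_map = self.lookup(scene_id)
        return self.render(virtual_object, baking_map)
```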
The rendering apparatus for color effects of virtual objects in a video according to an embodiment of the present invention is specifically configured to execute the method provided in the embodiment shown in fig. 1, and the implementation principle, the method, the function and the like of the method are similar to those of the embodiment shown in fig. 1, and are not described herein again.
Fig. 4 is a structural diagram of a rendering apparatus for color effects of virtual objects in a video according to an embodiment of the present invention. As shown in fig. 4, the apparatus specifically includes: an identification module 1000, a lookup module 2000, and a rendering module 3000.
The identifying module 1000 is configured to identify a scene identifier of a currently played video; the searching module 2000 is configured to search for a virtual object and a baked map corresponding to the scene identifier, where the baked map is generated in advance according to attribute information of the video scene; the rendering module 3000 is configured to render the virtual object according to the baking map.
Optionally, the apparatus further comprises: a recording module 4000.
The recording module 4000 is configured to identify a video scene included in the video, and record a scene identifier of the video scene.
Optionally, the apparatus further comprises: an acquisition module 5000 and an analysis module 6000.
The obtaining module 5000 is configured to obtain a virtual object corresponding to the video scene; the analysis module 6000 is configured to analyze attribute information in the video scene, perform baking rendering on the virtual object according to the attribute information, and generate a baking map corresponding to the video scene.
Optionally, the attribute information includes light source information and color information, and the analysis module 6000 is configured to determine light source information corresponding to the video scene; and determining corresponding color information of the virtual object in the video scene according to the light source information.
Optionally, the analysis module 6000 is further configured to determine, according to the light source information and the position of the virtual object in the video scene, a target object that affects a color effect of the virtual object in the video scene; and acquiring color information of each pixel point on the target object, and determining the corresponding color information of the virtual object in the video scene according to the color information.
The rendering apparatus for color effects of virtual objects in a video according to an embodiment of the present invention is specifically configured to execute the method provided in the embodiment shown in fig. 1 and/or fig. 2, and the implementation principle, the method, the function and the like of the rendering apparatus are similar to those of the embodiment shown in fig. 1 and/or fig. 2, and are not described herein again.
The rendering apparatus for color effects of virtual objects in videos according to embodiments of the present invention may be disposed independently in the electronic device as a software or hardware functional unit, or may be integrated in a processor as one of its functional modules, to execute the rendering method for color effects of virtual objects in videos according to embodiments of the present invention.
Fig. 5 is a schematic diagram of a hardware structure of an electronic device executing the method for rendering color effects of virtual objects in a video according to the embodiment of the present invention. As shown in fig. 5, the electronic device includes:
one or more processors 5100 and a memory 5200; one processor 5100 is taken as an example in fig. 5.
The apparatus for performing the method for rendering color effects of virtual objects in a video may further include: an input device 5300 and an output device 5400.
The processor 5100, the memory 5200, the input device 5300 and the output device 5400 may be connected by a bus or by other means; connection by a bus is taken as the example in fig. 5.
The memory 5200, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the method for rendering color effects of virtual objects in video in the embodiments of the present invention. By running the non-volatile software programs, instructions and modules stored in the memory 5200, the processor 5100 executes the various functional applications and data processing of the server, i.e., implements the method for rendering the color effect of a virtual object in the video.
The memory 5200 may include a program storage area and a data storage area; the program storage area may store an operating system and the application programs required for at least one function, and the data storage area may store data created by the use of the rendering device for virtual object color effects in video according to an embodiment of the present invention, and the like. In addition, the memory 5200 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 5200 optionally includes memory located remotely relative to the processor 5100, and such remote memory may be connected via a network to the rendering device for virtual object color effects in video. Examples of such networks include but are not limited to the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 5300 may receive input numeric or character information and generate key signal inputs related to user settings and function control of a rendering device for virtual object color effects in a video. The input device 5300 may include a pressing module or the like.
The one or more modules are stored in the memory 5200 and, when executed by the one or more processors 5100, perform a rendering method for virtual object color effects in the video.
The electronic device of embodiments of the present invention exists in a variety of forms, including but not limited to:
(1) Mobile communication devices: characterized by mobile communication capability, with voice and data communication as the primary goal. Such terminals include smartphones (e.g., the iPhone), multimedia phones, feature phones and low-end phones.
(2) Ultra-mobile personal computer devices: these belong to the category of personal computers, have computing and processing functions, and generally also have mobile internet access. Such terminals include PDA, MID and UMPC devices, such as the iPad.
(3) Portable entertainment devices: these devices can display and play multimedia content. They include audio and video players (e.g., the iPod), handheld game consoles, electronic books, smart toys and portable car navigation devices.
(4) Servers: similar in architecture to a general-purpose computer, but with higher requirements on processing capability, stability, reliability, security, scalability, manageability and the like, because highly reliable services must be provided.
(5) Other electronic devices with data interaction functions.
The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The embodiment of the present invention provides a non-transitory computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions, and when the computer-executable instructions are executed by an electronic device, the electronic device is caused to execute a method for rendering a color effect of a virtual object in a video in any method embodiment described above.
Embodiments of the present invention provide a computer program product, where the computer program product includes a computer program stored on a non-transitory computer readable storage medium, where the computer program includes program instructions, where the program instructions, when executed by an electronic device, cause the electronic device to perform a method for rendering a color effect of a virtual object in a video in any of the above-mentioned method embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly also by hardware. Based on this understanding, the above technical solutions, or the portions of them that contribute to the prior art, may be embodied in the form of a software product. The software product can be stored on a computer-readable storage medium, which includes any mechanism for storing or transmitting information in a form readable by a computer; for example, a machine-readable medium includes read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash storage media, and electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals). The computer software product includes instructions for causing a computing device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the various embodiments or in portions of the embodiments.
In another embodiment, fig. 6 shows an AR helmet as an implementation device of the rendering method for the color effect of a virtual object in a video. The AR helmet includes a clamping portion 1, a lens portion 2 and a head-mounted portion 3. The clamping portion 1 includes a base 101, a substrate 102 and an inner frame 103, the substrate 102 and the inner frame 103 both being mounted vertically on the base 101. The substrate 102 is a plate-shaped structure and the inner frame 103 is a frame structure adapted to the lens portion; they are located at the front and rear of the base 101, i.e., the inner frame 103 is disposed on the side close to the lens portion 2 and the substrate 102 on the side far from it, and an electronic device such as a mobile phone is mounted between the substrate 102 and the inner frame 103.
Another improvement of this embodiment is shown in figs. 7 and 8: a clamping device 4 for clamping the mobile phone is arranged on the base plate 101. The clamping device 4 comprises a mounting hole 401, a mounting cover 402, a first bolt 403, a guide sleeve 404, a guide pin 405 and other structures. The mounting hole 401 has a first end far from the inner frame 103 and a second end close to it. Specifically, the mounting hole 401 comprises a first section and a second section which are adjacent, the inner diameter of the first section being smaller than that of the second section; the end cover 402 is mounted on the outer end of the second section, an adjusting ring 407 is mounted at the end of the second section close to the first section, and a limiting flange 408, which cooperates with the adjusting ring 407 and limits the moving stroke of the guide sleeve, is arranged at the inner end of the guide sleeve 404.
The first end is provided with the mounting cover 402, which has a shaft hole 4021; the first bolt 403 is mounted on the mounting cover 402 through the shaft hole 4021. The outer end of the first bolt 403 is connected with a first screwing piece 406, and its inner end is in threaded connection with the inner end of the guide sleeve 404 mounted in the mounting hole 401. The outer end of the guide sleeve 404 is provided with a pressing end 4041 for pressing the mobile phone, the outer wall of the guide sleeve 404 is provided with a groove (not shown) that cooperates with the guide pin 405 in the horizontal direction, one end of the guide pin 405 is mounted on the inner wall of the mounting hole 401, and the other end of the guide pin 405 is mounted in the groove. When a user rotates the first screwing piece 406, the first bolt 403 is driven to rotate, which drives the guide sleeve 404 forward or backward; because of the guide pin, the guide sleeve can only translate forward or backward, so the pressing end 4041 presses the mobile phone against the inner frame 103. This process gives a slow, controlled travel of the pressing end with an adjustable pressing force, avoiding damage to the phone's rear shell. Fixing the phone through the point contacts of the supporting end is superior to the clamping plates or full shells of the prior art, does not affect the phone's heat dissipation, and adapts well to phones of various screen sizes and thicknesses.
The applicant found that some mobile phones do not provide functions such as switching the played program or adjusting the volume while in an AR scene, so most users have to take the phone out of the clamping mechanism whenever such operations are needed. The applicant therefore designed the clamping portion 1 and the lens portion 2 to be in sliding fit: specifically, the lens portion 2 is provided with a mounting plate 201, the clamping portion 1 is mounted on the mounting plate 201, and the mounting plate 201 is provided with a plurality of rollers 2011 at uniform intervals along its width direction. With the clamping portion and the lens portion in sliding fit, the phone can be slid out when it needs to be operated and the clamping portion pushed back to its original position for viewing afterwards, which is convenient and fast.
Referring to fig. 8, in this embodiment a locking structure 104 capable of locking the guide sleeve and the rollers is further disposed on the clamping portion 1; the locking structure 104 not only prevents the first bolt from backing off but also locks the sliding fit between the clamping portion 1 and the lens portion 2. Specifically, the locking structure 104 of this embodiment includes a return spring 1041, together with a sleeve 1042 and a threaded sleeve 1043 which are bilaterally symmetric about the guide sleeve 404 and arranged below it. The upper parts of the inner ends of the sleeve 1042 and the threaded sleeve 1043 have first locking portions 1044 matched in size with the outer wall of the lower part of the guide sleeve, and the lower parts of their inner ends have second locking portions 1045 matched in size with the rollers 2011. The inner end of the sleeve 1042 is provided with a first spring groove 1046 and the inner end of the threaded sleeve 1043 with a second spring groove 1047; one end of the return spring 1041 is mounted in the first spring groove 1046 and the other end in the second spring groove 1047. A second bolt 1048 is mounted in the sleeve 1042 and the threaded sleeve 1043, which are connected by the second bolt 1048 and a locking nut 1049 matched with it, and at least one end of the second bolt 1048 is provided with a second screwing piece. The locking structure 104 can thus both fix the guide sleeve 404 and lock the sliding fit of the clamping portion 1 with the lens portion 2, achieving multiple functions with one simplified structure.
In addition, the applicant also found that most existing AR helmets either have no heat dissipation structure for the mobile phone, or realize heat dissipation through complex structures such as temperature sensors and controllers, which are complicated and costly, greatly increase the size of the AR helmet, and prevent a lightweight design. The applicant therefore made the following improvement; see fig. 9. In this embodiment, a plurality of supporting bars 5 parallel to the phone's rear shell extend from the pressing end 4041, and the end of each supporting bar 5 is provided with a supporting point 501 connected with the rear shell. A micro fan 6 is installed on the supporting bar 5 and is provided with a touch switch (not shown in the figure). The supporting bar 5 is provided with at least one through hole 502, in which a driving member 503 made of shape memory alloy is installed; one end of the driving member 503 is connected to the touch switch and the other end abuts against the phone's rear shell. The driving member 503 is in a martensite state when the temperature of the rear shell reaches an early warning value, and the micro fan is turned on through the touch switch; it is in an austenite state when the temperature falls below the early warning value, and the micro fan is turned off through the touch switch. Using the shape change of the shape memory alloy with temperature to switch the micro fan on and off gives higher precision, helps cool the phone and avoid wear, and requires no control structure, simplifying the cooling arrangement and reducing production cost and installation space.
In addition, a groove matched with the first screwing piece can be provided on the base plate 101, with the first screwing piece 406 located in the groove. Placing the screwing piece in the groove keeps the outer surface of the base plate flat and simplifies the appearance.
In use, a smartphone is mounted in the AR helmet; video information of the real scene is acquired through the phone's own camera device and the video is played, the scene identifier of the currently played video is identified, and the virtual object and baking map corresponding to the scene identifier are searched for, wherein the baking map is generated in advance according to the attribute information of the video scene; the virtual object is then rendered according to the baking map.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the embodiments of the present invention, and not to limit the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for rendering color effects of virtual objects in a video, comprising:
identifying a scene identifier of a currently played video;
searching for a virtual object and a baked map corresponding to the scene identifier, wherein the baked map is generated in advance according to attribute information of the video scene;
rendering the virtual object according to the baked map.
2. The method of claim 1, wherein, before the identifying of the scene identifier of the currently played video, the method further comprises:
identifying a video scene included in the video, and recording a scene identifier of the video scene.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
acquiring a virtual object corresponding to the video scene;
analyzing attribute information in the video scene, performing baked rendering on the virtual object according to the attribute information, and generating a baked map corresponding to the video scene.
4. The method of claim 3, wherein the attribute information comprises light source information and color information, and wherein analyzing the attribute information in the video scene and performing baked rendering on the virtual object according to the attribute information comprises:
determining light source information corresponding to the video scene;
and determining corresponding color information of the virtual object in the video scene according to the light source information.
5. The method of claim 4, wherein determining the corresponding color information of the virtual object in the video scene according to the light source information comprises:
determining a target object influencing the color effect of the virtual object in the video scene according to the light source information and the position of the virtual object in the video scene;
and acquiring color information of each pixel point on the target object, and determining the corresponding color information of the virtual object in the video scene according to the acquired color information.
6. The method according to any one of claims 1 to 5, wherein:
the method is applied to an AR helmet comprising a clamping part, a lens part and a head-mounted part,
the clamping part comprises a base, a base plate and an inner frame, the base plate and the inner frame both being arranged on the base, with the inner frame on the side close to the lens part and the base plate on the side far away from the lens part; a clamping device is arranged on the base plate and comprises an installation hole, an installation cover, a first bolt, a guide sleeve and a guide pin, the installation cover, the first bolt, the guide sleeve and the guide pin being arranged in the installation hole; the installation hole comprises a first section and a second section which are adjacent, the inner diameter of the first section being smaller than that of the second section; an end cover is arranged on the outer end of the second section; an adjusting ring is arranged at the end of the second section close to the first section, and a limit flange which matches the adjusting ring and limits the moving stroke of the guide sleeve is arranged at the inner end of the guide sleeve; a shaft hole is arranged on the installation cover, and the first bolt is installed on the installation cover through the shaft hole; the outer end of the first bolt is connected with a first screwing piece, and the inner end of the first bolt is in threaded connection with the inner end of the guide sleeve installed in the installation hole; the outer end of the guide sleeve is provided with a pressing end for pressing against a mobile phone; the outer wall of the guide sleeve is provided with a groove matched with the guide pin along the horizontal direction, one end of the guide pin being installed on the inner wall of the installation hole and the other end in the groove;
the mobile phone acquires video information of a real scene through its own camera device, plays the video, identifies a scene identifier of the currently played video, searches for a virtual object and a baked map corresponding to the scene identifier, wherein the baked map is generated in advance according to attribute information of the video scene, and renders the virtual object according to the baked map.
7. The method of claim 6, wherein the clamping part of the AR helmet is slidably engaged with the lens part, the lens part is provided with a mounting plate, the clamping part is mounted on the mounting plate, the mounting plate is provided with a plurality of rollers evenly spaced along its width direction, and the clamping part has a locking structure for locking the guide sleeve and the rollers.
8. The method of claim 7, wherein the locking structure of the AR helmet comprises a return spring and a sleeve and a threaded sleeve which are bilaterally symmetric about the guide sleeve and disposed below it; the upper parts of the inner ends of the sleeve and the threaded sleeve are provided with first locking parts sized to match the outer wall of the lower part of the guide sleeve, and the lower parts of the inner ends are provided with second locking parts sized to match the rollers; the inner end of the sleeve is provided with a first spring groove and the inner end of the threaded sleeve with a second spring groove, one end of the return spring being arranged in the first spring groove and the other end in the second spring groove; a second bolt is mounted in the sleeve and the threaded sleeve, the sleeve and the threaded sleeve are connected through the second bolt and a locking nut matched with the second bolt, and at least one end of the second bolt is provided with a second screwing piece.
9. The method of claim 6, wherein a plurality of support bars extend from the pressing end of the AR helmet, the end of each support bar is provided with a support point connected with the rear shell of the mobile phone, each support bar carries a micro fan provided with a touch switch, each support bar is provided with at least one through hole in which a driving member made of a shape memory alloy is arranged, one end of each driving member is connected with the touch switch and the other end abuts against the rear shell of the mobile phone, each driving member is in a martensite state when the temperature of the rear shell of the mobile phone reaches an early warning value, whereby the micro fan is turned on through the touch switch, and each driving member is in an austenite state when the temperature of the rear shell of the mobile phone is lower than the early warning value, whereby the micro fan is turned off through the touch switch;
the base plate is provided with a groove matched with the first screwing piece, and the first screwing piece is located in the groove.
10. An apparatus for rendering color effects of virtual objects in video, comprising:
the identification module is used for identifying the scene identifier of the currently played video;
the searching module is used for searching for a virtual object and a baked map corresponding to the scene identifier, wherein the baked map is generated in advance according to attribute information of the video scene;
and the rendering module is used for rendering the virtual object according to the baked map.
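Claims 4 and 5 carry the algorithmic core of the method: determine, from the light source information and the virtual object's position, the target object that influences the virtual object's color, then derive the object's color in the scene from the target's per-pixel colors. The Python sketch below shows one plausible reading; the 20-pixel search step, the sampling window, the plain averaging, and all names are assumptions of this sketch, since the claims do not fix a particular formula.

    # Hypothetical sketch of claims 4-5: pick the target object that
    # affects the virtual object's color, then derive the object's
    # color in this scene from the target's per-pixel colors.
    import numpy as np

    def pick_target_region(frame: np.ndarray, light_dir: np.ndarray,
                           obj_pos: np.ndarray, half: int = 8) -> np.ndarray:
        # Assumption: the relevant target lies where light passing the
        # virtual object meets the image, so step from the object's 2D
        # position along the projected light direction.
        h, w = frame.shape[:2]
        hit = obj_pos[:2] + 20.0 * light_dir[:2]  # assumed 20 px step
        cx = int(np.clip(hit[0], half, w - half - 1))
        cy = int(np.clip(hit[1], half, h - half - 1))
        return frame[cy - half:cy + half, cx - half:cx + half]

    def object_color_in_scene(frame: np.ndarray, light_dir: np.ndarray,
                              obj_pos: np.ndarray) -> np.ndarray:
        region = pick_target_region(frame, light_dir, obj_pos)
        # Average the target pixels to estimate the color cast on the
        # virtual object; a weighted scheme would also be plausible.
        return region.reshape(-1, frame.shape[2]).mean(axis=0)

    # Usage with a synthetic 480x640 RGB frame:
    frame = np.full((480, 640, 3), 200, dtype=np.float32)
    print(object_color_in_scene(frame, np.array([1.0, 0.0, 0.0]),
                                np.array([100.0, 100.0, 0.0])))

In a real pipeline the color obtained this way would feed the offline baking step of claim 3 that produces the scene's baked map, rather than being computed at playback time.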
CN201711090151.0A 2017-11-08 2017-11-08 Rendering method and device for color effect of virtual object in video Active CN107871339B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711090151.0A CN107871339B (en) 2017-11-08 2017-11-08 Rendering method and device for color effect of virtual object in video

Publications (2)

Publication Number Publication Date
CN107871339A 2018-04-03
CN107871339B CN107871339B (en) 2019-12-24

Family ID: 61752673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711090151.0A Active CN107871339B (en) 2017-11-08 2017-11-08 Rendering method and device for color effect of virtual object in video

Country Status (1)

Country Link
CN (1) CN107871339B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101710429A (en) * 2009-10-12 2010-05-19 湖南大学 Illumination algorithm of augmented reality system based on dynamic light map
US20170045746A1 (en) * 2013-05-17 2017-02-16 Castar, Inc. Virtual reality attachment for a head mounted display
CN204203552U (en) * 2014-10-31 2015-03-11 成都理想境界科技有限公司 With mobile terminal with the use of headset equipment
CN105405168A (en) * 2015-11-19 2016-03-16 青岛黑晶信息技术有限公司 Method and apparatus for implementing three-dimensional augmented reality
CN206301087U (en) * 2016-12-30 2017-07-04 广州邦士度眼镜有限公司 A kind of new AR intelligent glasses
CN107134005A (en) * 2017-05-04 2017-09-05 网易(杭州)网络有限公司 Illumination adaptation method, device, storage medium, processor and terminal

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110460892B (en) * 2018-05-08 2022-06-14 日本聚逸株式会社 Moving image distribution system, moving image distribution method, and recording medium
CN110460892A (en) * 2018-05-08 2019-11-15 日本聚逸株式会社 Dynamic image dissemination system, dynamic image distribution method and recording medium
CN110852143A (en) * 2018-08-21 2020-02-28 脸谱公司 Interactive text effects in augmented reality environments
CN110852143B (en) * 2018-08-21 2024-04-09 元平台公司 Interactive text effects in an augmented reality environment
CN111866489A (en) * 2019-04-29 2020-10-30 浙江开奇科技有限公司 Method for realizing immersive panoramic teaching
CN110166760A (en) * 2019-05-27 2019-08-23 浙江开奇科技有限公司 Image treatment method and terminal device based on panoramic video image
CN110354500A (en) * 2019-07-15 2019-10-22 网易(杭州)网络有限公司 Effect processing method, device, equipment and storage medium
US12022162B2 (en) 2019-08-28 2024-06-25 Beijing Bytedance Network Technology Co., Ltd. Voice processing method and apparatus, electronic device, and computer readable storage medium
CN112449210A (en) * 2019-08-28 2021-03-05 北京字节跳动网络技术有限公司 Sound processing method, sound processing device, electronic equipment and computer readable storage medium
CN113110731B (en) * 2019-12-25 2023-07-14 华为技术有限公司 Method and device for generating media content
CN113110731A (en) * 2019-12-25 2021-07-13 华为技术有限公司 Method and device for generating media content
CN111340684B (en) * 2020-02-12 2024-03-01 网易(杭州)网络有限公司 Method and device for processing graphics in game
CN111340684A (en) * 2020-02-12 2020-06-26 网易(杭州)网络有限公司 Method and device for processing graphics in game
CN111311757B (en) * 2020-02-14 2023-07-18 惠州Tcl移动通信有限公司 Scene synthesis method and device, storage medium and mobile terminal
CN111311757A (en) * 2020-02-14 2020-06-19 惠州Tcl移动通信有限公司 Scene synthesis method and device, storage medium and mobile terminal
WO2022062577A1 (en) * 2020-09-27 2022-03-31 北京达佳互联信息技术有限公司 Image processing method and apparatus
CN111932641B (en) * 2020-09-27 2021-05-14 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
US11610364B2 (en) 2020-09-27 2023-03-21 Beijing Dajia Internet Information Technology Co., Ltd. Method, device, and storage medium for applying lighting to a rendered object in a scene
CN111932641A (en) * 2020-09-27 2020-11-13 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
WO2024067159A1 (en) * 2022-09-28 2024-04-04 北京字跳网络技术有限公司 Video generation method and apparatus, electronic device, and storage medium
CN116245998A (en) * 2023-05-09 2023-06-09 北京百度网讯科技有限公司 Rendering map generation method and device, and model training method and device
CN116245998B (en) * 2023-05-09 2023-08-29 北京百度网讯科技有限公司 Rendering map generation method and device, and model training method and device

Also Published As

Publication number Publication date
CN107871339B (en) 2019-12-24

Similar Documents

Publication Title
CN107871339B (en) Rendering method and device for color effect of virtual object in video
CN107845132B (en) Rendering method and device for color effect of virtual object
CN107749076B (en) Method and device for generating real illumination in augmented reality scene
CN107705353B (en) Rendering method and device for virtual object shadow effect applied to augmented reality
CN107749075B (en) Method and device for generating shadow effect of virtual object in video
US11270419B2 (en) Augmented reality scenario generation method, apparatus, system, and device
US10559121B1 (en) Infrared reflectivity determinations for augmented reality rendering
US10607567B1 (en) Color variant environment mapping for augmented reality
US10777010B1 (en) Dynamic environment mapping for augmented reality
US8661053B2 (en) Method and apparatus for enabling virtual tags
JP7412348B2 (en) Display device and display control method
JP2021511729A (en) Extension of the detected area in the image or video data
CN106730815B (en) Somatosensory interaction method and system easy to realize
CN112927349B (en) Three-dimensional virtual special effect generation method and device, computer equipment and storage medium
CN108109161B (en) Video data real-time processing method and device based on self-adaptive threshold segmentation
CN114125310B (en) Photographing method, terminal device and cloud server
CN108111911B (en) Video data real-time processing method and device based on self-adaptive tracking frame segmentation
CN109361880A (en) A kind of method and system showing the corresponding dynamic picture of static images or video
WO2017092432A1 (en) Method, device, and system for virtual reality interaction
US20240331245A1 (en) Video processing method, video processing apparatus, and storage medium
CN114387445A (en) Object key point identification method and device, electronic equipment and storage medium
CN112714305A (en) Presentation method, presentation device, presentation equipment and computer-readable storage medium
CN111815782A (en) Display method, device and equipment of AR scene content and computer storage medium
CN107728787B (en) Information display method and device in panoramic video
CN107680105B (en) Video data real-time processing method and device based on virtual world and computing equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240910
Address after: Room 3011, 3rd Floor, Building A, No. 266 Tinglan Street, Qiaosi Street, Linping District, Hangzhou City, Zhejiang Province 311101
Patentee after: Pacific Future Technology (Hangzhou) Co.,Ltd.
Country or region after: China
Address before: 518000 area w, 1st floor, lingchuang Tianxia, Yannan Road, Meiban Avenue, Bantian street, Longgang District, Shenzhen City, Guangdong Province
Patentee before: Pacific future technology (Shenzhen) Co.,Ltd.
Country or region before: China