Disclosure of Invention
Embodiments of the present invention provide a method and a device for rendering the color effect of a virtual object in a video, so as to solve at least one of the problems in the related art.
An embodiment of the present invention provides a method for rendering a color effect of a virtual object in a video, including:
identifying a scene identifier of a currently played video;
searching a virtual object and a baking map corresponding to the scene identifier, wherein the baking map is generated in advance according to the attribute information of the video scene;
rendering the virtual object according to the baking map.
Optionally, the method further comprises: identifying a video scene included in the video, and recording a scene identifier of the video scene.
Optionally, the method further comprises: acquiring a virtual object corresponding to the video scene; analyzing attribute information in the video scene, baking and rendering the virtual object according to the attribute information, and generating a baking map corresponding to the video scene.
Optionally, the attribute information includes light source information and color information, and the analyzing of the attribute information in the video scene and the baking rendering of the virtual object according to the attribute information include: determining light source information corresponding to the video scene; and determining color information corresponding to the virtual object in the video scene according to the light source information.
Optionally, the determining, according to the light source information, color information corresponding to the virtual object in the video scene includes: determining a target object influencing the color effect of the virtual object in the video scene according to the light source information and the position of the virtual object in the video scene; and acquiring color information of each pixel point on the target object, and determining the corresponding color information of the virtual object in the video scene according to the color information.
Optionally, the method is applied to an AR helmet comprising a clamping portion, a lens portion and a head-mounted portion,
wherein the clamping portion comprises a base, a base plate and an inner frame, the base plate and the inner frame both being mounted on the base, with the inner frame arranged on the side close to the lens portion and the base plate on the side away from the lens portion. A clamping device is arranged on the base plate and comprises an installation hole as well as an installation cover, a first bolt, a guide sleeve and a guide pin arranged in the installation hole. The installation hole comprises a first section and an adjacent second section, the inner diameter of the first section being smaller than that of the second section; an end cover is arranged on the outer end of the second section, an adjusting ring is arranged at the end of the second section close to the first section, and a limit flange, which cooperates with the adjusting ring to limit the moving stroke of the guide sleeve, is arranged at the inner end of the guide sleeve. A shaft hole is arranged on the installation cover, the first bolt is installed on the installation cover through the shaft hole, the outer end of the first bolt is connected with a first screwing piece, and the inner end of the first bolt is in threaded connection with the inner end of the guide sleeve installed in the installation hole. The outer end of the guide sleeve is provided with a pressing end for pressing a mobile phone, the outer wall of the guide sleeve is provided with a groove matched with the guide pin along the horizontal direction, one end of the guide pin is installed on the inner wall of the installation hole, and the other end of the guide pin is installed in the groove;
the mobile phone acquires video information of a real scene through its own camera device, plays the video, identifies the scene identifier of the currently played video, and searches for a virtual object and a baking map corresponding to the scene identifier, wherein the baking map is generated in advance according to the attribute information of the video scene, and the virtual object is rendered according to the baking map.
Optionally, in the AR helmet, the clamping portion is in sliding fit with the lens portion, the lens portion is provided with a mounting plate, the clamping portion is installed on the mounting plate, the mounting plate is provided with a plurality of rollers at uniform intervals along its width direction, and the clamping portion has a locking structure for locking the guide sleeve and the rollers.
Optionally, the locking structure of the AR helmet comprises a return spring, and a sleeve and a threaded sleeve which are bilaterally symmetric about the guide sleeve and are arranged below the guide sleeve. The upper parts of the inner ends of the sleeve and the threaded sleeve are provided with first locking parts matched in size with the outer wall of the lower part of the guide sleeve, and the lower parts of the inner ends of the sleeve and the threaded sleeve are provided with second locking parts matched in size with the rollers. The inner end of the sleeve is provided with a first spring groove, the inner end of the threaded sleeve is provided with a second spring groove, one end of the return spring is arranged in the first spring groove, and the other end is arranged in the second spring groove. A second bolt is arranged in the sleeve and the threaded sleeve, the sleeve and the threaded sleeve are connected through the second bolt and a locking nut matched with the second bolt, and at least one end of the second bolt is provided with a second screwing piece.
Optionally, a plurality of support bars extend from the pressing end of the AR helmet, the end of each support bar is provided with a support point connected with the rear shell of the mobile phone, a micro fan is installed on the support bar, the micro fan is provided with a touch switch, the support bar is provided with at least one through hole, and a driving piece made of shape memory alloy is installed in the through hole. One end of the driving piece is connected with the touch switch, and the other end abuts against the rear shell of the mobile phone. The driving piece enters the austenite state when the temperature of the rear shell of the mobile phone reaches an early warning value, turning the micro fan on through the touch switch, and returns to the martensite state when the temperature of the rear shell of the mobile phone falls below the early warning value, turning the micro fan off;
the base plate is provided with a groove matched with the first screwing piece, and the first screwing piece is located in the groove.
Another aspect of the embodiments of the present invention provides a device for rendering a color effect of a virtual object in a video, including:
the identification module is used for identifying the scene identifier of the currently played video;
the searching module is used for searching a virtual object and a baking map corresponding to the scene identifier, wherein the baking map is generated in advance according to the attribute information of the video scene;
and the rendering module is used for rendering the virtual object according to the baking map.
Optionally, the apparatus further comprises: a recording module, used for identifying the video scenes included in the video and recording the scene identifiers of the video scenes.
Optionally, the apparatus further comprises: an acquisition module, used for acquiring a virtual object corresponding to the video scene; and an analysis module, used for analyzing the attribute information in the video scene, baking and rendering the virtual object according to the attribute information, and generating a baking map corresponding to the video scene.
Optionally, the attribute information includes light source information and color information, and the analysis module is configured to determine light source information corresponding to the video scene; and determining corresponding color information of the virtual object in the video scene according to the light source information.
Optionally, the analysis module is further configured to determine, according to the light source information and the position of the virtual object in the video scene, a target object that affects a color effect of the virtual object in the video scene; and acquiring color information of each pixel point on the target object, and determining the corresponding color information of the virtual object in the video scene according to the color information.
Another aspect of an embodiment of the present invention provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method for rendering color effects of virtual objects in video according to any of the embodiments of the invention.
According to the technical solutions described above, the rendering method, rendering device and electronic equipment for the color effect of a virtual object in a video provided by the embodiments of the present invention identify the scene identifier of the currently played video; search for a virtual object and a baking map corresponding to the scene identifier, wherein the baking map is generated in advance according to the attribute information of the video scene; and render the virtual object according to the baking map. The embodiments of the present invention thus fuse the color effect of the virtual object with the video scene, giving the user a stronger sense of reality, and render by means of a baking map, thereby improving rendering efficiency and saving CPU resources. Meanwhile, the mechanical structure of the AR helmet based on the method is well designed, so that the mobile phone can be taken out and put in more easily, heat dissipation of the mobile phone is improved, shaking and jitter are less likely to occur during use, and the user's immersion and sense of reality are enhanced.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the embodiments of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention shall fall within the scope of the protection of the embodiments of the present invention.
The execution subject of the embodiments of the present invention is an electronic device, including but not limited to a mobile phone, a tablet computer, a head-mounted AR (augmented reality) device and AR glasses. To better explain the following embodiments, the application scenario of the present invention is explained first. When a user watches a video file using the electronic device, a computer-generated virtual object is presented on the basis of the real content of the video file, the virtual object and the real content coexist in the same video frame, and an augmented reality environment fusing the virtual object with the real content is presented to the user in terms of perception and experience.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Fig. 1 is a flowchart of a rendering method for color effects of virtual objects in a video according to an embodiment of the present invention. As shown in fig. 1, a method for rendering a color effect of a virtual object in a video according to an embodiment of the present invention specifically includes:
S101, identifying the scene identifier of the currently played video.
The method for rendering the color effect of the virtual object in the video, which is provided by the embodiment of the invention, is applied to an augmented reality scene, and when a video file is played in the scene, the color effect of the virtual object in the video in the scene can be rendered. Wherein the virtual object is obtained by simulation of an augmented reality electronic device; the user can experience the augmented reality effect corresponding to the video by means of the electronic equipment.
A video scene generally refers to video content captured by a single shot, which is continuous and substantially unchanged in content. When a virtual object stays within one video scene, its color effect across the multiple video frames of that scene remains essentially constant, because the video content is approximately the same. Therefore, the color effect of the virtual object in the video scene can be generated once from the attribute information of that scene, avoiding frame-by-frame determination of color information and improving efficiency. In particular, the attribute information may include light source information and color information of the video scene.
Before this step, it is necessary to identify the video scenes included in the video and record the scene identifiers of the video scenes.
Specifically, the video scenes included in the video can be obtained by comparing video frames separated by a preset interval. As an optional implementation of this embodiment, the first frame of the video is taken as the head frame of the first scene, and starting from it, pairs of frames separated by the preset interval are selected in turn. A feature point extraction algorithm (for example, SIFT or SURF) is applied to each video frame to obtain feature points on the objects it contains; a feature point may be a pixel with a distinctive property, such as a corner point or an edge intersection in the image, or a pixel with certain statistical features in its neighborhood, and each feature point carries a multi-dimensional feature vector describing those properties. The Euclidean distance between the feature vectors of two feature points in the two video frames is then compared with a preset threshold: if the distance is smaller than the threshold, the two feature points match; otherwise they do not. When the two video frames do not match, the earlier of the two frames may be taken as the last frame of the first video scene, and the later frame as the head frame of the second video scene. Taking the head frame of the second scene as the new starting point, feature point matching between frames continues, thereby determining the last frame of the second scene and the head frame of the third scene. All frames from the head frame of the second scene to its last frame (inclusive) are the video frames corresponding to the second scene. After the comparison is completed in this way, the video scenes included in the video can be determined.
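To make the comparison concrete, the following is a minimal sketch assuming OpenCV's standard SIFT and brute-force matcher APIs; the sampling interval, distance threshold, and match-ratio criterion are illustrative assumptions rather than values fixed by this disclosure:

```python
import cv2

def frames_match(frame_a, frame_b, dist_threshold=250.0, min_match_ratio=0.3):
    """Return True when two frames likely belong to the same video scene."""
    sift = cv2.SIFT_create()
    kp_a, desc_a = sift.detectAndCompute(cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY), None)
    kp_b, desc_b = sift.detectAndCompute(cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY), None)
    if desc_a is None or desc_b is None:
        return False
    # Euclidean distance between feature vectors, as described above: a pair of
    # feature points matches when the distance falls below the preset threshold.
    matches = cv2.BFMatcher(cv2.NORM_L2).match(desc_a, desc_b)
    good = [m for m in matches if m.distance < dist_threshold]
    return len(good) >= min_match_ratio * min(len(kp_a), len(kp_b))

def split_scenes(frames, interval=10):
    """Return the indices of scene head frames; a mismatch starts a new scene."""
    heads = [0]  # the first frame of the video is the head frame of the first scene
    for i in range(0, len(frames) - interval, interval):
        if not frames_match(frames[i], frames[i + interval]):
            heads.append(i + interval)  # head frame of the next scene
    return heads
```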
After video scenes included in the video are identified, a scene identifier can be determined for each video scene and recorded, and the scene identifier is used as a unique identifier of the corresponding video scene. In this step, a video scene recognition model may be trained in advance, and a currently played video is input into the model to obtain a corresponding video scene and a scene identifier of the video scene.
S102, searching for a virtual object and a baking map corresponding to the scene identifier.
Wherein the baking map is generated in advance according to the attribute information of the video scene. Before this step, a baking map of the virtual object in each video scene needs to be generated. When a light source illuminates the video scene, reflections (including reflected colors) and shadows are formed on the surface of the virtual object; these can be rendered into a map by baking. When the video is played, the baking map is directly applied to the corresponding virtual object, and the color effect of the virtual object is obtained without computing the illumination information in the video scene in real time during playback. The baking map is a texture map of the color effect produced on the virtual object by the objects in the video scene when the virtual object receives illumination, and its generation may include: acquiring a virtual object corresponding to the video scene; and analyzing attribute information in the video scene, baking and rendering the virtual object according to the attribute information, and generating a baking map corresponding to the video scene.
Specifically, the virtual object is an object superimposed in the video scene and viewable by the user through the electronic device. It may include virtual content such as a physical image (e.g., an image of a person, an animal or an article), a special effect (e.g., a smoke effect, a steam effect or a motion trajectory effect) or a natural phenomenon (e.g., rain, snow, a rainbow or a sun halo), and it may also replace some part of a person, animal, article or piece of information in the video scene; the virtual object may be static or dynamic, which is not limited herein. The virtual object corresponding to a video scene may be one matched to the characteristics of the video scene itself, or one brought out by the combination of the video scene and surrounding scenes. Optionally, the virtual objects corresponding to different video scenes, and their positions in the current video scene, may be preset.
It should be noted that the attribute information includes, but is not limited to, light source information and color information of the video scene. In this step, light source information corresponding to the video scene is determined, color information corresponding to the virtual object in the video scene is determined according to the light source information, and finally, baking rendering is performed on the virtual object according to the light source information and the color information.
Specifically, the light source information includes the illumination intensity, the light source position, and the like. By analyzing the video scene picture, the location, geographic position, current season, time, and so on of the video scene are obtained, and the objects appearing in the video scene can be analyzed to determine the location they correspond to. For example, if the video scene is determined to be indoors, a target object serving as a light source in the video scene (e.g., a lamp, a movie screen in a cinema, or a light-emitting object in darkness) is searched for; the position of that target object in the video scene is the light source position, the illumination intensity of the light source is then determined from the brightness of the video scene, and the light source position and illumination intensity are taken as the illumination information. If the video scene is determined to be outdoors, the illumination information of the outdoor environment is determined by the illumination parameters of the sun; since sunlight differs not only by region but also in the angle and height of the sun's path relative to the ground in different seasons or at different times of day, the illumination parameters of the sun need to be determined according to latitude information and current time information.
Optionally, the geographic location corresponding to an object appearing in the video scene is determined (for example, if Beijing railway station appears in the video scene, the real scene is determined to be located in Beijing), as is the current season (for example, inferred from the scenery and/or the clothing of people in the video scene). The longitude and latitude of the video scene's location are determined from the geographic location, and the current time information can additionally be determined from the brightness of the video scene picture, so as to calculate the illumination parameters of the sun (the elevation angle and the azimuth angle). There are multiple methods for calculating the elevation angle and the azimuth angle, which are not limited herein. Specifically, the light source information of the sun may include the illumination intensity, the position of the sunlight, and the like.
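Since the disclosure leaves the calculation method open, the following is a hedged sketch of one common approximation (Cooper's formula for the solar declination plus the standard spherical-triangle relations for elevation and azimuth); the Beijing latitude, day of year, and solar-time inputs are illustrative:

```python
import math

def solar_position(latitude_deg, day_of_year, solar_hour):
    """Return (elevation_deg, azimuth_deg) of the sun for the given place and time."""
    # Solar declination, Cooper's approximation (degrees).
    decl = 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
    # Hour angle: 15 degrees per hour away from solar noon.
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, dec, ha = (math.radians(v) for v in (latitude_deg, decl, hour_angle))
    # Elevation from the standard spherical-triangle relation.
    elevation = math.asin(math.sin(lat) * math.sin(dec)
                          + math.cos(lat) * math.cos(dec) * math.cos(ha))
    # Azimuth measured clockwise from north.
    cos_az = (math.sin(dec) - math.sin(elevation) * math.sin(lat)) / (
        math.cos(elevation) * math.cos(lat))
    azimuth = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
    if hour_angle > 0:        # afternoon: the sun lies west of the meridian
        azimuth = 360.0 - azimuth
    return math.degrees(elevation), azimuth

# Example: Beijing (about 39.9 N) at 3 p.m. solar time on the summer solstice.
print(solar_position(39.9, 172, 15.0))
```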
As an optional implementation manner of this embodiment, determining, according to the light source information, color information corresponding to the virtual object in the video scene includes: determining a target object influencing the color effect of the virtual object in the video scene according to the light source information and the position of the virtual object in the video scene; and acquiring color information of each pixel point on the target object, and determining the corresponding color information of the virtual object in the video scene according to the color information.
Specifically, after the position of the light source in the video scene is obtained, the incident direction of the light relative to each object in the video scene can be determined. When the light source illuminates these objects along that incident direction, reflected light and shadows are formed on their surfaces. Once the position of the virtual object is determined, if the light reflected by some object happens to pass through the virtual object, i.e., forms indirect illumination on its surface, the color of that object will affect the color effect of the virtual object's surface. For example, if a red billboard casts reflected light onto the surface of the virtual object, the surface of the virtual object will take on a reflected-red color effect; such an object is therefore taken as a target object affecting the color effect of the virtual object.
After the target object is determined, if the color of the target object is a pure color, that is, the color information of each pixel point on the target object is the same, the color corresponding to the virtual object in the video scene can be directly determined according to the color information of any pixel point. Optionally, the color information may be an RGB value or a gray value, and the present invention is not limited herein.
If the color of the target object is not a pure color, that is, the color information differs among the pixel points on the target object, a weighted calculation can be performed on the color information of the pixel points, and the color corresponding to the weighted value is determined as the color corresponding to the virtual object in the video scene. Specifically, when the color information is an RGB value, the RGB values of the pixel points on the target object are obtained; when the color information is a gray value, the gray values of the pixel points on the target object are obtained. The color parameters of the pixel points are then weighted to obtain a weighted value, which is the color information corresponding to the virtual object in the video scene.
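As an illustration of the weighted calculation, the sketch below averages the RGB values of the target object's pixels; the uniform default weights are an assumption, since the text leaves the weighting scheme open:

```python
import numpy as np

def virtual_object_color(target_pixels, weights=None):
    """target_pixels: (N, 3) RGB values sampled from the target object."""
    pixels = np.asarray(target_pixels, dtype=np.float64)
    if weights is None:
        weights = np.ones(len(pixels))  # the pure-color case reduces to any pixel
    weights = np.asarray(weights, dtype=np.float64)
    weighted = (pixels * weights[:, None]).sum(axis=0) / weights.sum()
    return weighted.astype(np.uint8)  # the color the virtual object takes on

# A solid red billboard yields red; mixed pixels yield their weighted mean.
print(virtual_object_color([[255, 0, 0], [255, 0, 0]]))          # -> [255 0 0]
print(virtual_object_color([[255, 0, 0], [0, 0, 255]], [3, 1]))  # -> reddish mix
```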
After the illumination information and color information of the video scene are obtained, the position at which light reflected by the target object reaches the surface of the virtual object, and the reflected light intensity, can be derived from the illumination information of the video scene. By accumulating these reflection positions, the position and range of the indirect reflection formed by the target object on the surface of the virtual object are determined; the position is used as the color rendering position, the range as the color rendering area, and the reflected light intensity as the color rendering intensity. The color is then applied to the virtual object according to this color rendering information, completing the baking rendering of the virtual object and generating the baking map corresponding to the video scene.
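The final coloring step can be sketched as blending the weighted color into the virtual object's texture over the color rendering area, scaled by the color rendering intensity; the rectangular area and the linear blend rule here are simplifying assumptions, since a full baker would rasterize in the object's UV space:

```python
import numpy as np

def bake_reflection(texture, region, color, intensity):
    """texture: (H, W, 3) uint8 base map; region: (y0, y1, x0, x1) rendering area;
    color: RGB from the weighted calculation; intensity: reflected strength in 0..1."""
    baked = texture.astype(np.float64)
    y0, y1, x0, x1 = region
    # Linear blend toward the reflected color, stronger where the intensity is higher.
    baked[y0:y1, x0:x1] = ((1.0 - intensity) * baked[y0:y1, x0:x1]
                           + intensity * np.asarray(color, dtype=np.float64))
    return baked.clip(0, 255).astype(np.uint8)
```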
S103, rendering the virtual object according to the baking map.
After finding the baking map of the virtual object corresponding to the scene identifier in step S102, the baking map may be attached to the virtual object, thereby completing rendering of the virtual object. Because the baking map comprises the color effect of the virtual object, the illumination information in each video scene does not need to be calculated in real time in the playing process, the rendering efficiency is greatly improved, and the CPU resource is saved.
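At playback time, steps S101 to S103 therefore reduce to a lookup and a texture assignment, as in the sketch below; the scene-recognition call and the texture-applying call are placeholders standing in for the recognition model and rendering engine the text refers to:

```python
baked_assets = {}  # scene_id -> (virtual_object, baking_map), generated offline

def render_current_frame(frame, recognize_scene, apply_texture):
    scene_id = recognize_scene(frame)        # S101: identify the scene identifier
    entry = baked_assets.get(scene_id)       # S102: look up object and baking map
    if entry is None:
        return                               # no virtual content for this scene
    virtual_object, baking_map = entry
    apply_texture(virtual_object, baking_map)  # S103: no per-frame lighting math
```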
The embodiment of the present invention identifies the scene identifier of the currently played video; searches for a virtual object and a baking map corresponding to the scene identifier, wherein the baking map is generated in advance according to the attribute information of the video scene; and renders the virtual object according to the baking map. The embodiment of the present invention thus fuses the color effect of the virtual object with the video scene, giving the user a stronger sense of reality, and renders by means of a baking map, thereby improving rendering efficiency and saving CPU resources.
Fig. 2 is a flowchart of a rendering method for color effects of virtual objects in a video according to an embodiment of the present invention. As shown in fig. 2, this embodiment is a specific implementation scheme of the embodiment shown in fig. 1, and therefore details of specific implementation methods and beneficial effects of each step in the embodiment shown in fig. 1 are not repeated, and the method for rendering color effects of virtual objects in a video provided in the embodiment of the present invention specifically includes:
s201, identifying a video scene included in the video, and recording a scene identifier of the video scene.
S202, acquiring a virtual object corresponding to the video scene.
S203, analyzing the attribute information in the video scene, baking and rendering the virtual object according to the attribute information, and generating a baking map corresponding to the video scene.
It should be noted that the attribute information includes, but is not limited to, light source information and color information of the video scene. In this step, light source information corresponding to the video scene is determined, color information corresponding to the virtual object in the video scene is determined according to the light source information, and finally, baking rendering is performed on the virtual object according to the light source information and the color information.
S204, identifying the scene identifier of the currently played video.
S205, searching for a virtual object and a baking map corresponding to the scene identifier.
Wherein the baking map is generated in advance according to the attribute information of the video scene.
S206, rendering the virtual object according to the baking map.
The embodiment of the present invention identifies the scene identifier of the currently played video; searches for a virtual object and a baking map corresponding to the scene identifier, wherein the baking map is generated in advance according to the attribute information of the video scene; and renders the virtual object according to the baking map. The embodiment of the present invention thus fuses the color effect of the virtual object with the video scene, giving the user a stronger sense of reality, and renders by means of a baking map, thereby improving rendering efficiency and saving CPU resources.
Fig. 3 is a structural diagram of a rendering apparatus for color effects of virtual objects in a video according to an embodiment of the present invention. As shown in fig. 3, the apparatus specifically includes: an identification module 1000, a lookup module 2000, and a rendering module 3000.
The identifying module 1000 is configured to identify a scene identifier of a currently played video; the searching module 2000 is configured to search for a virtual object and a baked map corresponding to the scene identifier, where the baked map is generated in advance according to attribute information of the video scene; the rendering module 3000 is configured to render the virtual object according to the baking map.
The rendering apparatus for color effects of virtual objects in a video according to an embodiment of the present invention is specifically configured to execute the method provided in the embodiment shown in fig. 1; its implementation principle, implementation method, and functions are similar to those of the embodiment shown in fig. 1 and are not described here again.
Fig. 4 is a structural diagram of a rendering apparatus for color effects of virtual objects in a video according to an embodiment of the present invention. As shown in fig. 4, the apparatus specifically includes: an identification module 1000, a lookup module 2000, and a rendering module 3000.
The identifying module 1000 is configured to identify a scene identifier of a currently played video; the searching module 2000 is configured to search for a virtual object and a baked map corresponding to the scene identifier, where the baked map is generated in advance according to attribute information of the video scene; the rendering module 3000 is configured to render the virtual object according to the baking map.
Optionally, the apparatus further comprises: a recording module 4000.
The recording module 4000 is configured to identify a video scene included in the video, and record a scene identifier of the video scene.
Optionally, the apparatus further comprises: an acquisition module 5000 and an analysis module 6000.
The obtaining module 5000 is configured to obtain a virtual object corresponding to the video scene; the analysis module 6000 is configured to analyze attribute information in the video scene, perform baking rendering on the virtual object according to the attribute information, and generate a baking map corresponding to the video scene.
Optionally, the attribute information includes light source information and color information, and the analysis module 6000 is configured to determine light source information corresponding to the video scene; and determining corresponding color information of the virtual object in the video scene according to the light source information.
Optionally, the analysis module 6000 is further configured to determine, according to the light source information and the position of the virtual object in the video scene, a target object that affects a color effect of the virtual object in the video scene; and acquiring color information of each pixel point on the target object, and determining the corresponding color information of the virtual object in the video scene according to the color information.
The rendering apparatus for color effects of virtual objects in a video according to an embodiment of the present invention is specifically configured to execute the method provided in the embodiment shown in fig. 1 and/or fig. 2, and the implementation principle, the method, the function and the like of the rendering apparatus are similar to those of the embodiment shown in fig. 1 and/or fig. 2, and are not described herein again.
The rendering apparatus for the color effect of a virtual object in a video according to the embodiments of the present invention may be disposed independently in the electronic device as a software or hardware functional unit, or may be integrated in the processor as one of its functional modules, to execute the rendering method for the color effect of a virtual object in a video according to the embodiments of the present invention.
Fig. 5 is a schematic diagram of a hardware structure of an electronic device executing the method for rendering color effects of virtual objects in a video according to the embodiment of the present invention. As shown in fig. 5, the electronic device includes:
one or more processors 5100 and a memory 5200, with one processor 5100 taken as an example in fig. 5.
The apparatus for performing the method for rendering color effects of virtual objects in a video may further include: an input device 5300 and an output device 5400.
The processor 5100, the memory 5200, the input device 5300, and the output device 5400 may be connected by a bus or other means, and the bus connection is exemplified in fig. 5.
The memory 5200, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for rendering color effects of virtual objects in a video in the embodiments of the present invention. The processor 5100 executes the various functional applications and data processing of the server, i.e., implements the rendering method for the color effect of a virtual object in the video, by running the non-volatile software programs, instructions, and modules stored in the memory 5200.
The memory 5200 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created through the use of the rendering apparatus for the color effect of a virtual object in a video provided according to the embodiments of the present invention, and the like. Additionally, the memory 5200 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 5200 optionally includes memory located remotely relative to the processor 5100, and such remote memory may be connected via a network to the rendering apparatus for the color effect of a virtual object in the video. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 5300 may receive input numeric or character information and generate key signal inputs related to user settings and function control of a rendering device for virtual object color effects in a video. The input device 5300 may include a pressing module or the like.
The one or more modules are stored in the memory 5200 and, when executed by the one or more processors 5100, perform a rendering method for virtual object color effects in the video.
The electronic device of embodiments of the present invention exists in a variety of forms, including but not limited to:
(1) Mobile communication devices: these are characterized by mobile communication capabilities and are primarily aimed at providing voice and data communications. Such terminals include smart phones (e.g., iPhones), multimedia phones, feature phones, and low-end phones, among others.
(2) Ultra-mobile personal computer devices: these belong to the category of personal computers, have computing and processing functions, and generally also have mobile internet access. Such terminals include PDA, MID, and UMPC devices, such as iPads.
(3) Portable entertainment devices: these devices can display and play multimedia content. They include audio and video players (e.g., iPods), handheld game consoles, electronic books, smart toys, and portable car navigation devices.
(4) Servers: similar to a general computer architecture, but with higher requirements for processing capability, stability, reliability, security, scalability, manageability, and the like, because of the need to provide highly reliable services.
(5) Other electronic devices with data interaction functions.
The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The embodiment of the present invention provides a non-transitory computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions, and when the computer-executable instructions are executed by an electronic device, the electronic device is caused to execute a method for rendering a color effect of a virtual object in a video in any method embodiment described above.
Embodiments of the present invention provide a computer program product, where the computer program product includes a computer program stored on a non-transitory computer readable storage medium, where the computer program includes program instructions, where the program instructions, when executed by an electronic device, cause the electronic device to perform a method for rendering a color effect of a virtual object in a video in any of the above-mentioned method embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, or of course by hardware. Based on this understanding, the above technical solutions, or the portions of them that contribute to the prior art, may be embodied in the form of a software product stored on a computer-readable storage medium, which includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium includes read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory storage media, and electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). The computer software product includes instructions for causing a computing device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the various embodiments, or in portions of the embodiments.
In another embodiment, fig. 6 shows an AR helmet as an implementation device for the rendering method of the color effect of a virtual object in a video. The AR helmet includes a clamping portion 1, a lens portion 2 and a head-mounted portion 3. The clamping portion 1 includes a base 101, a substrate 102 and an inner frame 103; the substrate 102 and the inner frame 103 are both vertically mounted on the base 101, the substrate 102 is a plate-shaped structure, and the inner frame 103 is a frame structure adapted to the lens portion. The substrate 102 and the inner frame 103 are located at the front and rear of the base 101, that is, the inner frame 103 is disposed on the side close to the lens portion 2 and the substrate 102 on the side away from the lens portion 2, and an electronic device such as a mobile phone is mounted between the substrate 102 and the inner frame 103.
Another improvement of this embodiment is shown in conjunction with figs. 7 and 8: a clamping device 4 for clamping the mobile phone is arranged on the substrate 102. The clamping device 4 comprises a mounting hole 401, a mounting cover 402, a first bolt 403, a guide sleeve 404, a guide pin 405 and other structures. The mounting hole 401 has a first end far away from the inner frame 103 and a second end close to the inner frame; specifically, the mounting hole 401 comprises a first section and an adjacent second section, the inner diameter of the first section being smaller than that of the second section. The mounting cover 402 is mounted on the outer end of the second section, an adjusting ring 407 is mounted at the end of the second section close to the first section, and a limit flange 408, which cooperates with the adjusting ring 407 to limit the moving stroke of the guide sleeve, is arranged at the inner end of the guide sleeve 404.
The first end is provided with the mounting cover 402, the mounting cover 402 is provided with a shaft hole 4021, and the first bolt 403 is mounted on the mounting cover 402 through the shaft hole 4021. The outer end of the first bolt 403 is connected with a first screwing piece 406, and the inner end of the first bolt 403 is in threaded connection with the inner end of the guide sleeve 404 mounted in the mounting hole 401. The outer end of the guide sleeve 404 is provided with a pressing end 4041 for pressing the mobile phone, the outer wall of the guide sleeve 404 is provided with a groove (not shown) matched with the guide pin 405 in the horizontal direction, one end of the guide pin 405 is mounted on the inner wall of the mounting hole 401, and the other end of the guide pin 405 is mounted in the groove. When a user rotates the first screwing piece 406, the first bolt 403 is driven to rotate, which drives the guide sleeve 404 to move forward or backward; because of the guide pin, the guide sleeve can only translate forward or backward, so the pressing end 4041 is pressed against the mobile phone and the inner frame 103. This process allows the pressing end to advance slowly with an adjustable pressing force, avoiding damage to the rear shell of the mobile phone. The mobile phone is fixed through the point contacts of the support end, which is superior to the clamping-plate or face-shell fixing of the prior art, does not impair the heat dissipation of the mobile phone, and adapts well to mobile phones of various screen sizes and thicknesses.
The applicant has found that some mobile phones do not provide functions for switching the playing program or adjusting the volume within an AR scene, so most users can only take the mobile phone out of the clamping mechanism to switch playback and adjust the sound and picture. The applicant therefore designed the clamping portion 1 and the lens portion 2 to be in sliding fit: specifically, the lens portion 2 is provided with a mounting plate 201, the clamping portion 1 is mounted on the mounting plate 201, and the mounting plate 201 is provided with a plurality of rollers 2011 at uniform intervals along its width direction. With the clamping portion and the lens portion in sliding fit, the mobile phone can be slid out when it needs to be operated and the clamping portion pushed back to its original position for viewing after the operation is finished, which is convenient and fast.
Referring to fig. 8, in this embodiment a locking structure 104 capable of locking the guide sleeve and the rollers is further disposed on the clamping portion 1; the locking structure 104 not only prevents the first bolt from backing off, but also locks the sliding fit between the clamping portion 1 and the lens portion 2. Specifically, the locking structure 104 of this embodiment includes a return spring 1041, and a sleeve 1042 and a threaded sleeve 1043 which are bilaterally symmetric about the guide sleeve 404 and arranged below it. The upper parts of the inner ends of the sleeve 1042 and the threaded sleeve 1043 have first locking portions 1044 matched in size with the outer wall of the lower part of the guide sleeve, and the lower parts of their inner ends have second locking portions 1045 matched in size with the rollers 2011. The inner end of the sleeve 1042 is provided with a first spring slot 1046 and the inner end of the threaded sleeve 1043 with a second spring slot 1047; one end of the return spring 1041 is mounted in the first spring slot 1046 and the other end in the second spring slot 1047. A second bolt 1048 is mounted in the sleeve 1042 and the threaded sleeve 1043, the sleeve 1042 and the threaded sleeve 1043 are connected by the second bolt 1048 and a locking nut 1049 matched with the second bolt 1048, and at least one end of the second bolt 1048 is provided with a second screwing piece. The locking structure 104 can thus fix the guide sleeve 404 and at the same time lock the sliding fit between the clamping portion 1 and the lens portion 2, achieving multiple functions with one simplified structure.
In addition, the applicant has also found that most existing AR helmets have no heat dissipation structure for the mobile phone, or dissipate its heat through complex temperature sensors, controllers and similar structures, which are complicated and costly, greatly increase the size of the AR helmet, and prevent a lightweight design. The applicant therefore made the following improvement, referring to fig. 9: in this embodiment, a plurality of support bars 5 parallel to the rear case of the mobile phone extend from the pressing end 4041, and a support point 501 connected to the rear case is provided at the end of each support bar 5. A micro fan 6 is installed on the support bar 5, and the micro fan 6 is provided with a touch switch (not shown in the figure). At least one through hole 502 is provided in the support bar 5, and a driving member 503 made of a shape memory alloy is installed in the through hole 502; one end of the driving member 503 is connected to the touch switch, and the other end abuts against the rear case of the mobile phone. The driving member 503 enters the austenite state when the temperature of the rear case reaches an early warning value, turning the micro fan on through the touch switch, and returns to the martensite state when the temperature falls below the early warning value, turning the micro fan off. Switching the micro fan on and off through the shape change of the shape memory alloy with temperature is highly precise, helps cool the mobile phone and avoids wear on it, and requires no control structure, which simplifies the cooling structure and reduces production cost and installation space.
In addition, a groove matched with the first screwing piece can be provided on the substrate 102, with the first screwing piece 406 located in the groove. By placing the screwing piece in the groove, the outer surface of the substrate can remain flat, simplifying the appearance.
In use, a smart phone is mounted in the clamping portion of the AR helmet; the smart phone acquires video information of a real scene through its camera device, plays the video, identifies the scene identifier of the currently played video, and searches for a virtual object and a baking map corresponding to the scene identifier, wherein the baking map is generated in advance according to the attribute information of the video scene, and the virtual object is rendered according to the baking map.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the embodiments of the present invention, and not to limit the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.