CN107705353B - Rendering method and device for virtual object shadow effect applied to augmented reality - Google Patents


Info

Publication number
CN107705353B
CN107705353B
Authority
CN
China
Prior art keywords
shadow
virtual object
real scene
light
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711075801.4A
Other languages
Chinese (zh)
Other versions
CN107705353A (en
Inventor
休·伊恩·罗伊
李建亿
Current Assignee
Pacific Future Technology (shenzhen) Co Ltd
Original Assignee
Pacific Future Technology (shenzhen) Co Ltd
Priority date
Filing date
Publication date
Application filed by Pacific Future Technology (shenzhen) Co Ltd filed Critical Pacific Future Technology (shenzhen) Co Ltd
Priority to CN201711075801.4A priority Critical patent/CN107705353B/en
Publication of CN107705353A publication Critical patent/CN107705353A/en
Application granted granted Critical
Publication of CN107705353B publication Critical patent/CN107705353B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/50: Lighting effects
    • A: HUMAN NECESSITIES
    • A42: HEADWEAR
    • A42B: HATS; HEAD COVERINGS
    • A42B 3/00: Helmets; Helmet covers; Other protective head coverings
    • A42B 3/04: Parts, details or accessories of helmets
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Abstract

Embodiments of the invention provide a method and device for rendering the light and shadow effect of a virtual object in augmented reality, belonging to the technical field of augmented reality. The method comprises the following steps: acquiring attribute information of a real scene; determining the current light and shadow parameters of the real scene according to the attribute information; rendering the light and shadow effect of the virtual object according to the current light and shadow parameters of the real scene; and synthesizing the virtual object with the rendered light and shadow effect into the real scene. The embodiments make the light and shadow effect of the virtual object change synchronously with that of the real scene, giving the user a stronger sense of reality and a better experience.

Description

Rendering method and device for virtual object shadow effect applied to augmented reality
Technical Field
The invention relates to the technical field of augmented reality, in particular to a method and a device for rendering a shadow effect of a virtual object.
Background
Augmented Reality (AR) is a technology that computes the position and angle of the camera image in real time, adds corresponding images, videos, and 3D models, overlays the virtual world onto the real world on screen, and supports interaction between the two. With an AR device, a user can perceive the presence of a virtual object in the real world. For example, when the user wears a head-mounted AR device, a camera in the device collects real-environment data, and virtual effects generated by a computer are then fused with that data. Application scenes are diverse; in the user's home, for instance, a head-mounted AR helmet can fuse a virtual decoration effect with the real home environment.
In practice, an AR helmet may share its basic structure with the VR helmets commonly found on the market; when a smartphone combined with a purpose-built lens plays a fully virtual picture, the same helmet functions as a VR device.
However, existing AR devices suffer from software and hardware drawbacks as follows:
because virtual objects are generated by the computer in advance, information about the user's real-world environment during use cannot be acquired, so the light and shadow effect of the virtual objects cannot match the real world, and the scene easily strikes the user as unreal;
in present AR helmets, installing and removing the mobile phone is inconvenient and easily scratches the phone's surface; the clamping plates press against the phone's rear shell for long periods, which is unfavorable for heat dissipation; phones of different screen sizes and thicknesses require complicated structures for adaptive adjustment, yet such structures can neither adjust the clamping force nor aid heat dissipation; and shaking and wobbling easily appear during use, harming the user's sense of immersion or even causing discomfort such as dizziness.
Disclosure of Invention
The method and device for rendering the light and shadow effect of a virtual object provided by the embodiments of the invention are intended to solve at least one of the above problems in the related art.
An embodiment of the present invention provides a method for rendering a virtual object shadow effect, including:
acquiring attribute information in a real scene;
determining the current light and shadow parameters of the real scene according to the attribute information;
rendering the shadow effect of the virtual object according to the current shadow parameter of the real scene;
and synthesizing the virtual object with the rendered shadow effect into the real scene.
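The four claimed steps can be sketched as a minimal pipeline. Every function name below is a hypothetical placeholder for the corresponding operation, not an API defined by the patent; the callables are injected so the sketch stays independent of any concrete sensor or renderer.

```python
def render_ar_frame(scene_frame, virtual_object,
                    get_attributes, derive_shadow_params,
                    render_shadow, composite):
    """Minimal sketch of the four-step method; each callable is a
    hypothetical placeholder for the operation described in the claims."""
    attributes = get_attributes(scene_frame)                      # step 1: acquire attribute info
    shadow_params = derive_shadow_params(attributes)              # step 2: derive light/shadow params
    shaded_object = render_shadow(virtual_object, shadow_params)  # step 3: render shadow effect
    return composite(scene_frame, shaded_object)                  # step 4: synthesize into scene
```

In a real system, `get_attributes` would query the camera, GPS, and clock, and `composite` would blend the shaded object into the camera frame.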
Optionally, the attribute information includes location information and time information, and before determining the current light and shadow parameter of the real scene according to the attribute information, the method further includes: judging whether the attribute information meets a preset condition or not; when the attribute information meets the preset condition, determining the current light and shadow parameters of the real scene according to the attribute information comprises: determining the azimuth parameters of the sun according to the position information and the time information; and determining the current light and shadow parameters of the real scene according to the azimuth parameters of the sun.
Optionally, when the attribute information does not satisfy the preset condition, the determining, according to the attribute information, the current light and shadow parameter of the real scene includes: and determining the current light and shadow parameters of the real scene according to the position information.
Optionally, the rendering the shadow effect of the virtual object according to the current shadow parameter of the real scene includes: acquiring the position of a virtual object in the real scene; determining the light and shadow parameters of the virtual object according to the position of the virtual object in the real scene and the current light and shadow parameters of the real scene; and rendering the shadow effect of the virtual object according to the shadow parameters of the virtual object.
Optionally, after the rendering the shadow effect of the virtual object according to the current shadow parameters of the real scene, the method further includes: analyzing the attribute information, and determining an adjustment parameter of the light and shadow effect of the virtual object; and adjusting the light and shadow effect of the virtual object by combining the adjusting parameters.
Optionally, the method is applied to an AR helmet comprising a clamping part, a lens part and a head-mounted part,
the clamping part comprises a base, a base plate and an inner frame, the base plate and the inner frame both being mounted on the base, with the inner frame on the side close to the lens part and the base plate on the side away from the lens part. A clamping device is arranged on the base plate and comprises an installation hole together with an installation cover, a first bolt, a guide sleeve and a guide pin arranged in the installation hole. The installation hole comprises adjacent first and second sections, the inner diameter of the first section being smaller than that of the second section; an end cover is arranged on the outer end of the second section, an adjusting ring is arranged at the end of the second section close to the first section, and the inner end of the guide sleeve carries a limit flange that cooperates with the adjusting ring to limit the travel of the guide sleeve. A shaft hole is arranged in the installation cover, and the first bolt is installed on the installation cover through the shaft hole. The outer end of the first bolt is connected with a first screwing piece, and its inner end is in threaded connection with the inner end of the guide sleeve installed in the installation hole. The outer end of the guide sleeve is provided with a pressing end for pressing against the mobile phone; the outer wall of the guide sleeve is provided with a horizontal groove matched with the guide pin, one end of the guide pin being installed on the inner wall of the installation hole and the other end riding in the groove;
the mobile phone acquires attribute information in a real scene through a camera device carried by the mobile phone, determines a current light and shadow parameter of the real scene according to the attribute information, renders a light and shadow effect of a virtual object according to the current light and shadow parameter of the real scene, and synthesizes the virtual object with the rendered light and shadow effect into the real scene.
Optionally, in the AR helmet, the clamping part is in sliding fit with the lens part; the lens part is provided with a mounting plate, on which the clamping part is installed; the mounting plate is provided with a plurality of rollers at uniform intervals along its width direction; and the clamping part has a locking structure for locking the guide sleeve and the rollers.
Optionally, the locking structure of the AR helmet comprises a return spring, together with a sleeve and a threaded sleeve arranged below the guide sleeve and bilaterally symmetric about it. The upper parts of the inner ends of the sleeve and the threaded sleeve are provided with first locking parts matching the size of the outer wall of the lower part of the guide sleeve, and their lower parts are provided with second locking parts matching the size of the rollers. The inner end of the sleeve is provided with a first spring groove and the inner end of the threaded sleeve with a second spring groove; one end of the return spring sits in the first spring groove and the other end in the second spring groove. A second bolt passes through the sleeve and the threaded sleeve, which are connected by the second bolt and a matching locking nut, and at least one end of the second bolt is provided with a second screwing piece.
Optionally, a plurality of support bars extend from the pressing end of the AR helmet, the end of each support bar being provided with a support point that contacts the rear shell of the mobile phone. A micro fan with a touch switch is mounted on the support bar, and the support bar is provided with at least one through hole in which a driving piece made of shape memory alloy is installed; one end of the driving piece is connected with the touch switch and the other end abuts against the rear shell of the mobile phone. When the temperature of the rear shell reaches an early-warning value, the driving piece is in a martensite state and turns the micro fan on through the touch switch; when the temperature falls below the early-warning value, the driving piece is in an austenite state and the micro fan is turned off. The base plate is provided with a groove matched with the first screwing piece, and the first screwing piece is located in the groove.
Another aspect of the embodiments of the present invention provides a rendering apparatus for virtual object shadow effect, including:
the acquisition module is used for acquiring attribute information in a real scene;
the determining module is used for determining the current light and shadow parameters of the real scene according to the attribute information;
the rendering module is used for rendering the shadow effect of the virtual object according to the current shadow parameters of the real scene;
and the synthesis module is used for synthesizing the virtual object with the rendered shadow effect into the real scene.
Optionally, the attribute information includes location information and time information, and the apparatus further includes: the judging module is used for judging whether the attribute information meets a preset condition or not; when the attribute information meets the preset condition, the determining module is further used for determining the orientation parameter of the sun according to the position information and the time information; and determining the current light and shadow parameters of the real scene according to the azimuth parameters of the sun.
Optionally, when the attribute information does not satisfy the preset condition, the determining module is further configured to determine a current light and shadow parameter of the real scene according to the position information.
Optionally, the rendering module further comprises: the obtaining submodule is used for obtaining the position of a virtual object in the real scene; the determining submodule is used for determining the light and shadow parameters of the virtual object according to the position of the virtual object in the real scene and the current light and shadow parameters of the real scene; and the rendering submodule is used for rendering the shadow effect of the virtual object according to the shadow parameters of the virtual object.
Optionally, the apparatus further comprises: the analysis module is used for analyzing the attribute information and determining an adjustment parameter of the light and shadow effect of the virtual object; and the adjusting module is used for adjusting the light and shadow effect of the virtual object by combining the adjusting parameters.
Another aspect of an embodiment of the present invention provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor, wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the methods for rendering virtual object shading effects described above in accordance with embodiments of the present invention.
According to the technical scheme above, the rendering method, rendering device, and electronic equipment for the virtual object light and shadow effect provided by the embodiments of the invention acquire attribute information of the real scene, determine the current light and shadow parameters of the real scene according to the attribute information, render the light and shadow effect of the virtual object according to those parameters, and synthesize the virtual object with the rendered light and shadow effect into the real scene. The embodiments make the light and shadow effect of the virtual object change synchronously with that of the real scene, giving the user a stronger sense of reality and a better experience.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in describing them are briefly introduced below. The drawings described below obviously show only some embodiments of the present invention; a person skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 is a flowchart of a rendering method for a shadow effect of a virtual object according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for rendering a shadow effect of a virtual object according to an embodiment of the present invention;
FIG. 3 is a block diagram of a rendering apparatus for virtual object shading effects according to an embodiment of the present invention;
FIG. 4 is a block diagram of a rendering apparatus for virtual object shading effects according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a hardware structure of an electronic device executing a rendering method for virtual object light and shadow effects provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of an AR helmet according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a clamping device of an AR helmet according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a locking structure of an AR helmet according to an embodiment of the present invention;
fig. 9 is a schematic structural view of a support bar of an AR helmet according to an embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the embodiments of the present invention, those solutions will be described clearly and completely below with reference to the drawings. The described embodiments are obviously only a part, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on these embodiments shall fall within the protection scope of the embodiments of the present invention.
The methods of the embodiments of the invention are executed by an electronic device, including but not limited to a mobile phone, a tablet computer, a head-mounted AR (augmented reality) device, and AR glasses. Some embodiments of the invention are described in detail below with reference to the accompanying drawings. In the absence of conflict, the embodiments described below and their features can be combined with each other.
In real life, the application range of Augmented Reality (AR) technology is becoming wider and wider. For example, the user may identify a place or an object via the electronic device and link digital information corresponding to the place or the object with a real-world scene. The digital information may be nearby places of interest, such as museums, shops, restaurants or walking routes to the next bus stop; when purchasing furniture, a consumer can use the electronic device to 'place' the selected digital edition furniture in the living room of the consumer, thereby more conveniently testing whether the size, style and color of the furniture are proper at a certain position.
Fig. 1 is a flowchart of a rendering method for a virtual object light and shadow effect according to an embodiment of the present invention. As shown in fig. 1, a method for rendering a virtual object light and shadow effect according to an embodiment of the present invention specifically includes:
s101, acquiring attribute information of a real scene;
in this step, when the user views the augmented reality small virtual object superimposed on the real scene through the electronic device, the electronic device acquires attribute information of the real scene. Specifically, the real scene includes, but is not limited to, buildings, articles, people, animals, and the like, the attribute information may include location information, time information, and the like, and the location information is not limited to a location where the real scene is located, and also includes an environment around the real scene, weather, and the like. For example, if a user wants to watch the superimposed augmented reality information of an automobile, the user can obtain whether the automobile is indoors or outdoors, whether buildings and the like are around the automobile to block light, the current weather condition and the like; the longitude and latitude of the automobile and the distance between the automobile and the horizontal plane can be acquired; the current season, time, etc. may also be obtained.
And S102, determining the current light and shadow parameters of the real scene according to the attribute information.
As described above, the attribute information may include environment information, location information, and time information, and as some optional implementation manners of the embodiment of the present invention, the step S102 may include the following sub-steps:
s1021, judging whether the attribute information meets a preset condition, and if so, executing the steps S1022 to S1023; if the preset condition is not satisfied, step S1024 is executed.
Specifically, the preset condition may be that the real scene is under the sun illumination, for example, the real scene is located outdoors and the corresponding current time is located before sunset and after sunrise, or the real scene is located indoors and can receive the sun illumination, and the invention is not limited herein.
Step S1022, determining the azimuth parameter of the sun according to the position information and the time information.
It should be noted that the solar azimuth parameters may include the solar altitude angle and the solar azimuth angle. The solar altitude angle is the angle between the incident direction of sunlight and the horizontal ground; the larger the solar altitude angle, the smaller the area a given beam heats, the more concentrated the light and heat, and the more solar radiation energy is received. The solar azimuth angle, i.e. the sun's position, is the angle between the horizontal projection of the sun's rays on the ground and the local meridian; it can be approximated as the angle between the shadow cast by a vertical pole standing in the sun and due south. The azimuth is 0 due south, negative from south through east to north, and positive from south through west to north. For example, when the sun is due east the azimuth angle is -90 degrees; in the northeast it is -135 degrees; due west it is 90 degrees, and in the northwest 135 degrees.
In this step, the longitude and latitude of the real scene's location are first determined from the position information. Because solar illumination varies by region, and the sun's track makes different angles and heights relative to the ground in different seasons and at different times of day, the sun's azimuth parameters (altitude angle and azimuth angle) must be calculated from the longitude, latitude, and current time. The altitude angle and azimuth angle can be calculated in various ways, for example by the following formulas:
sin h = sin δ · sin φ + cos δ · cos φ · cos t
where h is the solar altitude angle; t is the solar hour angle, t = (current time − 12) × 15°; δ is the solar declination, which depends on the current date; and φ is the local latitude.
cos A = (sin h · sin φ − sin δ) / (cos h · cos φ)
where A is the solar azimuth angle, h is the solar altitude angle, δ is the solar declination, and φ is the local latitude.
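As a rough illustration, the two formulas above can be evaluated directly. The sketch below assumes the solar declination for the date is supplied separately, and follows the text's sign convention (azimuth 0 due south, negative toward the east, positive toward the west); it is a simplified stand-in, not the patent's exact computation.

```python
import math

def solar_position(latitude_deg, declination_deg, local_solar_time_h):
    """Compute solar altitude and azimuth (degrees) from the two formulas
    in the text. Inputs: local latitude, solar declination for the date,
    and local solar time in hours (12 = solar noon)."""
    phi = math.radians(latitude_deg)
    delta = math.radians(declination_deg)
    # Hour angle: 15 degrees per hour away from solar noon.
    t = math.radians((local_solar_time_h - 12.0) * 15.0)

    # sin h = sin(delta)sin(phi) + cos(delta)cos(phi)cos(t)
    sin_h = math.sin(delta) * math.sin(phi) + math.cos(delta) * math.cos(phi) * math.cos(t)
    h = math.asin(max(-1.0, min(1.0, sin_h)))

    # cos A = (sin h sin(phi) - sin(delta)) / (cos h cos(phi))
    cos_a = (math.sin(h) * math.sin(phi) - math.sin(delta)) / (math.cos(h) * math.cos(phi))
    a = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    # Sign convention from the text: 0 = due south, negative toward the
    # east (morning), positive toward the west (afternoon).
    azimuth = -a if local_solar_time_h < 12.0 else a
    return math.degrees(h), azimuth
```

For example, at latitude 40° on an equinox (declination 0), the altitude at solar noon is 50° with the sun due south, and at 06:00 solar time the sun sits on the horizon due east.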
And step S1023, determining the current light and shadow parameters of the real scene according to the orientation parameters of the sun.
In this step, the light and shadow parameters may include the illumination intensity, the incident direction, the shadow position, the shadow length, and the like. Optionally, a corresponding relation table of the solar azimuth parameter and the light and shadow parameter may be pre-established, and the current light and shadow parameter of the real scene may be obtained by querying in the corresponding relation table according to the azimuth angle and the altitude angle of the sun.
Alternatively, in the table of correspondence between the solar azimuth parameters and the light and shadow parameters, the light and shadow parameters corresponding to a given solar azimuth parameter may be a range of values, since different scenes influence them: under the same solar azimuth parameters, a shop and a car have different incident intensities, shadow positions, shadow lengths, and so on. The appropriate light and shadow parameters can therefore be determined according to the specific object in the real scene.
And S1024, determining the current light and shadow parameters of the real scene according to the position information.
Specifically, the environment information fails the preset condition when the real scene is not under sunlight; such cases include, but are not limited to, the real scene being inside a movie theater, the current time being after sunset and before sunrise, or the real scene being outdoors in rainy weather.
Alternatively, various real-world modes under non-sunlight conditions may be predefined, such as a movie-theater mode, a cloudy mode, a rainy mode, and a night mode; by collecting information about light, shadow, and environment in real-world theaters and in cloudy, rainy, and night scenes, a light and shadow parameter table can be established for the different modes. In this step, the target mode corresponding to the real scene is determined according to the position information, and the light and shadow parameters corresponding to the target mode are looked up in the parameter table as the current light and shadow parameters of the real scene.
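The mode lookup described above can be sketched as a simple table keyed by scene mode. The mode names and parameter values below are illustrative assumptions for the sketch, not values specified by the patent.

```python
# Hypothetical light/shadow parameter table for non-sunlight modes; the
# modes and numbers are illustrative placeholders, not patent values.
MODE_SHADOW_PARAMS = {
    "cinema": {"intensity": 0.05, "shadow_softness": 0.90},
    "cloudy": {"intensity": 0.40, "shadow_softness": 0.80},
    "rainy":  {"intensity": 0.25, "shadow_softness": 0.85},
    "night":  {"intensity": 0.10, "shadow_softness": 0.95},
}

def current_shadow_params(mode):
    """Look up the light and shadow parameters for a predefined scene mode,
    raising KeyError for a mode the table does not know."""
    if mode not in MODE_SHADOW_PARAMS:
        raise KeyError(f"unknown scene mode: {mode}")
    return MODE_SHADOW_PARAMS[mode]
```

A production system would populate such a table from measured scene data rather than constants.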
S103, rendering the shadow effect of the virtual object according to the current shadow parameters of the real scene.
It should be noted that the virtual object is an object the user sees superimposed on the real scene through the electronic device. It may be a person, animal, article, or piece of information added to the real scene, or one replacing some part of the scene, and it may be static or dynamic; the invention is not limited herein.
In this step, the virtual object may be rendered with a light and shadow effect directly according to the light and shadow parameters (including the light intensity, the incident direction, the shadow position, the shadow length, and the like) obtained in step S102. In the actual rendering process, the light and shadow parameters can be adjusted in real time according to the position or time of the virtual object and other factors, so that a very real illumination change effect is realized.
In practice, global illumination is more consistent with the light and shadow effect in the real world. The global illumination is a light and shadow effect formed by direct illumination and indirect illumination together, the direct illumination is a light and shadow effect formed by directly irradiating light emitted from a light source on an object, and the indirect illumination is a light and shadow effect formed by reflecting light emitted from the light source on the surfaces of some objects from the surfaces of other objects. The light source may include sunlight under sunlight or an artificial light source under non-sunlight, and the invention is not limited herein. The virtual object shadow effect is rendered through global illumination, so that the virtual object can be matched with a real scene more and is more real.
Therefore, step S103 may optionally further comprise the following sub-steps:
and S1031, acquiring the position of the virtual object in the real scene.
S1032, determining the light and shadow parameters of the virtual object according to the position of the virtual object in the real scene and the current light and shadow parameters of the real scene.
S1033, rendering the shadow effect of the virtual object according to the shadow parameters of the virtual object.
Specifically, the position of the virtual object in the real scene is obtained, and the environmental conditions around that position are determined, so as to judge whether light from the light source is reflected onto the virtual object in the real scene, forming indirect illumination. If indirect illumination exists, the reflected light and shadow parameters (including, but not limited to, the intensity and direction of the reflected light) are calculated for incident light within a certain range around the direction in which the virtual object reflects light into the user's eyes, according to the user's line of sight, the normal direction of the virtual object's surface, and the current light and shadow parameters. The current light and shadow parameters of the real scene and the reflected light and shadow parameters together serve as the light and shadow parameters of the virtual object; the reflections and shadows formed on the object's surface can be determined from these parameters, baked into a texture map, and applied to the surface of the virtual object, completing the rendering of its light and shadow effect.
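As a much-simplified illustration of the indirect-illumination step, the sketch below mirror-reflects an incident light direction about a surface normal and attenuates the intensity by an assumed reflectance factor (albedo). It is a minimal stand-in for the reflected-light calculation the text describes, not the patent's actual computation.

```python
def reflected_light(incident_dir, surface_normal, incident_intensity, albedo=0.5):
    """Simplified indirect-illumination sketch: mirror-reflect the incident
    light direction about the surface normal and attenuate the intensity by
    an assumed reflectance factor. All direction vectors are unit 3-tuples;
    incident_dir points from the light toward the surface."""
    dot = sum(i * n for i, n in zip(incident_dir, surface_normal))
    # Mirror reflection: r = d - 2(d . n)n
    reflected_dir = tuple(i - 2.0 * dot * n for i, n in zip(incident_dir, surface_normal))
    # Attenuate by albedo and the cosine of the incidence angle (clamped).
    reflected_intensity = incident_intensity * albedo * max(0.0, -dot)
    return reflected_dir, reflected_intensity
```

For light arriving straight down onto an upward-facing surface, the reflection points straight back up with intensity scaled by the albedo.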
S104, synthesizing the virtual object with the rendered shadow effect into the real scene.
In this step, the virtual object with the rendered shadow effect is synthesized into a real scene and projected to the eyes of the user through the electronic device, so that the user can feel the augmented reality effect.
The embodiment of the invention obtains the attribute information in the real scene; determining the current light and shadow parameters of the real scene according to the attribute information; rendering the shadow effect of the virtual object according to the current shadow parameter of the real scene; and synthesizing the virtual object with the rendered shadow effect into the real scene. The method and the device have the advantages that the light and shadow effect of the virtual object and the light and shadow effect of the real scene are changed synchronously, the sense of reality of the user is stronger, and the user experience is better.
Fig. 2 is a flowchart of a rendering method for a shadow effect of a virtual object according to an embodiment of the present invention. As shown in fig. 2, this embodiment is a specific implementation scheme of the embodiment shown in fig. 1, and therefore details of specific implementation methods and beneficial effects of each step in the embodiment shown in fig. 1 are not described again, and the rendering method for a virtual object shadow effect provided in the embodiment of the present invention specifically includes:
S201, acquiring attribute information of a real scene.
Specifically, the attribute information may include position information and time information corresponding to the real scene.
S202, judging whether the attribute information meets a preset condition.
In this step, if the preset condition is satisfied, executing step S203-step S204; if the preset condition is not satisfied, step S205 is executed.
S203, determining the azimuth parameters of the sun according to the position information and the time information.
And S204, determining the current light and shadow parameters of the real scene according to the azimuth parameters of the sun.
S205, determining the current light and shadow parameters of the real scene according to the position information.
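As an illustration of steps S203 to S205, the sun's elevation can be approximated from the position and time information with the standard declination and hour-angle formulas, and a shadow length derived from it. The sketch below is not part of the patent; the function names and the flat-ground assumption are illustrative:

```python
import math

def solar_elevation(latitude_deg, day_of_year, solar_hour):
    """Rough solar elevation angle (degrees) from location and time,
    using the textbook declination and hour-angle formulas."""
    decl = 23.44 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
    hour_angle = 15.0 * (solar_hour - 12.0)          # degrees, negative before noon
    lat, d, h = map(math.radians, (latitude_deg, decl, hour_angle))
    sin_el = math.sin(lat) * math.sin(d) + math.cos(lat) * math.cos(d) * math.cos(h)
    return math.degrees(math.asin(sin_el))

def shadow_length(object_height, elevation_deg):
    """Length of the shadow cast on flat ground by an object of given height."""
    if elevation_deg <= 0:
        return float("inf")                          # sun below the horizon
    return object_height / math.tan(math.radians(elevation_deg))
```

At the equator around the March equinox the sun stands near the zenith at solar noon, and an elevation of 45 degrees gives a shadow as long as the object is tall.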
S206, rendering the shadow effect of the virtual object according to the current shadow parameters of the real scene.
And S207, analyzing the attribute information and determining an adjusting parameter of the light and shadow effect of the virtual object.
And S208, adjusting the light and shadow effect of the virtual object by combining the adjusting parameters.
Specifically, whether factors influencing the light and shadow effect exist is determined through analysis of the real-scene attribute information. Such factors may be of various types, for example weather and environment. For instance, the current weather conditions are obtained from the position information of the real scene; if haze is present, the illumination intensity value is correspondingly reduced and the illumination effect of the virtual object changes accordingly. Likewise, the environment around the virtual object is obtained from the position information of the real scene; if objects (including buildings, articles, people, and the like) block the light falling on the virtual object or affect its shadow, the illumination intensity value is correspondingly reduced, and the shadow length, shadow area, and so on change as well.
After the factors influencing the light and shadow effect of the virtual object are determined, their specific influence, such as on the illumination intensity, shadow length, and shadow area, can be derived from the factor type. At the same time, the specific parameters of each factor (such as the haze level, or the height and position of an occluding object) can be obtained to quantify its degree of influence, from which the adjustment parameters are calculated. Optionally, an adjustment parameter may be an adjustment ratio or a specific adjustment value, which is not limited in the present invention. Adjusting the light and shadow effect further enhances the sense of reality of the virtual object in the real scene, so that the user perceives the virtual object as part of the real world.
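A minimal sketch of how such adjustment parameters might be applied as multiplicative ratios; the weighting of haze and occlusion and the parameter names are illustrative assumptions, not values specified by the patent:

```python
def adjust_light_params(light_params, haze_level=0.0, occlusion_ratio=0.0):
    """Scale rendered light/shadow parameters by environmental factors.
    haze_level and occlusion_ratio are both in [0, 1]; 0 means no effect."""
    # Illustrative weights: heavy haze cuts intensity by up to 60 percent,
    # occlusion scales it down proportionally.
    intensity_scale = (1.0 - 0.6 * haze_level) * (1.0 - occlusion_ratio)
    adjusted = dict(light_params)
    adjusted["illumination_intensity"] = light_params["illumination_intensity"] * intensity_scale
    # Weaker light produces a softer, lighter shadow.
    adjusted["shadow_opacity"] = light_params["shadow_opacity"] * intensity_scale
    return adjusted
```

With no haze and no occlusion the parameters pass through unchanged, matching the optional nature of steps S207 and S208.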
S209, synthesizing the virtual object with the rendered shadow effect into the real scene.
It should be noted that step S207 and step S208 are optional steps, and if steps S207-S208 are not executed, the virtual object with the rendered light and shadow effect in step S206 may be directly synthesized into the real scene; if steps S207-S208 are executed, the rendered virtual object with adjusted light and shadow effect is synthesized into the real scene.
In the embodiment of the invention, attribute information of the real scene is acquired; the current light and shadow parameters of the real scene are determined according to the attribute information; the light and shadow effect of the virtual object is rendered according to those parameters; and the virtual object with the rendered effect is synthesized into the real scene. The light and shadow effect of the virtual object thus changes synchronously with that of the real scene, while the influence of interfering factors in the real scene (such as pollution conditions and the surrounding environment) on the light and shadow effect is also taken into account, so the user's sense of reality is stronger and the user experience is better.
Fig. 3 is a structural diagram of a rendering apparatus for virtual object shadow effect according to an embodiment of the present invention. As shown in fig. 3, the apparatus specifically includes: an acquisition module 1000, a determining module 3000, a rendering module 4000, and a synthesis module 7000.
The acquisition module is used for acquiring attribute information in a real scene; the determining module is used for determining the current light and shadow parameters of the real scene according to the attribute information; the rendering module is used for rendering the shadow effect of the virtual object according to the current shadow parameters of the real scene; the synthesis module is used for synthesizing the virtual object with the rendered shadow effect into the real scene.
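The four modules of fig. 3 can be sketched as a simple pipeline. The class below is an illustrative assumption rather than the patent's implementation; each stage is injected as a callable so the sketch stays backend-agnostic:

```python
class ShadowEffectRenderer:
    """Minimal pipeline mirroring the four modules of fig. 3."""

    def __init__(self, acquire, determine, render, compose):
        self.acquire = acquire      # acquisition module: real-scene attribute info
        self.determine = determine  # determining module: attributes -> light params
        self.render = render        # rendering module: light params -> shaded object
        self.compose = compose      # synthesis module: object + scene -> AR frame

    def run(self, scene, virtual_object):
        attrs = self.acquire(scene)
        light_params = self.determine(attrs)
        shaded = self.render(virtual_object, light_params)
        return self.compose(scene, shaded)
```

A concrete backend would supply, for example, a camera-based `acquire` and a GPU `render`; here any four callables with matching signatures fit.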
The rendering apparatus for virtual object light and shadow effect provided in the embodiment of the present invention is specifically configured to execute the method provided in the embodiment shown in fig. 1, and the implementation principle, the method, the function and the like of the rendering apparatus are similar to those of the embodiment shown in fig. 1, and are not described herein again.
Fig. 4 is a structural diagram of a rendering apparatus for virtual object shadow effect according to an embodiment of the present invention. As shown in fig. 4, the apparatus specifically includes: the system comprises an acquisition module 1000, a judgment module 2000, a determination module 3000, a rendering module 4000, an analysis module 5000, an adjustment module 6000 and a synthesis module 7000.
The acquisition module is used for acquiring attribute information in a real scene; the judging module is used for judging whether the attribute information meets a preset condition or not; the determining module is used for determining the current light and shadow parameters of the real scene according to the attribute information; the rendering module is used for rendering the shadow effect of the virtual object according to the current shadow parameters of the real scene; the analysis module is used for analyzing the attribute information and determining an adjustment parameter of the light and shadow effect of the virtual object; the adjusting module is used for adjusting the light and shadow effect of the virtual object by combining the adjusting parameters; the synthesis module is used for synthesizing the virtual object with the rendered shadow effect into the real scene.
Optionally, when the attribute information meets the preset condition, the determining module is further configured to determine an azimuth parameter of the sun according to the position information and the time information; and determining the current light and shadow parameters of the real scene according to the azimuth parameters of the sun.
Optionally, when the attribute information does not satisfy the preset condition, the determining module is further configured to determine a current light and shadow parameter of the real scene according to the position information.
Optionally, the rendering module further comprises: an obtaining submodule 4100 configured to obtain a position of a virtual object in the real scene; a determining sub-module 4200, configured to determine a light and shadow parameter of the virtual object according to a position of the virtual object in the real scene and a current light and shadow parameter of the real scene; and the rendering submodule 4300 is configured to render the shadow effect of the virtual object according to the shadow parameter of the virtual object.
The rendering apparatus for virtual object light and shadow effect provided in the embodiment of the present invention is specifically configured to execute the method provided in the embodiment shown in fig. 1 and/or fig. 2, and the implementation principle, method, and functional purpose thereof are similar to those of the embodiment shown in fig. 2, and are not described herein again.
The rendering apparatus for virtual object shadow effect according to the embodiments of the present invention may be independently disposed in the electronic device as one of software or hardware functional units, or may be integrated in a processor as one of functional modules to execute the rendering method for virtual object shadow effect according to the embodiments of the present invention.
Fig. 5 is a schematic diagram of a hardware structure of an electronic device executing the rendering method for virtual object light and shadow effects according to the embodiment of the present invention. As shown in fig. 5, the electronic device includes:
one or more processors 5100 and a memory 5200; one processor 5100 is taken as an example in fig. 5.
The apparatus for performing the method for rendering the shadow effect of the virtual object may further include: an input device 5300 and an output device 5400.
The processor 5100, the memory 5200, the input device 5300, and the output device 5400 may be connected by a bus or other means, and the bus connection is exemplified in fig. 5.
The memory 5200, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the rendering method of the virtual object shadow effect in the embodiment of the present invention. By running the non-volatile software programs, instructions, and modules stored in the memory 5200, the processor 5100 executes the various functional applications and data processing of the server, that is, implements the rendering method of the virtual object light and shadow effect.
The memory 5200 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required for at least one function, and the data storage area may store data created through use of the rendering apparatus for virtual object shading effects provided according to an embodiment of the present invention, and the like. In addition, the memory 5200 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 5200 optionally includes memory located remotely from the processor 5100; such remote memory may be connected to the rendering apparatus for virtual object shadow effects via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 5300 may receive input of numeric or character information and generate key signal inputs related to user settings and function control of the rendering device for the light and shadow effects of the virtual object. The input device 5300 may include a pressing module or the like.
The one or more modules are stored in the memory 5200 and, when executed by the one or more processors 5100, perform a rendering method of the virtual object light and shadow effects.
The electronic device of embodiments of the present invention exists in a variety of forms, including but not limited to:
(1) Mobile communication devices, which are characterized by mobile communication capability and are primarily targeted at providing voice and data communication. Such terminals include smart phones (e.g., iPhones), multimedia phones, feature phones, and low-end phones.
(2) Ultra-mobile personal computer devices, which belong to the category of personal computers, have computing and processing functions, and generally also have mobile internet access. Such terminals include PDA, MID, and UMPC devices, such as iPads.
(3) Portable entertainment devices, which can display and play multimedia content. Such devices include audio and video players (e.g., iPods), handheld game consoles, electronic book readers, smart toys, and portable car navigation devices.
(4) Servers, which have an architecture similar to that of general-purpose computers but, because they must provide highly reliable services, have higher requirements on processing capability, stability, reliability, security, scalability, manageability, and the like.
(5) And other electronic devices with data interaction functions.
The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The embodiment of the present invention provides a non-transitory computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions, and when the computer-executable instructions are executed by an electronic device, the electronic device is caused to execute a method for rendering a virtual object shadow effect in any method embodiment described above.
An embodiment of the present invention provides a computer program product, where the computer program product includes a computer program stored on a non-transitory computer readable storage medium, and the computer program includes program instructions, where the program instructions, when executed by an electronic device, cause the electronic device to perform the method for rendering the light and shadow effect of the virtual object in any of the above-mentioned method embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions and/or portions thereof that contribute to the prior art may be embodied in the form of a software product that can be stored on a computer-readable storage medium including any mechanism for storing or transmitting information in a form readable by a computer (e.g., a computer). For example, a machine-readable medium includes Read Only Memory (ROM), Random Access Memory (RAM), magnetic disk storage media, optical storage media, flash memory storage media, electrical, optical, acoustical or other form of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.), and others, and the computer software product includes instructions for causing a computing device (which may be a personal computer, server, or network device, etc.) to perform the methods described in the various embodiments or portions of the embodiments.
In another embodiment, fig. 6 shows an AR helmet as an implementation device for the above rendering method of the virtual object light and shadow effect. The AR helmet includes a clamping part 1, a lens part 2, and a head-mounted part 3. The clamping part 1 includes a base 101, a base plate 102, and an inner frame 103; the base plate 102 and the inner frame 103 are both vertically mounted on the base 101, the base plate 102 is a plate-shaped structure, and the inner frame 103 is a frame structure adapted to the lens part. The base plate 102 and the inner frame 103 stand front and rear on the base 101, that is, the inner frame 103 is on the side close to the lens part 2 and the base plate 102 on the side away from the lens part 2, and an electronic device such as a mobile phone is mounted between the base plate 102 and the inner frame 103.
Another improvement of this embodiment is shown in conjunction with figs. 7 and 8: a clamping device 4 for clamping the mobile phone is arranged on the base plate 102. The clamping device 4 comprises a mounting hole 401, a mounting cover 402, a first bolt 403, a guide sleeve 404, a guide pin 405, and other structures. The mounting hole 401 has a first end far away from the inner frame 103 and a second end close to it; specifically, the mounting hole 401 comprises a first section and a second section which are adjacent, the inner diameter of the first section being smaller than that of the second section. The mounting cover 402 is mounted on the outer end of the second section, an adjusting ring 407 is mounted at the end of the second section close to the first section, and a limiting flange 408, which cooperates with the adjusting ring 407 to limit the moving stroke of the guide sleeve, is arranged at the inner end of the guide sleeve 404.
The first end is provided with the mounting cover 402, the mounting cover 402 is provided with a shaft hole 4021, and the first bolt 403 is mounted on the mounting cover 402 through the shaft hole 4021. The outer end of the first bolt 403 is connected with a first screwing piece 406, and its inner end is in threaded connection with the inner end of the guide sleeve 404 mounted in the mounting hole 401. The outer end of the guide sleeve 404 is provided with a pressing end 4041 for pressing against the mobile phone, the outer wall of the guide sleeve 404 is provided with a groove (not shown) cooperating with the guide pin 405 in the horizontal direction, one end of the guide pin 405 is mounted on the inner wall of the mounting hole 401, and the other end of the guide pin 405 is mounted in the groove. When a user rotates the first screwing piece 406, the first bolt 403 is driven to rotate, which drives the guide sleeve 404 forward or backward; because of the guide pin, the guide sleeve can only translate, so the pressing end 4041 is pressed onto the mobile phone and the inner frame 103. This process yields a slow, controllable advance of the pressing end with adjustable pressing force, avoiding damage to the rear shell of the mobile phone. The phone is fixed through the point contact of the supporting end, which is superior to the clamping-plate or face-shell fixing of the prior art, does not affect the heat dissipation of the phone, adapts well, and suits mobile phones of various screen sizes and thicknesses.
The applicant finds that some mobile phones do not provide functions for switching the playing program or adjusting the sound in an AR scene, so most users can only take the phone out of the clamping mechanism when they need to switch playback or adjust sound and picture. The applicant therefore designs the clamping part 1 and the lens part 2 to be in sliding fit. Specifically, the lens part 2 is provided with a mounting plate 201, the clamping part 1 is mounted on the mounting plate 201, and the mounting plate 201 is provided with a plurality of rollers 2011 at uniform intervals along its width direction. With the clamping part and the lens part in sliding fit, the phone can be slid out when it needs to be operated and the clamping part pushed back to its original position for viewing afterwards, which is convenient and fast.
Referring to fig. 8, in this embodiment a locking structure 104 capable of locking the guide sleeve and the rollers is further disposed on the clamping part 1; the locking structure 104 not only prevents the first bolt from backing off but also locks the sliding fit between the clamping part 1 and the lens part 2. Specifically, the locking structure 104 of this embodiment includes a return spring 1041, and a sleeve 1042 and a threaded sleeve 1043 which are bilaterally symmetric about the guide sleeve 404 and disposed below it. The upper part of the inner ends of the sleeve 1042 and the threaded sleeve 1043 has a first locking portion 1044 matched in size with the outer wall of the lower part of the guide sleeve, and the lower part of the inner ends has a second locking portion 1045 matched in size with the rollers 2011. The inner end of the sleeve 1042 is provided with a first spring slot 1046 and the inner end of the threaded sleeve 1043 with a second spring slot 1047; one end of the return spring 1041 is mounted in the first spring slot 1046 and the other end in the second spring slot 1047. A second bolt 1048 is mounted in the sleeve 1042 and the threaded sleeve 1043, which are connected by the second bolt 1048 and a locking nut 1049 matched with it, and at least one end of the second bolt 1048 is provided with a second screwing piece. The locking structure 104 can thus fix the guide sleeve 404 and also lock the sliding fit of the clamping part 1 and the lens part 2, achieving multiple functions with one simplified structure.
In addition, the applicant also finds that most existing AR helmets either have no heat dissipation structure for the mobile phone, or dissipate its heat through a complex arrangement of temperature sensors, controllers, and the like, which is structurally complicated, costly to manufacture, greatly increases the size of the helmet, and prevents a lightweight design. The applicant therefore improves on this basis. Referring to fig. 9, in this embodiment a plurality of supporting bars 5 parallel to the rear case of the mobile phone extend from the pressing end 4041, a supporting point 501 connected to the rear case is provided at the end of each supporting bar 5, and a micro fan 6 provided with a touch switch (not shown in the figure) is installed on the supporting bar 5. At least one through hole 502 is provided in the supporting bar 5, and a driving member 503 made of a shape memory alloy is installed in the through hole 502; one end of the driving member 503 is connected to the touch switch and the other end abuts against the rear case. The driving member 503 is in a martensite state when the temperature of the rear case reaches an early-warning value, turning the micro fan on through the touch switch, and is in an austenite state when the temperature falls below that value, turning the fan off. Using the shape change of the shape memory alloy under temperature change to switch the micro fan on and off gives higher precision, helps cool the phone, avoids wear on the phone, requires no control circuitry, simplifies the cooling structure, and reduces production cost and installation space.
In addition, a groove matched with the first screwing piece can be arranged on the base plate 102, with the first screwing piece 406 located in the groove. Recessing the screwing piece in the groove keeps the outer surface of the base plate flat, simplifying the appearance.
A smartphone is arranged in the lens part of the AR helmet. The smartphone acquires attribute information of the real scene through its own camera device, determines the current light and shadow parameters of the real scene according to the attribute information, renders the light and shadow effect of the virtual object according to the current light and shadow parameters of the real scene, and synthesizes the virtual object with the rendered light and shadow effect into the real scene.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the embodiments of the present invention, and not to limit the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. A rendering method of a virtual object shadow effect applied to augmented reality is characterized by comprising the following steps:
acquiring attribute information in a real scene;
determining the current light and shadow parameters of the real scene according to the attribute information;
rendering the shadow effect of the virtual object according to the current shadow parameter of the real scene;
synthesizing the virtual object with the rendered shadow effect into the real scene;
the method is applied to an AR helmet which comprises a clamping part, a lens part and a head-wearing part,
the clamping part comprises a base, a base plate and an inner frame, the base plate and the inner frame are both arranged on the base, the inner frame is arranged on one side close to the lens part, the base plate is arranged on one side far away from the lens part, a clamping device is arranged on the base plate and comprises an installation hole, an installation cover, a first bolt, a guide sleeve and a guide pin, the installation cover, the first bolt, the guide sleeve and the guide pin are arranged in the installation hole, the installation hole comprises a first section and a second section which are adjacent, the inner diameter of the first section is smaller than that of the second section, the installation cover is arranged on the outer end of the second section, the end part of the second section close to the first section is provided with an adjusting ring, the inner end of the guide sleeve is provided with a limit flange which is matched with the adjusting ring and limits the moving stroke of the guide sleeve, and the installation cover is provided with a shaft hole, the first bolt is installed on the installation cover through the shaft hole, the outer end part of the first bolt is connected with a first screwing piece, the inner end part of the first bolt is in threaded connection with the inner end part of a guide sleeve installed in the installation hole, the outer end part of the guide sleeve is provided with a pressing end for pressing a mobile phone, the outer wall of the guide sleeve is provided with a groove matched with the guide pin along the horizontal direction, one end of the guide pin is installed on the inner wall of the installation hole, and the other end of the guide pin is installed in the groove;
the mobile phone acquires attribute information in a real scene through a camera device carried by the mobile phone, determines a current light and shadow parameter of the real scene according to the attribute information, renders a light and shadow effect of a virtual object according to the current light and shadow parameter of the real scene, and synthesizes the virtual object with the rendered light and shadow effect into the real scene.
2. The method according to claim 1, wherein the attribute information comprises location information and time information, and before determining the current light and shadow parameters of the real scene according to the attribute information, the method further comprises:
judging whether the attribute information meets a preset condition or not;
when the attribute information meets the preset condition, determining the current light and shadow parameters of the real scene according to the attribute information comprises:
determining the azimuth parameters of the sun according to the position information and the time information;
and determining the current light and shadow parameters of the real scene according to the azimuth parameters of the sun.
3. The method according to claim 2, wherein when the attribute information does not satisfy the preset condition, the determining, according to the attribute information, the current lighting parameter of the real scene comprises:
and determining the current light and shadow parameters of the real scene according to the position information.
4. The method of claim 1, wherein rendering the shadow effect of the virtual object according to the current shadow parameters of the real scene comprises:
acquiring the position of a virtual object in the real scene;
determining the light and shadow parameters of the virtual object according to the position of the virtual object in the real scene and the current light and shadow parameters of the real scene;
and rendering the shadow effect of the virtual object according to the shadow parameters of the virtual object.
5. The method according to any of claims 1-4, wherein after said rendering of the shadow effect of the virtual object according to the current shadow parameters of the real scene, the method further comprises:
analyzing the attribute information, and determining an adjustment parameter of the light and shadow effect of the virtual object;
and adjusting the light and shadow effect of the virtual object by combining the adjusting parameters.
6. The method of claim 1, wherein the holder of the AR helmet is slidably engaged with the lens, the lens being provided with a mounting plate, the holder being mounted on the mounting plate, the mounting plate being provided with a plurality of rollers at uniform intervals along a width thereof, the holder having a locking structure for locking the guide sleeve and the rollers.
7. The method of claim 6, wherein the locking structure of the AR helmet comprises a return spring and a sleeve and a threaded sleeve that are bilaterally symmetric about and disposed below a guide sleeve, the upper parts of the inner ends of the sleeve and the threaded sleeve are provided with first locking parts matched with the outer wall of the lower part of the guide sleeve in size, the lower parts of the inner ends of the sleeve and the thread sleeve are provided with second locking parts matched with the size of the roller, the inner end of the sleeve is provided with a first spring groove, the inner end of the threaded sleeve is provided with a second spring groove, one end of the return spring is arranged in the first spring groove, the other end of the return spring is arranged in the second spring groove, the sleeve and the threaded sleeve are internally provided with a second bolt, the sleeve and the threaded sleeve are connected through the second bolt and a locking nut matched with the second bolt, and at least one end part of the second bolt is provided with a second screwing piece.
8. The method as claimed in claim 1, wherein the pressing end of the AR helmet is extended with a plurality of support bars, the end of each support bar is provided with a support point connected with the rear shell of the mobile phone, the support bar is provided with a micro fan, the micro fan is provided with a touch switch, the support bar is provided with at least one through hole, a driving member made of shape memory alloy is installed in the through hole, one end of the driving member is connected with the touch switch, the other end of the driving member abuts against the rear shell of the mobile phone, the driving member is in a martensite state when the temperature of the rear shell of the mobile phone reaches an early warning value, the micro fan is turned on through the touch switch, the driving member is in an austenite state when the temperature of the rear shell of the mobile phone is lower than the early warning value, and the micro fan is turned off through the touch switch; the base plate is provided with a groove matched with the first screwing piece, and the first screwing piece is located in the groove.
CN201711075801.4A 2017-11-06 2017-11-06 Rendering method and device for virtual object shadow effect applied to augmented reality Active CN107705353B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711075801.4A CN107705353B (en) 2017-11-06 2017-11-06 Rendering method and device for virtual object shadow effect applied to augmented reality

Publications (2)

Publication Number Publication Date
CN107705353A CN107705353A (en) 2018-02-16
CN107705353B true CN107705353B (en) 2020-02-11

Family

ID=61178170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711075801.4A Active CN107705353B (en) 2017-11-06 2017-11-06 Rendering method and device for virtual object shadow effect applied to augmented reality

Country Status (1)

Country Link
CN (1) CN107705353B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020019132A1 (en) * 2018-07-23 2020-01-30 太平洋未来科技(深圳)有限公司 Method and apparatus for rendering virtual object on the basis of light information, and electronic device
CN110166760A (en) * 2019-05-27 2019-08-23 浙江开奇科技有限公司 Image treatment method and terminal device based on panoramic video image
CN111127624A (en) * 2019-12-27 2020-05-08 珠海金山网络游戏科技有限公司 Illumination rendering method and device based on AR scene
CN111833423A (en) * 2020-06-30 2020-10-27 北京市商汤科技开发有限公司 Presentation method, presentation device, presentation equipment and computer-readable storage medium
CN111862866B (en) * 2020-07-09 2022-06-03 北京市商汤科技开发有限公司 Image display method, device, equipment and computer readable storage medium
CN112509151B (en) * 2020-12-11 2021-08-24 华中师范大学 Method for generating sense of reality of virtual object in teaching scene
CN113223139A (en) * 2021-05-26 2021-08-06 深圳市商汤科技有限公司 Augmented reality shadow estimation method and device and computer storage medium
CN114385289B (en) * 2021-12-23 2024-01-23 北京字跳网络技术有限公司 Rendering display method and device, computer equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103500465A (en) * 2013-09-13 2014-01-08 西安工程大学 Ancient cultural relic scene fast rendering method based on augmented reality technology
GB2516242A (en) * 2013-07-15 2015-01-21 Martin Leonardo Moreno Ten Head mounted display
CN105488844A (en) * 2015-11-19 2016-04-13 中国电子科技集团公司第二十八研究所 Method for displaying real-time shadow of massive models in three-dimensional scene
CN105844714A (en) * 2016-04-12 2016-08-10 广州凡拓数字创意科技股份有限公司 Augmented reality based scenario display method and system
CN106526860A (en) * 2016-12-15 2017-03-22 华勤通讯技术有限公司 Head-mounted AR device
CN107134005A (en) * 2017-05-04 2017-09-05 网易(杭州)网络有限公司 Illumination adaptation method, device, storage medium, processor and terminal
CN107209385A (en) * 2015-01-20 2017-09-26 微软技术许可有限责任公司 Head mounted display equipment with protection goggles
CN107291225A (en) * 2017-06-11 2017-10-24 王天龙 Head-mounted virtual reality/augmented reality device based on a mobile phone

Similar Documents

Publication Publication Date Title
CN107705353B (en) Rendering method and device for virtual object shadow effect applied to augmented reality
CN107845132B (en) Rendering method and device for color effect of virtual object
CN107749076B (en) Method and device for generating real illumination in augmented reality scene
CN107871339B (en) Rendering method and device for color effect of virtual object in video
CN107749075B (en) Method and device for generating shadow effect of virtual object in video
US10229544B2 (en) Constructing augmented reality environment with pre-computed lighting
US10559121B1 (en) Infrared reflectivity determinations for augmented reality rendering
US10395421B2 (en) Surround ambient light sensing, processing and adjustment
CN108391445B (en) Virtual reality display method and terminal
US10777010B1 (en) Dynamic environment mapping for augmented reality
KR20170052635A (en) Physically interactive manifestation of a volumetric space
US10762697B1 (en) Directional occlusion methods and systems for shading a virtual object rendered in a three-dimensional scene
CN114125310B (en) Photographing method, terminal device and cloud server
JP2013196616A (en) Information terminal device and information processing method
CN111179436A (en) Mixed reality interaction system based on high-precision positioning technology
CN114387445A (en) Object key point identification method and device, electronic equipment and storage medium
JP2013149029A (en) Information processor, information processing method
CN109118571A (en) Method, apparatus and electronic equipment based on light information rendering virtual objects
CN107728787B (en) Information display method and device in panoramic video
WO2018155235A1 (en) Control device, control method, program, and projection system
CN115761105A (en) Illumination rendering method and device, electronic equipment and storage medium
KR20150071595A (en) Constructing augmented reality environment with pre-computed lighting
CN112257653A (en) Method and device for determining space decoration effect graph, storage medium and electronic equipment
CN202661768U (en) Mobile multimedia ground projection interactive terminal
Wang et al. The Study of Virtual Reality Scene Making in Digital Station Management Application System Based on Unity3D

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant