CN111462339B - Display method and device in augmented reality, medium and electronic equipment - Google Patents

Display method and device in augmented reality, medium and electronic equipment

Info

Publication number
CN111462339B
CN111462339B (granted publication of application CN202010239234.7A)
Authority
CN
China
Prior art keywords
entity
target
scene
virtual target
anchor point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010239234.7A
Other languages
Chinese (zh)
Other versions
CN111462339A (en)
Inventor
于茜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202010239234.7A priority Critical patent/CN111462339B/en
Publication of CN111462339A publication Critical patent/CN111462339A/en
Application granted granted Critical
Publication of CN111462339B publication Critical patent/CN111462339B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to the technical field of computers, and provides a display method and device in augmented reality, a computer-readable storage medium and electronic equipment. The method comprises the following steps: acquiring the physical scene of the terminal device under the current view angle, and determining a target surface meeting a preset condition in the physical scene; determining an anchor point of the virtual target according to the target surface, wherein the anchor point is used for positioning the virtual target in the physical scene so that the virtual target is at least partially occluded by an entity in the physical scene under the current view angle; and rendering the virtual target in the physical scene based on the relative positional relationship between the terminal device and the anchor point. With this technical solution, the virtual target can be visually hidden from the player, which increases the interest of the game and at the same time improves the player's immersion and sense of involvement in the game.

Description

Display method and device in augmented reality, medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to a display method in augmented reality, a display device in augmented reality, a computer-readable storage medium, and an electronic apparatus.
Background
In a target-seeking game based on a terminal device, for example a game in which a player searches the game scene for a certain virtual target such as a box or a red packet, the existing processing scheme generally creates the virtual target model directly within the player's visible range. As a result, the player can usually determine the location of the virtual target easily and does not experience the enjoyment of finding it. It can be seen that the interest of the game scenes created by the existing scheme needs to be improved, so as to enhance the player's sense of involvement in the game.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The disclosure aims to provide a display method and device in augmented reality, a computer-readable storage medium and an electronic device, so as to improve the interest of a game at least to a certain extent and further improve the player's sense of involvement in the game.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a display method in augmented reality, including:
acquiring an entity scene of the terminal equipment under the current view angle, and determining a target surface meeting preset conditions in the entity scene; determining an anchor point of a virtual target according to the target surface, wherein the anchor point is used for positioning the virtual target in the entity scene so that the virtual target is at least partially blocked by an entity in the entity scene under the current view angle; and rendering the virtual target in the physical scene based on the relative position relation between the terminal equipment and the anchor point.
In some embodiments of the present disclosure, based on the foregoing,
the obtaining the entity scene of the terminal device under the current view angle includes: acquiring an entity under a current view angle according to a camera component of the terminal equipment to obtain an entity scene under the current view angle;
the determining the target surface meeting the preset condition in the entity scene includes: and based on a pre-trained scene recognition machine learning model, acquiring a solid surface with a plane area larger than a preset threshold value from the solid scene as the target surface.
In some embodiments of the present disclosure, based on the foregoing solution, determining the anchor point of the virtual target according to the target plane includes:
acquiring an endpoint set of the boundary line between the target surface and a base reference surface in the entity scene; and acquiring adjacent entity surfaces that are in contact with the target surface at the endpoints of the endpoint set, and screening the endpoint set to obtain the anchor point according to the positional relationship among the adjacent entity surfaces, the target surface and the terminal device.
In some embodiments of the present disclosure, based on the foregoing solution, the rendering the virtual target in the physical scene based on a relative positional relationship between the terminal device and the anchor point includes:
creating the virtual target in the target direction of the anchor point, wherein the target direction is a direction away from the terminal equipment; and rendering the target surface by adopting a first priority, and rendering the virtual target by adopting a second priority lower than the first priority, so that the virtual target is at least partially blocked by the entity in the entity scene under the current view angle after rendering.
In some embodiments of the present disclosure, based on the foregoing solution, after rendering the virtual target in the physical scene based on the relative positional relationship between the terminal device and the anchor point, the method further includes:
acquiring an adjacent entity surface that is in contact with the target surface at the anchor point, and acquiring position information of the adjacent entity surface in the entity scene to obtain environment information of the virtual target; and adjusting the placement angle of the virtual target according to the environment information so that the virtual target is not cut through by an entity in the real scene.
In some embodiments of the present disclosure, based on the foregoing solution, the rendering the virtual target in the physical scene based on a relative positional relationship between the terminal device and the anchor point includes:
and acquiring angle values among the terminal equipment, the anchor point and the target surface for multiple times, and adjusting the rendering area of the virtual target along with the change of the angle values.
In some embodiments of the disclosure, based on the foregoing solution, the adjusting the rendering area of the virtual target according to the change of the angle value includes:
in response to the angle value increasing, increasing the rendering area of the virtual target; or, in response to the angle value decreasing, decreasing the rendering area of the virtual target.
According to a second aspect of the present disclosure, there is provided a display device in augmented reality, the device comprising: the system comprises a target surface acquisition module, an anchor point determination module and a rendering module. Wherein:
the target surface acquisition module is configured to acquire an entity scene of the terminal equipment under the current view angle, and determine a target surface meeting preset conditions in the entity scene; the anchor point determining module is configured to determine an anchor point of a virtual target according to the target surface, and the anchor point is used for positioning the virtual target in the entity scene so that the virtual target is at least partially blocked by an entity in the entity scene under the current view angle; and the rendering module is configured to render the virtual target in the physical scene based on the relative position relation between the terminal equipment and the anchor point.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a display method in augmented reality as described in the first aspect of the above embodiments.
According to a fourth aspect of embodiments of the present disclosure, there is provided an electronic device, comprising: a processor; and storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the display method in augmented reality as described in the first aspect of the embodiments described above.
As can be seen from the above technical solutions, the display method in augmented reality, the display device in augmented reality, and the computer-readable storage medium and the electronic device for implementing the display method in augmented reality in the exemplary embodiments of the present disclosure have at least the following advantages and positive effects:
in the technical solutions provided by some embodiments of the present disclosure, the physical scene of the terminal device under the current view angle is obtained, a target surface meeting a preset condition is determined in the physical scene, and an anchor point of the virtual target is determined according to the target surface so as to position the virtual target. Further, the target surface and the virtual target are rendered based on the relative positional relationship between the terminal device and the anchor point, so that the virtual target is constructed and rendered in the physical scene. On the one hand, the solution first determines the target surface and positions the virtual target in the physical scene based on it, thereby visually hiding the virtual target from the player and increasing the interest of the game. On the other hand, the solution renders the target surface and the virtual target based on the relative positional relationship between the terminal device and the anchor point, so that the virtual target is rendered in the physical scene and is at least partially occluded by an entity in the physical scene under the current view angle. The position of the terminal device, i.e. the player, in the game scene is thus taken into account in rendering. Since the player's position is considered during the construction of the game scene, the player's immersion and sense of involvement in the game are improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
FIG. 1 shows a flow diagram of a display method in augmented reality in an exemplary embodiment of the present disclosure;
FIG. 2 illustrates a schematic diagram of a build scenario of a terminal device game scenario in an exemplary embodiment of the present disclosure;
FIG. 3 illustrates a flow diagram of a method of anchor point determination in an exemplary embodiment of the present disclosure;
FIG. 4 illustrates a schematic diagram of a build scene of a game scene in an exemplary embodiment of the present disclosure;
FIG. 5 illustrates a flow diagram of a rendering method in an exemplary embodiment in accordance with the present disclosure;
FIG. 6a illustrates a schematic plan view of a virtual target creation location in an exemplary embodiment of the present disclosure;
FIG. 6b illustrates a schematic perspective view of a virtual target creation location in an exemplary embodiment of the present disclosure;
FIG. 7 illustrates a schematic diagram of a build scene of a game scene in another exemplary embodiment of the present disclosure;
fig. 8 is a schematic diagram showing the structure of a display device in augmented reality in an exemplary embodiment of the present disclosure;
FIG. 9 illustrates a schematic diagram of a computer storage medium in an exemplary embodiment of the present disclosure; and
fig. 10 illustrates a schematic structure of an electronic device in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the aspects of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
The terms "a," "an," "the," and "said" are used in this specification to denote the presence of one or more elements/components/etc.; the terms "comprising" and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. in addition to the listed elements/components/etc.; the terms "first" and "second" and the like are used merely as labels, and are not intended to limit the number of their objects.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities.
In an embodiment of the present disclosure, a display method in augmented reality is provided first, which overcomes, at least to some extent, the drawbacks existing in the prior art. The technical solution of the method embodiment of the present disclosure is described in detail below:
fig. 1 shows a flow diagram of a display method in augmented reality in an exemplary embodiment of the present disclosure. Specifically, referring to fig. 1, the method shown in this embodiment includes:
Step S110, acquiring an entity scene of the terminal equipment under the current view angle, and determining a target surface meeting preset conditions in the entity scene;
step S120, determining an anchor point of a virtual target according to the target surface, wherein the anchor point is used for positioning the virtual target in the entity scene so that the virtual target is at least partially blocked by an entity in the entity scene under the current view angle; and
step S130, rendering the virtual target in the physical scene based on the relative positional relationship between the terminal device and the anchor point.
In the technical solution provided by the embodiment shown in fig. 1, on the one hand, the target surface is determined first and the virtual target is positioned in the physical scene based on the target surface, so that the virtual target is visually hidden from the player, which helps increase the interest of the game. On the other hand, the target surface and the virtual target are rendered based on the relative positional relationship between the terminal device and the anchor point, so that the virtual target is rendered in the physical scene and is at least partially occluded by an entity in the physical scene under the current view angle. The position of the terminal device, i.e. the player, in the game scene is thus taken into account in rendering. Since the player's position is considered during the construction of the game scene, the player's immersion and sense of involvement in the game are improved.
The application scenario of this technical solution can be an augmented reality game, in which the terminal device, such as a mobile phone, comprises a camera component and a display. For example, the target game scene constructed by the solution can be shown on the display, and the player can then participate in the game through the picture presented on the display.
Specifically, the following describes in detail the implementation of each step in the embodiment shown in fig. 1:
In step S110, the terminal device acquires the physical scene under the current view angle. By way of example, referring to the schematic diagram of the game scenario shown in fig. 2, after a player enters a room with the terminal device, the various entities under the current viewing angle may be acquired in real time through the camera component of the terminal device; for example, the room includes at least the following entities: a television, a tea table, a sofa, walls, etc. These entities are displayed on the display of the terminal device to determine the physical scene. Note that the "physical scene" described in this embodiment does not include the virtual target 22 or the virtual target 23; the virtual targets are placed into the physical scene by the further technical solution described below, yielding the game scene finally provided for the player to play.
Further, after determining the entity scene, determining a target surface meeting preset conditions in the entity scene.
In an exemplary embodiment, based on a pre-trained scene recognition machine learning model, a solid surface whose planar area is greater than a preset threshold is identified in the physical scene as the target surface. The machine learning model for scene recognition may be a decision tree model, a Bayesian model, a K-nearest neighbor (kNN) classification model, a random forest model or a support vector machine (SVM).
The pre-trained scene recognition machine learning model can be stored either in the terminal device or in a server, which improves the flexibility of implementing the solution. Specifically, when the model is stored in the terminal device, the terminal device acquires the physical scene and then recognizes the target surface with its local processor. Alternatively, when the model is stored in the server, the terminal device sends the acquired physical scene to the server, and the target surface is recognized by the server's processor.
Illustratively, the target surface determined in the physical scene only needs to be substantially planar. That is, the target surface may include small protrusions, recesses, steps and the like; for example, a solid wall surface or a solid furniture surface with protrusions or recesses smaller than a predetermined size may still be used as the target surface. Referring to fig. 2, if the planar area of the wall surface 21 is greater than the preset threshold, the wall surface 21 may be used as the target surface in the physical scene.
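As an illustration of this selection step, the following Python sketch (with assumed data structures and an assumed threshold value, neither of which is taken from the disclosure) filters the planes reported by the device's plane detection or scene recognition stage and keeps one whose area exceeds the preset threshold as the target surface:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class DetectedPlane:
    """A solid surface reported by the device's plane detection / scene recognition."""
    plane_id: str
    area: float                                  # estimated plane area, in square metres
    boundary: List[Tuple[float, float, float]]   # plane boundary polygon in world coordinates

def select_target_surface(planes: List[DetectedPlane],
                          min_area: float = 2.0) -> Optional[DetectedPlane]:
    """Pick a solid surface whose plane area exceeds the preset threshold.

    `min_area` stands in for the preset threshold of the disclosure; the value
    here is only illustrative.
    """
    candidates = [p for p in planes if p.area > min_area]
    if not candidates:
        return None
    # Prefer the largest qualifying surface, e.g. a wall.
    return max(candidates, key=lambda p: p.area)
```

In practice the plane list would come from the terminal device's AR framework or from the scene recognition model described above, and the threshold would be tuned to the size of the virtual targets used in the game.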
In step S120, an anchor point of the virtual target is determined according to the target surface; the anchor point is used for positioning the virtual target in the entity scene so that the virtual target is at least partially blocked by an entity in the entity scene under the current view angle.
Specifically, fig. 3 shows a flowchart of a method for determining an anchor point in an exemplary embodiment of the present disclosure. Referring to fig. 3, the method of this embodiment includes step S310 and step S320.
In step S310, an endpoint set of an intersection line of the target surface and a base reference surface in the physical scene is acquired.
In an exemplary embodiment, the base reference plane is the plane on which the virtual target is placed. Since a virtual target is generally placed on a horizontal plane in the physical scene, the ground or another plane with a large area in the physical scene may be used as the base reference plane to increase the sense of reality of the game. Referring to fig. 4, taking the ground as the base reference plane in the physical scene, if the boundary line between the target surface 31 and the ground is L, the set of endpoints on this boundary line is obtained as [A, B]. The anchor point for locating the virtual target is then further screened out of the endpoint set [A, B].
With continued reference to fig. 3, in step S320, the adjacent solid surfaces that are in contact with the target surface at the endpoints of the endpoint set are acquired. For example, if the target surface is a solid wall surface X, an adjacent solid surface may be another wall surface Y forming an angle with the wall surface X.
In an exemplary embodiment, the adjacent solid surfaces that are in contact with the target surface at the endpoints of the above-described endpoint set [A, B] are acquired. Referring to fig. 4, the boundary line between one adjacent solid surface and the base reference surface is M, and the boundary line between the other adjacent solid surface and the base reference surface is N.
It should be noted that each of the above endpoints lies on an intersection line (a first intersection line) formed by the target surface and the base reference surface and on an intersection line (a second intersection line) formed by an adjacent solid surface and the base reference surface, and the first and second intersection lines intersect at that endpoint. In fig. 4, the boundary line L formed by the target surface and the base reference surface and the boundary line M formed by the adjacent solid surface and the base reference surface both pass through the endpoint A, and the boundary line L and the boundary line M intersect at the endpoint A.
In step S320, the endpoint set is further filtered to obtain the anchor point according to the positional relationship among the adjacent entity plane, the target plane and the terminal device.
Illustratively, depending on the current position of the player, an endpoint in the endpoint set [A, B] may be a "concave point" (i.e., the player is currently within the smaller included angle formed by the first and second intersection lines corresponding to that endpoint) or a "convex point" (i.e., the player is currently within the larger included angle formed by the first and second intersection lines corresponding to that endpoint). For the player's current position, if a "concave point" is used as the anchor point for positioning the virtual target, the target surface cannot visually occlude the virtual target. Therefore, it is necessary to screen out of the endpoint set those endpoints that are "convex points" with respect to the current position of the player and use them as anchor points.
Further, the judgment can be made through the pre-trained scene recognition machine learning model: whether the player is within the larger angle formed by boundary line L and boundary line M determines whether point A is a "convex point" with respect to the player's current position.
Specifically, referring to fig. 4, the boundary line L forms two angles with the boundary line M: a larger included angle β and a smaller included angle α. The pre-trained scene recognition machine learning model judges whether the player is within the range of the included angle β. If the player is within the larger included angle β, the intersection point A of the boundary line L and the boundary line M is a "convex point" for the player; if the player is within the smaller included angle α, the intersection point A is a "concave point" for the player. In this embodiment, the intersection point A of the boundary line L and the boundary line M is a "convex point" for the player 40 and is therefore an anchor point suitable for positioning the virtual target.
With continued reference to FIG. 4, the boundary line L forms two angles with the boundary line N: a larger included angle θ and a smaller included angle ζ. Based on the pre-trained scene recognition machine learning model described above, the player 40 is determined to be within the smaller included angle ζ. This shows that the intersection point B of the boundary line L and the boundary line N is a "concave point" for the player 40 and is not suitable as an anchor point for locating the virtual target.
Therefore, this technical solution uses the pre-trained scene recognition machine learning model, combined with the current position of the player, to screen the anchor point A for positioning the virtual target out of the endpoint set [A, B]. The current position of the player can be determined from the relative positional relationship between the camera component of the terminal device and the target surface; in other words, the player's position is taken into account during the construction of the target scene, which improves the player's immersion and sense of involvement in the game.
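The convex/concave test described above can be sketched in a few lines of Python. This is only an illustrative geometric check under assumed 2D coordinates on the ground plane (all function and variable names are hypothetical; the disclosure itself performs the judgment with the scene recognition model):

```python
from typing import Dict, List, Tuple

Vec2 = Tuple[float, float]

def _sub(a: Vec2, b: Vec2) -> Vec2:
    return (a[0] - b[0], a[1] - b[1])

def _cross(a: Vec2, b: Vec2) -> float:
    return a[0] * b[1] - a[1] * b[0]

def is_convex_for_player(anchor: Vec2,
                         point_on_l: Vec2,   # a point on boundary line L, e.g. endpoint B
                         point_on_m: Vec2,   # a point on the adjacent boundary line M
                         player: Vec2) -> bool:
    """True if `anchor` is a "convex point" for the player, i.e. the player stands
    in the larger angle formed at the anchor by boundary lines L and M."""
    u = _sub(point_on_l, anchor)   # direction of L away from the anchor
    v = _sub(point_on_m, anchor)   # direction of M away from the anchor
    w = _sub(player, anchor)       # direction toward the player
    sign = _cross(u, v)
    # The player lies inside the smaller wedge spanned by u and v exactly
    # when w turns the same way relative to both of them.
    inside_smaller = _cross(u, w) * sign > 0 and _cross(w, v) * sign > 0
    return not inside_smaller

def screen_anchors(candidates: Dict[str, Tuple[Vec2, Vec2, Vec2]],
                   player: Vec2) -> List[str]:
    """Keep only the endpoints that are convex points for the current player position.

    `candidates` maps an endpoint name to (endpoint, point_on_L, point_on_M)."""
    return [name for name, (anchor, on_l, on_m) in candidates.items()
            if is_convex_for_player(anchor, on_l, on_m, player)]
```

For the situation of fig. 4, calling `is_convex_for_player` for endpoint A with a point along L, a point along M and the player position would return True, while the same call for endpoint B with a point along N would return False, matching the screening result described above.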
With continued reference to fig. 1, in step S130 the virtual target is rendered in the physical scene based on the relative positional relationship between the terminal device and the anchor point. By way of example, fig. 5 shows a flow diagram of a rendering method in an exemplary embodiment according to the present disclosure. Referring to fig. 5, the embodiment shown in this figure includes step S510 and step S520.
In step S510, the virtual target is created in a target direction of the anchor point, wherein the target direction is a direction away from the terminal device.
In an exemplary embodiment, referring to FIG. 6a, the boundary inflection point A of the target surface serves as the anchor point for locating the virtual target. Further, in step S510, the target direction of the anchor point refers to a direction away from the current position of the player 40 (which may be determined from the position of the terminal device). This direction may be determined, for example, as follows: the line P between the anchor point A and the current position of the player 40 is determined, and then a straight line Q perpendicular to the line P and passing through the anchor point A is determined; the side of the straight line Q that does not contain the player 40 (i.e., the angular range ε) is the target direction. Thus, the virtual target 23 may be created around the anchor point A, within the angular range ε.
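A minimal Python sketch of this placement step, assuming 2D ground-plane coordinates and an illustrative offset distance (neither of which is specified in the disclosure), could look like this:

```python
import math
from typing import Tuple

Vec2 = Tuple[float, float]

def place_virtual_target(anchor: Vec2, player: Vec2, offset: float = 0.5) -> Vec2:
    """Place the virtual target a small distance beyond the anchor point, on the side
    of the perpendicular line Q that does not contain the player.

    `offset` (metres) is an illustrative value, not taken from the disclosure.
    """
    ax, ay = anchor
    px, py = player
    dx, dy = ax - px, ay - py            # direction of line P, from the player to the anchor
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        raise ValueError("player and anchor coincide")
    ux, uy = dx / dist, dy / dist
    # Any point anchor + t * (ux, uy) with t > 0 lies beyond the perpendicular
    # line Q through the anchor, i.e. on the side away from the player.
    return (ax + offset * ux, ay + offset * uy)
```

Any point returned this way lies past the perpendicular line Q on the side away from the player, so the target surface sits between the player and the virtual target.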
To illustrate the creation location of the virtual target 23 more clearly, the perspective view shown in fig. 6b shows that, after the virtual target 23 is created according to the above solution, the wall acts as a barrier hiding the virtual target 23 when viewed from the player's current location. By contrast, in the prior art shown in FIG. 2, the virtual target model 22 is created directly within the player's visible range. This solution therefore achieves the effect of visually hiding the virtual target from the player, which increases the interest of the game.
In step S520, the target surface is rendered with a first priority, and the virtual target is rendered with a second priority lower than the first priority, so that the virtual target is at least partially blocked by the entity in the entity scene under the current view angle after rendering.
In an exemplary embodiment, in order to avoid unrealistic effects such as intersection and wall penetration between different objects in the game scene, the technical solution renders the virtual target and the target surface in a layered manner. Illustratively, to visually hide at least a portion of the virtual target 23 behind the wall from the player, the target surface is rendered with a first priority and the virtual target is rendered with a second priority lower than the first priority. That is, when the picture is rendered, the target surface closer to the terminal device is rendered on the top layer of the display interface, and the virtual target farther from the terminal device is rendered on the layer below it, producing the visual effect that the distant virtual target is blocked by the nearby target surface (wall surface).
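The priority scheme can be sketched as a simple two-layer draw order in Python (the class and the numeric priority convention are illustrative assumptions; an actual implementation would use the rendering layers or depth test of the game engine):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class RenderItem:
    priority: int                 # 1 = first (highest) priority, drawn on top
    draw: Callable[[], None]

def render_frame(items: List[RenderItem]) -> None:
    # Draw low-priority (distant) items first, then overdraw them with the
    # high-priority (near) items, so the near target surface hides part of the
    # far virtual target.
    for item in sorted(items, key=lambda i: i.priority, reverse=True):
        item.draw()

# Illustrative usage: the wall gets the first priority, the virtual target the second.
render_frame([
    RenderItem(priority=2, draw=lambda: print("render virtual target")),
    RenderItem(priority=1, draw=lambda: print("render target surface (wall)")),
])
```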
In an exemplary embodiment, to give the player an immersive experience, the occluded area of the virtual target keeps changing as the player approaches or moves away from the virtual target. Therefore, the angle value between the terminal device, the anchor point and the target surface (referring to fig. 6a, the included angle σ at the anchor point A between the connecting line P and the straight line L) can be acquired multiple times, and the rendering area of the virtual target is adjusted as the angle value changes. Specifically, in response to the angle value increasing, the rendering area of the virtual target is increased; in response to the angle value decreasing, the rendering area of the virtual target is decreased. Thus, the occluded area of the virtual target becomes smaller and smaller as the player approaches it, and larger and larger as the player moves away from it.
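As a sketch of this adjustment (in Python, with an assumed linear gain and an assumed "visible fraction" representation of the rendering area, neither of which is prescribed by the disclosure):

```python
import math
from typing import Tuple

Vec2 = Tuple[float, float]

def sample_angle(player: Vec2, anchor: Vec2, point_on_l: Vec2) -> float:
    """Angle sigma, in degrees, at the anchor between line P (anchor to player)
    and the boundary line L of the target surface."""
    v_p = (player[0] - anchor[0], player[1] - anchor[1])
    v_l = (point_on_l[0] - anchor[0], point_on_l[1] - anchor[1])
    dot = v_p[0] * v_l[0] + v_p[1] * v_l[1]
    norm = math.hypot(*v_p) * math.hypot(*v_l)
    cos_sigma = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(cos_sigma))

def update_rendered_fraction(prev_fraction: float,
                             prev_angle: float,
                             new_angle: float,
                             gain: float = 0.01) -> float:
    """Grow the visible fraction of the virtual target when sigma increases and
    shrink it when sigma decreases. `gain` is an illustrative tuning constant."""
    fraction = prev_fraction + gain * (new_angle - prev_angle)
    return max(0.0, min(1.0, fraction))
```

Repeatedly sampling σ and feeding consecutive values into `update_rendered_fraction` reproduces the behaviour described above: a growing σ enlarges the rendered area, a shrinking σ reduces it.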
With continued reference to fig. 5, in step S530, the adjacent entity surface that is in contact with the target surface at the anchor point is acquired, and the position information of this adjacent entity surface in the physical scene is acquired to obtain the environment information of the virtual target. In step S540, the placement angle of the virtual target is adjusted according to the environment information so that the virtual target is not cut through by an entity in the real scene.
In the exemplary embodiment, the specific implementation of obtaining the adjacent solid surface having the contact relationship with the target surface is the same as the specific implementation of step S320 described above, and will not be described herein.
In this embodiment, because the position information of entities located in the space behind the target surface is uncertain, the solution acquires more information about that space through the camera as the player moves forward and performs scene recognition on it, so that the virtual target is fine-tuned about the horizontal angle at the anchor point according to the recognition result, improving how well the game scene fits reality. For example, referring to fig. 7, the adjacent entity surface M' of the target surface (boundary line L) at the anchor point A is determined, and the angle between the adjacent entity surface M' and the target surface is acquired, thereby obtaining the environment information of the virtual target 23. Further, the placement angle of the virtual target 23 is adjusted according to this environment information, which avoids unrealistic effects such as the virtual target clipping through (penetrating) entities in the real scene and thus improves the player's sense of involvement in the game.
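A small Python sketch of this fine-tuning step, under the assumption that placement is described by a horizontal angle measured from boundary line L at the anchor point and that the virtual target's angular footprint is known (all names and values here are illustrative):

```python
def adjust_placement_yaw(target_dir_deg: float,
                         wall_angle_deg: float,
                         half_width_deg: float = 15.0) -> float:
    """Fine-tune the virtual target's horizontal placement angle about the anchor.

    target_dir_deg: the initially chosen direction, measured from boundary line L.
    wall_angle_deg: the measured angle between L and the adjacent surface M'.
    half_width_deg: an assumed angular half-footprint of the virtual target.
    """
    # Keep the whole footprint of the target inside the free wedge between the
    # target surface and the adjacent surface, so it does not clip through the wall.
    lo = half_width_deg
    hi = wall_angle_deg - half_width_deg
    if lo > hi:
        # The wedge is too narrow for the target; fall back to its bisector.
        return wall_angle_deg / 2.0
    return max(lo, min(hi, target_dir_deg))
```

Clamping the placement angle into the wedge between the target surface and the adjacent surface M' keeps the whole virtual target in free space, which is one simple way to realise the "no clipping through walls" requirement described above.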
Those skilled in the art will appreciate that all or part of the steps implementing the above embodiments are implemented as a computer program executed by a processor (including a CPU and GPU). The above-described functions defined by the above-described methods provided by the present disclosure are performed when the computer program is executed by a CPU. The program may be stored in a computer readable storage medium, which may be a read-only memory, a magnetic disk or an optical disk, etc.
Furthermore, it should be noted that the above-described figures are merely illustrative of the processes involved in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
The following describes embodiments of a display device in augmented reality of the present disclosure that may be used to perform the display method in augmented reality described above in the present disclosure.
Fig. 8 shows a schematic structural diagram of a display device in augmented reality according to an embodiment of the present disclosure, and referring to fig. 8, a display device 800 in augmented reality provided in the present embodiment includes: a target surface acquisition module 801, an anchor point determination module 802, and a rendering module 803. Wherein:
The target surface obtaining module 801 is configured to obtain an entity scene of the terminal device under the current view angle, and determine a target surface meeting a preset condition in the entity scene;
the anchor point determining module 802 is configured to determine an anchor point of a virtual target according to the target surface, where the anchor point is used to locate the virtual target in the physical scene, so that the virtual target is at least partially blocked by an entity in the physical scene under the current view; and
the rendering module 803 is configured to render the virtual target in the physical scene based on a relative positional relationship between the terminal device and the anchor point.
In some embodiments of the present disclosure, based on the foregoing solution, the target surface obtaining module 801 includes: an acquisition unit and a determination unit. Wherein:
the acquisition unit is configured to: acquiring an entity under a current view angle according to a camera component of the terminal equipment to obtain an entity scene under the current view angle;
the above-mentioned determination unit is configured to: and based on a pre-trained scene recognition machine learning model, acquiring a solid surface with a plane area larger than a preset threshold value from the solid scene as the target surface.
In some embodiments of the present disclosure, based on the foregoing scheme, the anchor point determining module 802 is specifically configured to:
acquiring an endpoint set of a boundary line between the target surface and a basic reference surface in the entity scene; and acquiring an adjacent entity surface with a contact relation with the target surface at the endpoint concentration endpoint, and screening the endpoint set according to the position relation among the adjacent entity surface, the target surface and the terminal equipment to obtain the anchor point.
In some embodiments of the present disclosure, based on the foregoing scheme, the rendering module 803 includes: a creation unit and a rendering unit. Wherein:
the creation unit is configured to: create the virtual target in the target direction of the anchor point, wherein the target direction is a direction away from the terminal device; and
the above-described rendering unit is configured to: and rendering the target surface by adopting a first priority, and rendering the virtual target by adopting a second priority lower than the first priority, so that the virtual target is at least partially blocked by the entity in the entity scene under the current view angle after rendering.
In some embodiments of the present disclosure, based on the foregoing solution, the display device 800 in augmented reality further includes: and the angle adjusting module. Wherein:
the angle adjustment module is configured to, after the rendering unit renders the virtual target in the physical scene based on the relative positional relationship between the terminal device and the anchor point: acquire an adjacent entity surface that is in contact with the target surface at the anchor point, and acquire position information of the adjacent entity surface in the entity scene to obtain environment information of the virtual target; and adjust the placement angle of the virtual target according to the environment information so that the virtual target is not cut through by an entity in the real scene.
In some embodiments of the present disclosure, based on the foregoing scheme, the rendering module 803 includes: an angle value acquisition unit and a rendering area adjustment unit. Wherein:
the above-mentioned angle value acquisition unit is configured to: acquire the angle values among the terminal device, the anchor point and the target surface multiple times; and
the rendering area adjustment unit is configured to: adjust the rendering area of the virtual target as the angle value changes.
In some embodiments of the present disclosure, based on the foregoing aspect, the rendering area adjustment unit is specifically configured to:
in response to the angle value increasing, increase the rendering area of the virtual target; or, in response to the angle value decreasing, decrease the rendering area of the virtual target.
Since each functional module of the display device in augmented reality according to the exemplary embodiment of the present disclosure corresponds to a step of the exemplary embodiment of the display method in augmented reality described above, for details not disclosed in the embodiment of the device in the present disclosure, please refer to the embodiment of the display method in augmented reality described above in the present disclosure.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Furthermore, although the steps of the methods in the present disclosure are depicted in a particular order in the drawings, this does not require or imply that the steps must be performed in that particular order or that all illustrated steps be performed in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a mobile terminal, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer storage medium capable of implementing the above method is also provided. On which a program product is stored which enables the implementation of the method described above in the present specification. In some possible embodiments, the various aspects of the present disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the disclosure as described in the "exemplary methods" section of this specification, when the program product is run on the terminal device.
Referring to fig. 9, a program product 900 for implementing the above-described method according to an embodiment of the present disclosure is described, which may employ a portable compact disc read-only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product described above may take the form of any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
In addition, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.) or an embodiment combining hardware and software aspects may be referred to herein as a "circuit," module "or" system.
An electronic device 1000 according to such an embodiment of the present disclosure is described below with reference to fig. 10. The electronic device 1000 shown in fig. 10 is merely an example and should not be construed as limiting the functionality and scope of use of the disclosed embodiments.
As shown in fig. 10, the electronic device 1000 is embodied in the form of a general purpose computing device. Components of electronic device 1000 may include, but are not limited to: the at least one processing unit 1010, the at least one memory unit 1020, and a bus 1030 that connects the various system components, including the memory unit 1020 and the processing unit 1010.
Wherein the storage unit stores program code that is executable by the processing unit 1010 such that the processing unit 1010 performs steps according to various exemplary embodiments of the present disclosure described in the section "exemplary methods" of the present specification. For example, the processing unit 1010 described above may perform the operations as shown in fig. 1: step S110, acquiring an entity scene of the terminal equipment under the current view angle, and determining a target surface meeting preset conditions in the entity scene; step S120, determining an anchor point of a virtual target according to the target surface, wherein the anchor point is used for positioning the virtual target in the entity scene so that the virtual target is at least partially blocked by an entity in the entity scene under the current view angle; and step S130, rendering the virtual target in the physical scene based on the relative positional relationship between the terminal device and the anchor point.
For example, the processing unit 1010 may also perform the display method in augmented reality as shown in any one of fig. 1 to 5.
The memory unit 1020 may include readable media in the form of volatile memory units such as Random Access Memory (RAM) 10201 and/or cache memory unit 10202, and may further include Read Only Memory (ROM) 10203.
The storage unit 1020 may also include a program/utility 10204 having a set (at least one) of program modules 10205, such program modules 10205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 1030 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 1000 can also communicate with one or more external devices 1100 (e.g., keyboard, pointing device, Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1000, and/or with any device (e.g., router, modem, etc.) that enables the electronic device 1000 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 1050. Also, electronic device 1000 can communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through the network adapter 1060. As shown, the network adapter 1060 communicates with other modules of the electronic device 1000 over the bus 1030. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with the electronic device 1000, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
Furthermore, the above-described figures are only schematic illustrations of processes included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any adaptations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (9)

1. A method of display in augmented reality, the method comprising:
acquiring an entity scene of the terminal equipment under the current view angle, and determining a target surface meeting preset conditions in the entity scene, wherein the target surface meeting the preset conditions is an entity surface with a plane area larger than a preset threshold value in the entity scene;
determining an anchor point of a virtual target according to the target surface, wherein the anchor point is used for positioning the virtual target in the entity scene so that the virtual target is at least partially blocked by an entity in the entity scene under the current view angle;
and adjusting the rendering area of the virtual target based on the change of the angle values among the terminal equipment, the anchor point and the target surface acquired multiple times, and rendering the virtual target in the physical scene based on the rendering area.
2. The method of claim 1, wherein:
the obtaining the entity scene of the terminal equipment under the current view angle comprises the following steps:
acquiring an entity under a current view angle according to a camera component of the terminal equipment, and obtaining an entity scene under the current view angle;
the determining the target surface meeting the preset condition in the entity scene comprises the following steps:
And based on a pre-trained scene recognition machine learning model, acquiring a solid surface with a plane area larger than a preset threshold value from the solid scene as the target surface.
3. The method of claim 1, wherein the determining the anchor point of the virtual target according to the target surface comprises:
acquiring an endpoint set of a boundary line between the target surface and a base reference surface in the entity scene; and
acquiring adjacent entity surfaces having a contact relation with the target surface at the endpoints in the endpoint set, and screening the endpoint set according to the positional relation among the adjacent entity surfaces, the target surface and the terminal device to obtain the anchor point.
4. The method according to any one of claims 1 to 3, wherein the adjusting the rendering area of the virtual target based on the change in angle values among the terminal device, the anchor point and the target surface acquired multiple times, and rendering the virtual target in the entity scene based on the rendering area, comprises:
creating the virtual target in a target direction of the anchor point, wherein the target direction is a direction away from the terminal device; and
rendering the target surface with a first priority, and rendering the virtual target with a second priority lower than the first priority, so that, after rendering, the virtual target is at least partially occluded by the entity in the entity scene under the current view angle.
5. The method according to claim 1 or 2, characterized in that, after the adjusting the rendering area of the virtual target based on the change in angle values among the terminal device, the anchor point and the target surface acquired multiple times, and rendering the virtual target in the entity scene based on the rendering area, the method further comprises:
acquiring an adjacent entity surface having a contact relation with the target surface at the anchor point, and acquiring position information of the adjacent entity surface in the entity scene to obtain environment information of the virtual target; and
adjusting a placement angle of the virtual target according to the environment information so that the virtual target is not split apart by an entity in the entity scene.
6. The method of claim 1, wherein the adjusting the rendering area of the virtual target based on the change in angle values among the terminal device, the anchor point and the target surface acquired multiple times comprises:
increasing the rendering area of the virtual target in response to the angle value increasing; or
decreasing the rendering area of the virtual target in response to the angle value decreasing.
7. An augmented reality display device, the device comprising:
a target surface acquisition module configured to acquire an entity scene of a terminal device under a current view angle, and determine a target surface meeting a preset condition in the entity scene, wherein the target surface meeting the preset condition is an entity surface in the entity scene whose plane area is larger than a preset threshold value;
an anchor point determination module configured to determine an anchor point of a virtual target according to the target surface, the anchor point being used for positioning the virtual target in the entity scene so that the virtual target is at least partially occluded by an entity in the entity scene under the current view angle; and
a rendering module configured to adjust a rendering area of the virtual target based on a change in angle values among the terminal device, the anchor point and the target surface acquired multiple times, and render the virtual target in the entity scene based on the rendering area.
8. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the display method in augmented reality according to any one of claims 1 to 6.
9. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the display method in augmented reality according to any one of claims 1 to 6.
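
The following is an illustrative, non-limiting sketch (not part of the claims) of how the target-surface selection of claims 1-2 and the anchor-point screening of claim 3 could be realised. All identifiers (Surface, find_target_surface, pick_anchor) and the example threshold are assumptions of this note, and the list of detected planes is assumed to come from an AR framework's plane detection.

```python
# Hypothetical data model for detected real-world planes; none of these names
# come from the patent itself.
from dataclasses import dataclass
from typing import List, Optional, Tuple
import math

@dataclass
class Surface:
    """A detected entity (real-world) surface, approximated by a planar polygon."""
    polygon: List[Tuple[float, float]]   # (x, z) vertices in the surface plane, metres
    normal: Tuple[float, float, float]   # unit normal of the plane

def polygon_area(polygon: List[Tuple[float, float]]) -> float:
    """Shoelace formula for the area of a planar polygon."""
    area = 0.0
    n = len(polygon)
    for i in range(n):
        x1, z1 = polygon[i]
        x2, z2 = polygon[(i + 1) % n]
        area += x1 * z2 - x2 * z1
    return abs(area) / 2.0

def find_target_surface(surfaces: List[Surface],
                        area_threshold: float = 0.5) -> Optional[Surface]:
    """Claims 1-2: keep only entity surfaces whose plane area exceeds the preset
    threshold and take the largest as the target surface (the 0.5 m^2 value is
    an arbitrary example, not from the patent)."""
    candidates = [s for s in surfaces if polygon_area(s.polygon) > area_threshold]
    return max(candidates, key=lambda s: polygon_area(s.polygon)) if candidates else None

def pick_anchor(boundary_endpoints: List[Tuple[float, float, float]],
                device_position: Tuple[float, float, float]) -> Tuple[float, float, float]:
    """Claim 3 (simplified): from the endpoints of the boundary line between the
    target surface and the base reference surface, pick the one farthest from the
    terminal device, so that a virtual target created beyond it is occluded by
    the target surface at the current view angle."""
    return max(boundary_endpoints, key=lambda p: math.dist(p, device_position))
```

In this sketch the chosen anchor would then be handed to the renderer together with a virtual target created in the direction away from the device, as recited in claim 4.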
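
Similarly, a minimal sketch of the angle-driven rendering-area adjustment of claim 6 and the priority-based occlusion rendering of claim 4. The renderer.draw hook and the linear gain are hypothetical placeholders introduced here for illustration, not parts of the patent.

```python
import math

def angle_at_anchor(device_pos, anchor, surface_point):
    """Angle (degrees) at the anchor point between the terminal device and a
    reference point on the target surface; re-evaluated as the device moves, so
    the change between successive acquisitions can drive claim 6."""
    v1 = [d - a for d, a in zip(device_pos, anchor)]
    v2 = [s - a for s, a in zip(surface_point, anchor)]
    dot = sum(x * y for x, y in zip(v1, v2))
    norm = math.sqrt(sum(x * x for x in v1)) * math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def adjust_rendering_fraction(previous_fraction, previous_angle, current_angle, gain=0.01):
    """Claim 6: enlarge the rendered fraction of the virtual target when the angle
    value increases and shrink it when the angle value decreases. The linear gain
    is an illustrative choice only."""
    delta = current_angle - previous_angle
    return max(0.0, min(1.0, previous_fraction + gain * delta))

def render_frame(renderer, target_surface, virtual_target, fraction):
    """Claim 4: draw the target surface with a higher priority than the virtual
    target, so the entity occludes the virtual target at the current view angle;
    `renderer.draw` stands in for whatever engine performs the actual drawing."""
    renderer.draw(target_surface, priority=1)
    renderer.draw(virtual_target, priority=2, visible_fraction=fraction)
```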
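
Finally, a sketch of the environment-based placement-angle adjustment of claim 5, under the assumption that the adjacent entity surfaces are described by their unit normals; averaging the normals and yawing the virtual target towards them is an illustrative heuristic, not the method prescribed by the patent.

```python
import math

def adjust_placement_angle(adjacent_surface_normals, current_rotation):
    """Claim 5 (simplified): treat the normals of the adjacent entity surfaces that
    touch the target surface at the anchor point as 'environment information', and
    yaw the virtual target towards their average direction so it is not split by
    real geometry."""
    avg = [sum(components) / len(adjacent_surface_normals)
           for components in zip(*adjacent_surface_normals)]
    yaw = math.degrees(math.atan2(avg[0], avg[2]))  # rotation about the vertical axis
    return {"yaw": yaw, "pitch": current_rotation.get("pitch", 0.0)}
```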
CN202010239234.7A 2020-03-30 2020-03-30 Display method and device in augmented reality, medium and electronic equipment Active CN111462339B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010239234.7A CN111462339B (en) 2020-03-30 2020-03-30 Display method and device in augmented reality, medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111462339A CN111462339A (en) 2020-07-28
CN111462339B (en) 2023-08-08

Family

ID=71681733

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010239234.7A Active CN111462339B (en) 2020-03-30 2020-03-30 Display method and device in augmented reality, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111462339B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111921203A (en) * 2020-08-21 2020-11-13 腾讯科技(深圳)有限公司 Interactive processing method and device in virtual scene, electronic equipment and storage medium
CN111930240B (en) * 2020-09-17 2021-02-09 平安国际智慧城市科技股份有限公司 Motion video acquisition method and device based on AR interaction, electronic equipment and medium
CN112819969A (en) * 2021-02-08 2021-05-18 广东三维家信息科技有限公司 Virtual scene path generation method and device, electronic equipment and storage medium
CN114596348B (en) * 2021-12-08 2023-09-01 北京蓝亚盒子科技有限公司 Screen space-based ambient occlusion calculating method, device, operator and readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830940A (en) * 2018-06-19 2018-11-16 广东虚拟现实科技有限公司 Hiding relation processing method, device, terminal device and storage medium
CN109471521A (en) * 2018-09-05 2019-03-15 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Virtual and real shielding interaction method and system in AR environment
CN110221690A (en) * 2019-05-13 2019-09-10 Oppo广东移动通信有限公司 Gesture interaction method and device, storage medium, communication terminal based on AR scene

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10592066B2 (en) * 2017-03-15 2020-03-17 Facebook, Inc. Visual editor for designing augmented-reality effects and configuring rendering parameters

Also Published As

Publication number Publication date
CN111462339A (en) 2020-07-28

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant