CN113304471A - Virtual object display method, device and equipment - Google Patents

Virtual object display method, device and equipment

Info

Publication number: CN113304471A
Application number: CN202110565573.9A
Authority: CN (China)
Prior art keywords: virtual camera, virtual object, virtual, motion, authority
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN113304471B (en)
Inventors: 陈瑽, 裴萌, 李嘉乐, 房本旭, 张峰, 田吉亮, 庄涛, 徐丹
Current Assignee: Beijing Perfect Chijin Technology Co Ltd
Original Assignee: Beijing Perfect Chijin Technology Co Ltd
Application filed by Beijing Perfect Chijin Technology Co Ltd
Priority application: CN202110565573.9A
Publications: CN113304471A (application); CN113304471B (grant)

Classifications

    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/52Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the invention provide a method, a device, and equipment for displaying a virtual object. The method comprises: determining relative position information of a virtual camera and the virtual object; determining a motion authority of the virtual camera based on the relative position information; and, in response to a touch instruction on the graphical interface, controlling the virtual camera to move within a set three-dimensional space according to the touch instruction and the motion authority, so as to adjust the display image of the virtual object. Because the virtual camera is controlled to move within the set three-dimensional space according to the touch instruction and the motion authority, the virtual object can be displayed from multiple angles, giving the user a richer range of viewing angles, while the set three-dimensional space and the motion authority prevent the virtual object from being displayed abnormally at certain viewing angles, improving the user experience.

Description

Virtual object display method, device and equipment
Technical Field
The invention relates to the field of image technology, and in particular to a method, a device, and equipment for displaying a virtual object.
Background
Virtual objects are the various scene resources in a virtual scene, such as virtual characters or props. Taking a virtual character in a game as an example, a player selects the virtual character and participates in the game by controlling its behavior. The virtual character is therefore an important carrier through which the player participates in the game and an important tool with which the game producer attracts players.
After entering the game, the player can view the appearance and character settings of each virtual character in a virtual character display interface. Currently, in such an interface, the player typically controls the virtual character to rotate about a single axis, or zooms in and out on it, by touching the screen, in order to view the character's appearance in detail. However, because only a single viewing angle is provided to the player, the appearance details that can be displayed are limited, and it is difficult to satisfy the player's desire to inspect the virtual character.
How to display a virtual object so as to improve the experience of observing it has therefore become an urgent technical problem.
Disclosure of Invention
Embodiments of the invention provide a method, a device, and equipment for displaying a virtual object, which display the virtual object from multiple angles within a set three-dimensional space and improve the user's viewing experience.
In a first aspect, an embodiment of the present invention provides a method for displaying a virtual object. The method is applied to a graphical interface; the graphical interface is loaded with the virtual object and a virtual camera, and the virtual camera is used to capture a display image of the virtual object. The method includes:
determining relative position information of the virtual camera and the virtual object;
determining a motion authority of the virtual camera based on the relative position information; and
in response to a touch instruction on the graphical interface, controlling the virtual camera to move within a set three-dimensional space according to the touch instruction and the motion authority, so as to adjust the display image of the virtual object.
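Purely as an illustration of how the three operations of the first aspect compose, here is a minimal Python sketch. It is not part of the patent disclosure: the names MotionAuthority, determine_motion_authority, move_camera, and display_virtual_object are hypothetical placeholders introduced here for the three steps.

```python
from dataclasses import dataclass

@dataclass
class MotionAuthority:
    first_direction: bool = False   # e.g. longitudinal (vertical-axis) motion
    second_direction: bool = False  # e.g. lateral (orbital) motion
    zoom: bool = False              # motion toward / away from the object

def determine_motion_authority(dist: float) -> MotionAuthority:
    # Placeholder for the distance-to-authority rules of the possible
    # designs below; a fuller sketch appears in the detailed description.
    return MotionAuthority(first_direction=True, second_direction=True)

def move_camera(camera_pos, touch_trajectory, authority: MotionAuthority):
    # Placeholder: move the camera inside the set 3D space only in ways
    # the authority permits, which adjusts the display image.
    return camera_pos

def display_virtual_object(camera_pos, object_pos, touch_trajectory):
    # Step 1: relative position information (here reduced to a distance).
    dist = sum((a - b) ** 2 for a, b in zip(camera_pos, object_pos)) ** 0.5
    # Step 2: motion authority of the virtual camera.
    authority = determine_motion_authority(dist)
    # Step 3: respond to the touch instruction within the granted authority.
    return move_camera(camera_pos, touch_trajectory, authority)
```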
In one possible design, the relative position information includes the distance from the virtual camera to the virtual object.
Determining the motion authority of the virtual camera based on the relative position information includes: if the distance from the virtual camera to the virtual object is less than a first threshold, determining that the motion authority of the virtual camera includes a first-direction motion authority and a second-direction motion authority, where the first direction and the second direction are preset according to the posture of the virtual object in three-dimensional space.
In one possible design, the set three-dimensional space includes a first curved surface boundary surrounding the virtual object, where the distance from the first curved surface boundary to the virtual object is less than or equal to the first threshold.
Controlling the virtual camera, in response to a touch instruction on the graphical interface, to move within the set three-dimensional space according to the touch instruction and the motion authority includes: if a touch instruction is detected, acquiring the movement trajectory of the touch instruction; determining a first motion path of the virtual camera on the first curved surface boundary according to the movement trajectory of the touch instruction and the motion authority; and controlling the virtual camera to move along the first motion path.
In one possible design, the virtual object includes a plurality of first reference points, and the first curved surface boundary includes a plurality of first circular tracks and a plurality of first curves passing through the first circular tracks, each first circular track being centered on a corresponding one of the first reference points.
In one possible design, determining the first motion path of the virtual camera on the first curved surface boundary according to the movement trajectory of the touch instruction and the motion authority includes: if the direction of the movement trajectory of the touch instruction is the first direction and the motion authority includes the first-direction motion authority, selecting, from the plurality of first curves, the first curve passing through the position of the virtual camera as the first motion path, where the starting point of the first motion path is the position of the virtual camera.
In one possible design, determining the first motion path of the virtual camera on the first curved surface boundary according to the movement trajectory of the touch instruction and the motion authority includes: if the direction of the movement trajectory of the touch instruction is the second direction and the motion authority includes the second-direction motion authority, taking the first circular track i on which the virtual camera is located, among the plurality of first circular tracks, as the first motion path, where the second direction is parallel to the plane of the first circular track i.
In one possible design, the relative position information includes the distance from the virtual camera to the virtual object.
Determining the motion authority of the virtual camera based on the relative position information includes: if the distance from the virtual camera to the virtual object is greater than the first threshold and less than a second threshold, determining that the motion authority of the virtual camera includes a first-direction motion authority, a second-direction motion authority, and a zoom motion authority. The second threshold is greater than the first threshold, the first direction and the second direction are preset according to the posture of the virtual object in three-dimensional space, and the direction of the zoom motion is perpendicular to the first direction and the second direction.
In one possible design, the set three-dimensional space includes a first curved surface boundary and a second curved surface boundary surrounding the virtual object, and the three-dimensional space region between the second curved surface boundary and the first curved surface boundary. The distance from the first curved surface boundary to the virtual object is less than or equal to the first threshold; the distance from the second curved surface boundary to the virtual object is greater than the first threshold and less than or equal to the second threshold, the second threshold being greater than the first threshold.
Controlling the virtual camera, in response to a touch instruction on the graphical interface, to move within the set three-dimensional space according to the touch instruction and the motion authority includes: if a touch instruction is detected, acquiring the movement trajectory of the touch instruction; determining a second motion path of the virtual camera according to the movement trajectory of the touch instruction and the motion authority, where the second motion path lies on the second curved surface boundary or within the three-dimensional space region; and controlling the virtual camera to move along the second motion path.
In one possible design, the virtual object includes a plurality of second reference points, and the second curved surface boundary includes a plurality of second circular tracks and a plurality of second curves passing through the second circular tracks, each second circular track being centered on a corresponding one of the second reference points.
In one possible design, determining the second motion path of the virtual camera according to the movement trajectory of the touch instruction and the motion authority includes: if the direction of the movement trajectory of the touch instruction is the first direction and the motion authority includes the first-direction motion authority, selecting, from the plurality of second curves, the second curve passing through the position of the virtual camera as the second motion path, where the starting point of the second motion path is the position of the virtual camera.
In one possible design, determining the second motion path of the virtual camera according to the movement trajectory of the touch instruction and the motion authority includes: if the direction of the movement trajectory of the touch instruction is the second direction and the motion authority includes the second-direction motion authority, taking the second circular track i on which the virtual camera is located, among the plurality of second circular tracks, as the second motion path, where the second direction is parallel to the plane of the second circular track i.
In one possible design, determining the second motion path of the virtual camera according to the movement trajectory of the touch instruction and the motion authority includes: if the movement trajectory of the touch instruction includes multiple directions and the motion authority includes the zoom motion authority, determining the second motion path of the virtual camera within the three-dimensional space region based on the position of the virtual camera and the multiple directions of the movement trajectory.
In one possible design, the relative position information includes the distance from the virtual camera to the virtual object.
Determining the motion authority of the virtual camera based on the relative position information includes: if the distance from the virtual camera to the virtual object is equal to the first threshold, determining that the motion authority of the virtual camera includes a first motion authority within a first range bucket, where the set three-dimensional space includes the first range bucket surrounding the virtual object, and the distance from the first range bucket to the virtual object is the first threshold.
In one possible design, the virtual object includes a plurality of first viewpoints in one-to-one correspondence with a plurality of first axis points on the first range bucket, and the first range bucket is composed of a plurality of first circular tracks obtained by interpolation over the plurality of first axis points. Controlling the virtual camera, in response to a touch instruction on the graphical interface, to move within the set three-dimensional space according to the touch instruction and the motion authority includes: if a touch instruction is detected, acquiring the movement trajectory of the touch instruction; and controlling the virtual camera to move within the first range bucket according to the movement trajectory of the touch instruction and the first motion authority. The optical axis of the virtual camera points to the first viewpoint corresponding to the first axis point at which the virtual camera is located.
In one possible design, the relative position information includes the distance from the virtual camera to the virtual object. Determining the motion authority of the virtual camera based on the relative position information includes: if the distance from the virtual camera to the virtual object is greater than the first threshold and less than a second threshold, determining that the motion authority of the virtual camera includes a second motion authority within a second range bucket and a third motion authority between the second range bucket and the first range bucket. The second threshold is greater than the first threshold; the set three-dimensional space includes the second range bucket and the first range bucket surrounding the virtual object, the distance from the second range bucket to the virtual object being the second threshold and the distance from the first range bucket to the virtual object being the first threshold.
In one possible design, the virtual object includes a plurality of second viewpoints in one-to-one correspondence with a plurality of second axis points on the second range bucket, and the second range bucket is composed of a plurality of second circular tracks obtained by interpolation over the plurality of second axis points. Controlling the virtual camera, in response to a touch instruction on the graphical interface, to move within the set three-dimensional space according to the touch instruction and the motion authority includes: when a touch instruction is detected, acquiring the movement trajectory of the touch instruction; and controlling the virtual camera to move within the second range bucket according to the movement trajectory of the touch instruction and the second motion authority, or controlling the virtual camera to move between the second range bucket and the first range bucket according to the movement trajectory of the touch instruction and the third motion authority.
The optical axis of the virtual camera points to the second viewpoint corresponding to the second axis point at which the virtual camera is located.
In one possible design, the smaller the distance from the virtual camera to the virtual object, the more dispersed the plurality of second viewpoints; and if the virtual camera is on the second range bucket, the plurality of second viewpoints converge into a single point.
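One reading of this last design is that each second viewpoint is blended between its own per-axis-point position and a single shared center as a function of the camera distance. The sketch below illustrates that reading only; it is not a formula stated in the patent, and lerp, the viewpoint list, and the blending rule are assumptions introduced here.

```python
def lerp(p, q, t):
    """Linear interpolation between 3D points p and q."""
    return tuple(a + t * (b - a) for a, b in zip(p, q))

def effective_viewpoints(cam_dist, first_threshold, second_threshold,
                         near_viewpoints, shared_center):
    # t = 0 at the first (inner) range bucket, t = 1 at the second (outer) one.
    t = (cam_dist - first_threshold) / (second_threshold - first_threshold)
    t = max(0.0, min(1.0, t))
    # A closer camera spreads the viewpoints toward their own positions;
    # at the outer bucket they all coincide with the shared center.
    return [lerp(v, shared_center, t) for v in near_viewpoints]
```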
In a second aspect, an embodiment of the present invention provides an apparatus for displaying a virtual object. The apparatus is applied to a graphical interface; the graphical interface is loaded with the virtual object and a virtual camera, and the virtual camera is used to capture a display image of the virtual object. The apparatus includes:
a first determination module, configured to determine relative position information of the virtual camera and the virtual object;
a second determination module, configured to determine a motion authority of the virtual camera based on the relative position information; and
a control module, configured to control, in response to a touch instruction on the graphical interface, the virtual camera to move within a set three-dimensional space according to the touch instruction and the motion authority, so as to adjust the display image of the virtual object.
In one possible design, the relative position information includes the distance from the virtual camera to the virtual object.
When determining the motion authority of the virtual camera based on the relative position information, the second determination module is specifically configured to: if the distance from the virtual camera to the virtual object is less than a first threshold, determine that the motion authority of the virtual camera includes a first-direction motion authority and a second-direction motion authority, where the first direction and the second direction are preset according to the posture of the virtual object in three-dimensional space.
In one possible design, the set three-dimensional space includes a first curved surface boundary surrounding the virtual object, where the distance from the first curved surface boundary to the virtual object is less than or equal to the first threshold. The control module is specifically configured to: if a touch instruction is detected, acquire the movement trajectory of the touch instruction; determine a first motion path of the virtual camera on the first curved surface boundary according to the movement trajectory of the touch instruction and the motion authority; and control the virtual camera to move along the first motion path.
In one possible design, the virtual object includes a plurality of first reference points, and the first curved surface boundary includes a plurality of first circular tracks and a plurality of first curves passing through the first circular tracks, each first circular track being centered on a corresponding one of the first reference points.
In one possible design, when determining the first motion path of the virtual camera on the first curved surface boundary according to the movement trajectory of the touch instruction and the motion authority, the control module is specifically configured to: if the direction of the movement trajectory of the touch instruction is the first direction and the motion authority includes the first-direction motion authority, select, from the plurality of first curves, the first curve passing through the position of the virtual camera as the first motion path, where the starting point of the first motion path is the position of the virtual camera.
In one possible design, when determining the first motion path of the virtual camera on the first curved surface boundary according to the movement trajectory of the touch instruction and the motion authority, the control module is specifically configured to: if the direction of the movement trajectory of the touch instruction is the second direction and the motion authority includes the second-direction motion authority, take the first circular track i on which the virtual camera is located, among the plurality of first circular tracks, as the first motion path, where the second direction is parallel to the plane of the first circular track i.
In one possible design, the relative position information includes the distance from the virtual camera to the virtual object.
When determining the motion authority of the virtual camera based on the relative position information, the second determination module is specifically configured to: if the distance from the virtual camera to the virtual object is greater than the first threshold and less than a second threshold, determine that the motion authority of the virtual camera includes a first-direction motion authority, a second-direction motion authority, and a zoom motion authority;
the second threshold is greater than the first threshold, the first direction and the second direction are preset according to the posture of the virtual object in three-dimensional space, and the direction of the zoom motion is perpendicular to the first direction and the second direction.
In one possible design, the set three-dimensional space includes a first curved surface boundary and a second curved surface boundary surrounding the virtual object, and the three-dimensional space region between the second curved surface boundary and the first curved surface boundary. The distance from the first curved surface boundary to the virtual object is less than or equal to the first threshold; the distance from the second curved surface boundary to the virtual object is greater than the first threshold and less than or equal to the second threshold, the second threshold being greater than the first threshold.
When controlling, in response to a touch instruction on the graphical interface, the virtual camera to move within the set three-dimensional space according to the touch instruction and the motion authority, the control module is specifically configured to: if a touch instruction is detected, acquire the movement trajectory of the touch instruction; determine a second motion path of the virtual camera according to the movement trajectory of the touch instruction and the motion authority, where the second motion path lies on the second curved surface boundary or within the three-dimensional space region; and control the virtual camera to move along the second motion path.
In one possible design, the virtual object includes a plurality of second reference points, and the second curved surface boundary includes a plurality of second circular tracks and a plurality of second curves passing through the second circular tracks, each second circular track being centered on a corresponding one of the second reference points.
In one possible design, when determining the second motion path of the virtual camera according to the movement trajectory of the touch instruction and the motion authority, the control module is specifically configured to: if the direction of the movement trajectory of the touch instruction is the first direction and the motion authority includes the first-direction motion authority, select, from the plurality of second curves, the second curve passing through the position of the virtual camera as the second motion path, where the starting point of the second motion path is the position of the virtual camera.
In one possible design, when determining the second motion path of the virtual camera according to the movement trajectory of the touch instruction and the motion authority, the control module is specifically configured to: if the direction of the movement trajectory of the touch instruction is the second direction and the motion authority includes the second-direction motion authority, take the second circular track i on which the virtual camera is located, among the plurality of second circular tracks, as the second motion path, where the second direction is parallel to the plane of the second circular track i.
In one possible design, when determining the second motion path of the virtual camera according to the movement trajectory of the touch instruction and the motion authority, the control module is specifically configured to: if the movement trajectory of the touch instruction includes multiple directions and the motion authority includes the zoom motion authority, determine the second motion path of the virtual camera within the three-dimensional space region based on the position of the virtual camera and the multiple directions of the movement trajectory.
In one possible design, the relative position information includes the distance from the virtual camera to the virtual object. The second determination module is specifically configured to: if the distance from the virtual camera to the virtual object is equal to the first threshold, determine that the motion authority of the virtual camera includes a first motion authority within a first range bucket, where the set three-dimensional space includes the first range bucket surrounding the virtual object, and the distance from the first range bucket to the virtual object is the first threshold.
In one possible design, the virtual object includes a plurality of first viewpoints in one-to-one correspondence with a plurality of first axis points on the first range bucket, and the first range bucket is composed of a plurality of first circular tracks obtained by interpolation over the plurality of first axis points. When controlling, in response to a touch instruction on the graphical interface, the virtual camera to move within the set three-dimensional space according to the touch instruction and the motion authority, the control module is specifically configured to: if a touch instruction is detected, acquire the movement trajectory of the touch instruction; and control the virtual camera to move within the first range bucket according to the movement trajectory of the touch instruction and the first motion authority. The optical axis of the virtual camera points to the first viewpoint corresponding to the first axis point at which the virtual camera is located.
In one possible design, the relative position information includes the distance from the virtual camera to the virtual object. The second determination module is specifically configured to: if the distance from the virtual camera to the virtual object is greater than the first threshold and less than a second threshold, determine that the motion authority of the virtual camera includes a second motion authority within a second range bucket and a third motion authority between the second range bucket and the first range bucket. The second threshold is greater than the first threshold; the set three-dimensional space includes the second range bucket and the first range bucket surrounding the virtual object, the distance from the second range bucket to the virtual object being the second threshold and the distance from the first range bucket to the virtual object being the first threshold.
In one possible design, the virtual object includes a plurality of second viewpoints in one-to-one correspondence with a plurality of second axis points on the second range bucket, and the second range bucket is composed of a plurality of second circular tracks obtained by interpolation over the plurality of second axis points. When controlling, in response to a touch instruction on the graphical interface, the virtual camera to move within the set three-dimensional space according to the touch instruction and the motion authority, the control module is specifically configured to: when a touch instruction is detected, acquire the movement trajectory of the touch instruction; and control the virtual camera to move within the second range bucket according to the movement trajectory of the touch instruction and the second motion authority, or control the virtual camera to move between the second range bucket and the first range bucket according to the movement trajectory of the touch instruction and the third motion authority.
The optical axis of the virtual camera points to the second viewpoint corresponding to the second axis point at which the virtual camera is located.
In one possible design, the smaller the distance from the virtual camera to the virtual object, the more dispersed the plurality of second viewpoints; and if the virtual camera is on the second range bucket, the plurality of second viewpoints converge into a single point.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor and a memory. The memory stores executable code which, when executed by the processor, causes the processor to implement at least the method for displaying a virtual object of the first aspect.
An embodiment of the present invention further provides a system including a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method for displaying a virtual object described above.
An embodiment of the present invention further provides a computer-readable medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method for displaying a virtual object described above.
In the technical solution provided by the embodiments of the invention, the graphical interface is loaded with a virtual object and a virtual camera, and the virtual camera is used to capture the display image of the virtual object in the graphical interface. First, the motion authority of the virtual camera is determined based on the relative position information of the virtual camera and the virtual object; the motion authority represents the movements the virtual camera is authorized to make within the set three-dimensional space. For example, if the motion authority of the virtual camera includes a longitudinal motion authority, the virtual camera may move longitudinally within the set three-dimensional space. Then, in response to a touch instruction on the graphical interface, the virtual camera is controlled to move within the set three-dimensional space according to the touch instruction and the motion authority, so that the position of the virtual camera in the set three-dimensional space, and hence the display image in the graphical interface, can be changed through touch. Because the virtual camera moves within the set three-dimensional space according to the touch instruction and the motion authority, the virtual object can be displayed from multiple angles, giving the user a richer range of viewing angles, while the set three-dimensional space and the motion authority prevent the virtual object from being displayed abnormally at certain viewing angles, improving the user experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below show some embodiments of the invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for displaying a virtual object according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a virtual object according to an embodiment of the present invention;
fig. 3 to fig. 7 are schematic diagrams illustrating a process of displaying a virtual object according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating a graphical interface according to an embodiment of the present invention;
FIG. 9 is a schematic view of another graphical interface provided by an embodiment of the present invention;
FIG. 10 is a schematic diagram of another virtual object provided in accordance with an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a virtual object display apparatus according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of an electronic device corresponding to the display apparatus of the virtual object provided in the embodiment shown in fig. 11.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the embodiments of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and "a plurality of" typically means at least two.
The word "if", as used herein, may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (a stated condition or event) is detected" or "in response to detecting (a stated condition or event)", depending on the context.
In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
The virtual object display scheme provided by the embodiments of the invention can be executed by an electronic device, which may be a terminal device such as a smartphone, a tablet computer, a PC, or a notebook computer. In an alternative embodiment, the electronic device may have a service program installed on it for executing the virtual object display scheme, such as a game client, a three-dimensional scene editor, or a game editor.
The virtual object display scheme provided by the embodiments of the invention is suitable for display scenes of various virtual objects, such as virtual characters or props. For example, if the virtual object is a virtual character in a game, its display scene may be the virtual character display interface the player sees after entering the game. Or, if the virtual object is a game item, its display scene may be an item display interface invoked when the player wants to inspect the item during play.
The implementation of the virtual object display method is described below with reference to the following embodiments.
Fig. 1 is a flowchart of a method for displaying a virtual object according to an embodiment of the present invention. As shown in fig. 1, the method includes the following steps:
101. determining relative position information of the virtual camera and the virtual object;
102. determining a motion authority of the virtual camera based on the relative position information;
103. in response to a touch instruction on the graphical interface, controlling the virtual camera to move within a set three-dimensional space according to the touch instruction and the motion authority, so as to adjust the display image of the virtual object.
The method for displaying a virtual object in the embodiments of the invention is applied to a graphical interface, for example a virtual character display interface loaded in a service program. The graphical interface is loaded with a virtual object, and the virtual object is situated in three-dimensional space.
In order to show the posture of the virtual object in three-dimensional space, a virtual camera is also loaded in the graphical interface and is used to capture the display image of the virtual object. In essence, the viewing angle of the virtual camera represents the viewing angle from which the user of the graphical interface observes the virtual object; adjusting the position of the virtual camera in three-dimensional space therefore changes the user's viewing angle on the virtual object and adjusts its displayed image. In other words, to change the user's viewing angle on the virtual object, the position of the virtual camera in three-dimensional space must be adjusted: the virtual camera is controlled to move in three-dimensional space from an initial position to another position. The virtual camera is also referred to as a game camera, and its viewing angle is also referred to as the cropping region of the virtual camera.
After the virtual object and the virtual camera are loaded in the graphical interface, in step 101 the relative position information of the virtual camera and the virtual object is determined, which establishes the position of the virtual camera relative to the virtual object.
Further, in step 102, the motion authority of the virtual camera is determined based on this relative position information.
In practical applications, taking the virtual object to be a virtual character as an example, if the virtual character could be observed from any direction in three-dimensional space, the virtual character might be displayed abnormally: the viewing angle might penetrate into the virtual character's model, or point at an unflattering part of the virtual character.
To avoid such anomalies, the range of motion of the virtual camera is limited to a set three-dimensional space, and authorities are placed on the camera's movement within that space. A motion authority represents a movement the virtual camera is authorized to make within the set three-dimensional space. The motion authorities of the virtual camera include, but are not limited to, directional motion authorities and a zoom motion authority. In essence, a directional motion authority indicates that the virtual camera is authorized to be displaced in the corresponding direction, where each direction may be preset based on the posture of the virtual object in three-dimensional space. For example, if the motion authority of the virtual camera includes a longitudinal motion authority, the virtual camera may move in the longitudinal direction within the set three-dimensional space, such as upward or downward along the longitudinal axis.
In some cases the virtual camera is close to the virtual object. In such cases, to keep the viewing angle of the virtual camera from penetrating into the model of the virtual object, or to prevent other anomalies, the motion authority can be restricted while the virtual camera is near the virtual object; for example, the zoom motion of the virtual camera can be prohibited.
Specifically, the relative position information of the virtual camera and the virtual object includes the distance from the virtual camera to the virtual object. In step 102, determining the motion authority of the virtual camera based on the relative position information may be implemented as:
if the distance from the virtual camera to the virtual object is less than a first threshold, determining that the motion authority of the virtual camera includes a first-direction motion authority and a second-direction motion authority, where the first direction and the second direction are preset according to the posture of the virtual object in three-dimensional space.
If the distance from the virtual camera to the virtual object is less than the first threshold, the virtual camera can be considered close to the virtual object, that is, located in its vicinity. In this case, the virtual camera may be authorized to move in the first direction and in the second direction; that is, its motion authority is determined to include the first-direction motion authority and the second-direction motion authority. At the same time, to prevent the viewing angle of the virtual camera from penetrating into the model of the virtual object or causing other anomalies, the virtual camera is not authorized to perform zoom motion; that is, it is not granted the zoom motion authority. In practice, the zoom motion includes movement that zooms out from the virtual object and movement that zooms in on it.
For example, assume the virtual object is the virtual character shown in fig. 2. In fig. 2, the virtual character's posture in three-dimensional space is standing on the ground; based on this posture, the first direction is set to the vertical-axis direction shown in fig. 2 and the second direction to the horizontal-axis direction shown in fig. 2. If the distance from the virtual camera to the virtual object is less than the first threshold, the motion authority of the virtual camera is determined to include a longitudinal motion authority and a lateral motion authority. The longitudinal motion authority means the virtual camera is authorized to move up or down along the vertical axis. The lateral motion authority means the virtual camera is authorized to orbit the virtual character clockwise or counterclockwise in the horizontal-axis direction.
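The distance rules described here, together with the far-range case described later, might be coded as follows. This is a sketch, reusing the hypothetical MotionAuthority type from the summary above; the threshold names and values are illustrative assumptions, not values from the patent.

```python
FIRST_THRESHOLD = 2.0   # assumed close-range limit, in scene units
SECOND_THRESHOLD = 6.0  # assumed far-range limit, in scene units

def determine_motion_authority(dist: float) -> MotionAuthority:
    """Map the camera-to-object distance to a motion authority (step 102)."""
    if dist < FIRST_THRESHOLD:
        # Near the character: longitudinal and lateral motion only; zoom is
        # withheld so the viewing angle cannot penetrate the model.
        return MotionAuthority(first_direction=True, second_direction=True)
    if dist < SECOND_THRESHOLD:
        # Farther out: zoom toward / away from the character is also granted.
        return MotionAuthority(first_direction=True,
                               second_direction=True, zoom=True)
    return MotionAuthority()  # outside the set space: no movement authorized
```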
Further, after the motion authority of the virtual camera is determined in step 102, in step 103 the virtual camera is controlled, in response to a touch instruction on the graphical interface, to move within the set three-dimensional space according to the touch instruction and the motion authority. In an alternative embodiment, step 103 may be implemented as:
if a touch instruction is detected, acquiring the movement trajectory of the touch instruction; determining a first motion path of the virtual camera on a first curved surface boundary according to the movement trajectory of the touch instruction and the motion authority; and controlling the virtual camera to move along the first motion path.
Here the set three-dimensional space includes a curved surface boundary surrounding the virtual object, whose distance to the virtual object is less than or equal to the first threshold. For the sake of distinction, this boundary is referred to herein as the first curved surface boundary. Through the above steps, the virtual object can be observed at close range on the first curved surface boundary.
Specifically, the virtual object includes a plurality of reference points, referred to herein as first reference points for the sake of distinction. A first reference point may lie on the model surface of the virtual object or inside its model. The first curved surface boundary includes a plurality of first circular tracks and a plurality of first curves passing through them, each first circular track centered on a corresponding one of the first reference points. The first curved surface boundary is the part of the set three-dimensional space closest to the virtual object.
In practical applications, the first reference points may be called viewpoints, and a plurality of viewpoints may be set according to the visual feature information of the virtual object. Optionally, the radius of each first circular track is set according to the visual feature information of the virtual object, such as its size; the radii of the first circular tracks may be the same or different. Optionally, the first curved surface boundary further includes a plurality of curves connecting the first circular tracks, referred to herein as first curves for the sake of distinction. The first curves can be obtained by interpolating over the radii of the first circular tracks.
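One way to realize a boundary of circular tracks with interpolated radii is sketched below. The reference-point data, the use of linear interpolation, and the coordinate conventions (y up, object axis at the origin) are assumptions introduced for illustration.

```python
import math

# Hypothetical first reference points, given as (height y, track radius).
# In the patent these would derive from the object's visual features.
REFERENCE_TRACKS = [(0.3, 0.7), (1.0, 0.9), (1.7, 0.6)]  # legs, torso, head

def radius_at_height(y: float) -> float:
    """Interpolate the track radii to get the boundary radius at height y.
    Holding the azimuth fixed while y varies traces out a 'first curve'."""
    pts = sorted(REFERENCE_TRACKS)
    if y <= pts[0][0]:
        return pts[0][1]
    if y >= pts[-1][0]:
        return pts[-1][1]
    for (y0, r0), (y1, r1) in zip(pts, pts[1:]):
        if y0 <= y <= y1:
            t = (y - y0) / (y1 - y0)
            return r0 + t * (r1 - r0)

def boundary_point(y: float, azimuth: float):
    """A point on the first curved surface boundary: on the circular track
    at height y (centered on the object's vertical axis), at that azimuth."""
    r = radius_at_height(y)
    return (r * math.cos(azimuth), y, r * math.sin(azimuth))
```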
In an alternative embodiment, determining the first motion path of the virtual camera on the first curved surface boundary according to the movement trajectory of the touch instruction and the motion authority may be implemented as:
if the direction of the movement trajectory of the touch instruction is the first direction and the motion authority includes the first-direction motion authority, selecting, from the plurality of first curves, the first curve passing through the position of the virtual camera as the first motion path, where the starting point of the first motion path is the position of the virtual camera.
For example, assume the virtual object is the virtual character shown in fig. 3, whose posture in three-dimensional space is standing on the ground; based on this posture, the first direction is set to the vertical-axis direction shown in fig. 3. Assume the graphical interface is an interface in a game client, the motion authority includes the longitudinal motion authority, the plurality of first curves includes L1 and L2 as shown in fig. 3, and the virtual camera is located at point d on L1.
Under these assumptions, when a touch point is detected as the user touches the graphical interface, the movement trajectory of the touch point is acquired. If the trajectory runs along the vertical axis, the first curve L1 passing through point d is selected from the plurality of first curves as the first motion path, with point d as its starting point.
If the trajectory of the touch point runs upward along the vertical axis, the virtual camera is controlled to move from point d along L1 toward the first circular track r1. Optionally, the optical axis of the virtual camera then points to the center of r1, namely point a.
If the trajectory of the touch point runs downward along the vertical axis, the virtual camera is controlled to move from point d along L1 toward the first circular track r2 or r3. Optionally, when the virtual camera reaches r2, its optical axis points to the center of r2, namely point b; when it passes r2 and reaches r3, its optical axis switches from point b to the center of r3, namely point c.
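A sketch of this vertical-swipe case follows, reusing boundary_point and REFERENCE_TRACKS from the previous sketch. The Camera fields, the height limits, and the snapping of the optical axis to the nearest track center are assumptions modeled on the fig. 3 description.

```python
from dataclasses import dataclass

MIN_Y, MAX_Y = 0.3, 1.7  # assumed vertical extent of the first curves

@dataclass
class Camera:
    height: float = 1.0      # position along the first curve
    azimuth: float = 0.0     # which first curve the camera sits on
    position: tuple = (0.9, 1.0, 0.0)
    target: tuple = (0.0, 1.0, 0.0)  # where the optical axis points

def on_vertical_swipe(cam: Camera, delta_y: float) -> None:
    # Move along the first curve through the camera's current position.
    cam.height = max(MIN_Y, min(MAX_Y, cam.height + delta_y))
    cam.position = boundary_point(cam.height, cam.azimuth)
    # Point the optical axis at the center of the circular track the camera
    # is nearest to (points a, b, c in fig. 3), switching as tracks are crossed.
    nearest_y = min((y for y, _ in REFERENCE_TRACKS),
                    key=lambda y: abs(y - cam.height))
    cam.target = (0.0, nearest_y, 0.0)
```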
In another alternative embodiment, determining the first motion path of the virtual camera on the first curved surface boundary according to the movement trajectory of the touch instruction and the motion authority may be implemented as:
if the direction of the movement trajectory of the touch instruction is the second direction and the motion authority includes the second-direction motion authority, taking the first circular track i on which the virtual camera is located, among the plurality of first circular tracks, as the first motion path, where the second direction is parallel to the plane of the first circular track i.
For example, assume the virtual object is the virtual character shown in fig. 4, whose posture in three-dimensional space is standing on the ground; based on this posture, the second direction is set to the horizontal-axis direction shown in fig. 4. Assume the graphical interface is an interface in a game client, the motion authority includes the lateral motion authority, and the virtual camera is located at point e on the first circular track r2.
Under these assumptions, when a touch point is detected as the user touches the graphical interface, the movement trajectory of the touch point is acquired. If the trajectory runs in the horizontal-axis direction, the first circular track r2 on which the virtual camera is located is taken as the first motion path; the plane of r2 is parallel to the horizontal-axis direction.
If the trajectory of the touch point runs left along the horizontal axis, the virtual camera is controlled to move clockwise along r2 from point e. Optionally, its optical axis then points to the center of r2, namely point b. The angle of the clockwise rotation can be set according to the distance the touch point moves along the horizontal axis.
If the trajectory of the touch point runs right along the horizontal axis, the virtual camera is controlled to move counterclockwise along r2 from point e, with its optical axis again pointing to point b, and the angle of the counterclockwise rotation set according to the distance the touch point moves along the horizontal axis.
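The horizontal-swipe case can be sketched the same way. The pixels-to-radians factor and the sign convention (left swipe = clockwise) are assumptions chosen to match the fig. 4 description.

```python
SWIPE_TO_RADIANS = 0.01  # assumed mapping from swipe distance to angle

def on_horizontal_swipe(cam: Camera, delta_x: float) -> None:
    # Orbit along the circular track the camera currently occupies:
    # a left swipe (delta_x < 0) moves clockwise, a right swipe
    # counterclockwise, by an angle proportional to the swipe distance.
    cam.azimuth += delta_x * SWIPE_TO_RADIANS
    cam.position = boundary_point(cam.height, cam.azimuth)
    # The optical axis keeps pointing at the center of that track (point b).
    nearest_y = min((y for y, _ in REFERENCE_TRACKS),
                    key=lambda y: abs(y - cam.height))
    cam.target = (0.0, nearest_y, 0.0)
```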
Through the above steps, the virtual object can be observed on the first curved surface boundary.
Of course, different virtual objects have different visual feature information, and the shape of the first curved surface boundary constructed from that information differs accordingly. In practical applications, the shape of the first curved surface boundary can be tuned, according to the display effect of the virtual object, by adjusting the positions of the first reference points in three-dimensional space, the parameters of the first circular tracks, and the parameters of the first curves.
In another alternative embodiment, in 102, determining the motion authority of the virtual camera based on the relative position information of the virtual camera and the virtual object may be implemented as:
and if the distance from the virtual camera to the virtual object is greater than a first threshold value and the distance from the virtual camera to the virtual object is less than a second threshold value, determining that the motion authority of the virtual camera comprises a first direction motion authority, a second direction motion authority and a zooming motion authority.
Wherein the second threshold is greater than the first threshold. Similarly to the above description, the first direction and the second direction may also be set in advance according to the posture of the virtual object in the three-dimensional space. The direction of the zoom motion is perpendicular to the first direction and the second direction.
If the distance from the virtual camera to the virtual object is greater than the first threshold value and the distance from the virtual camera to the virtual object is less than the second threshold value, it can be considered that the distance from the virtual camera to the virtual object is farther, that is, the virtual camera is located far away from the virtual object. In this case, the virtual camera may be authorized to perform the authority to move in each direction, such as the authority to move in the first direction and the authority to move in the second direction. To enable the user to observe more details of the virtual object, the virtual camera may also be granted zoom motion rights. In practice, the zooming movement includes a movement of zooming out on the virtual object and a movement of zooming in on the virtual object, and thus, the direction of the zooming movement is generally toward or away from the virtual character. In short, the direction toward the virtual character is the direction toward the virtual object from the virtual camera, and the direction away from the virtual character is the direction away from the virtual object from the virtual camera. Accordingly, the motion for zooming in on the virtual object is the motion of the virtual camera toward the virtual object, and the motion for zooming out on the virtual object is the motion of the virtual camera away from the virtual object.
For example, assume that the virtual object is a virtual character as shown in fig. 5. In fig. 5, the virtual character is in a posture standing on the ground in the three-dimensional space; based on this posture, the first direction is set to the vertical axis direction shown in fig. 5 and the second direction is set to the horizontal axis direction shown in fig. 5. If the distance from the virtual camera to the virtual object is greater than the first threshold and less than the second threshold, it is determined that the motion authority of the virtual camera comprises a longitudinal motion authority, a lateral motion authority and a zoom motion authority. The meaning of the longitudinal motion authority and the lateral motion authority is similar to that described above and is not repeated here.
The zoom motion authority means that the virtual camera is authorized to move toward and away from the virtual object. For example, assume that the virtual camera is located at the point f shown in fig. 6. In fig. 6, the movement of the virtual camera toward the virtual object is: the virtual camera moves from point f towards point b, where the direction of movement of the virtual camera is shown by the arrow in fig. 6.
Furthermore, after determining the motion authority of the virtual camera in 102, in another optional embodiment, in 103, in response to a touch instruction to the graphical interface, controlling the virtual camera to move in the set three-dimensional space according to the touch instruction and the motion authority may be implemented as:
if the touch instruction is detected, acquiring a moving track of the touch instruction; determining a second motion path of the virtual camera according to the movement track and the motion authority of the touch instruction, wherein the second motion path is located on a second curved surface boundary, or the second motion path is located in a three-dimensional space area; and controlling the virtual camera to move on the second motion path.
The set three-dimensional space comprises a first curved surface boundary and a second curved surface boundary which surround the virtual object, and a three-dimensional space region between the second curved surface boundary and the first curved surface boundary. Optionally, the first curved surface boundary is disposed inside the second curved surface boundary, where the inner side is the side on which the virtual object is located. That is, the distance from the first curved surface boundary to the virtual object is less than or equal to the first threshold, the distance from the second curved surface boundary to the virtual object is greater than the first threshold and less than or equal to the second threshold, and the second threshold is greater than the first threshold. For ease of understanding, assume that the first curved surface boundary and the second curved surface boundary are two barrel-shaped structures, with the barrel corresponding to the first curved surface boundary arranged inside the barrel corresponding to the second curved surface boundary. The two barrel structures and the three-dimensional space region between them then constitute the set three-dimensional space. It is to be understood that the shape of the set three-dimensional space is determined by the visual characteristic information of the virtual object; for example, the height of the set three-dimensional space may be determined by the height of the virtual object.
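For illustration, the sketch below (a simplification assuming circular cross-sections, with hypothetical names) clamps a proposed camera position into the region between the two barrel-shaped boundaries, so the camera can never pass inside the first curved surface boundary or outside the second:

```python
import math

def clamp_to_region(cam_pos, anchor, r_inner, r_outer):
    """Clamp a camera position into the region between the two boundaries.

    `anchor` is the reference point whose circular tracks are coplanar with
    the camera (point b in the figures); r_inner and r_outer are the first
    and second track radii in that plane. The clamp acts radially in the
    horizontal plane, leaving the camera height unchanged.
    """
    dx, dz = cam_pos[0] - anchor[0], cam_pos[2] - anchor[2]
    dist = math.hypot(dx, dz) or 1e-6   # guard against a zero-length radius
    scale = min(max(dist, r_inner), r_outer) / dist
    return (anchor[0] + dx * scale, cam_pos[1], anchor[2] + dz * scale)
```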
Specifically, the virtual object includes a plurality of reference points. For purposes of distinction, these reference points are referred to herein as second reference points. Similar to the first reference points, a second reference point may be on the model surface of the virtual object or inside the model of the virtual object. The second curved surface boundary includes a plurality of second circular tracks and a plurality of second curves passing through the plurality of second circular tracks, and each second circular track is centered on a corresponding second reference point among the plurality of second reference points. The second curved surface boundary comprises the positions in the set three-dimensional space that are farthest from the virtual object.
In practical applications, according to the visual characteristic information of the virtual object, the centers of the first circular track and the second circular track in the same plane may be the same or different, and the present invention is not limited herein. That is, the plurality of first reference points and the plurality of second reference points may or may not coincide.
In practical applications, the second reference points may also be referred to as viewpoints, and the setting rule is similar to the above and is not expanded here. The difference between the two kinds of viewpoints is that they correspond to different curved surface boundaries.
Optionally, the radius of each second circular track is set according to visual characteristic information of the virtual object, such as size information. The radii of the second circular tracks may be the same or different. Optionally, the second curved surface boundary further comprises a plurality of curves connected between the plurality of second circular tracks. For the sake of distinction, these curves are referred to herein as second curves. The plurality of second curves may be obtained by interpolating the radii of the plurality of second circular tracks.
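One simple way to realize this interpolation, sketched below under the assumption of piecewise-linear blending between adjacent tracks (a spline could equally be used; the heights and names are illustrative only):

```python
def track_radius(height, rigs):
    """Interpolate the boundary radius at a given height between tracks.

    `rigs` is a list of (height, radius) pairs for the circular tracks,
    e.g. [(0.4, r33), (1.0, r22), (1.6, r11)] from bottom to top; heights
    outside the covered span are clamped to the nearest track's radius.
    """
    rigs = sorted(rigs)
    if height <= rigs[0][0]:
        return rigs[0][1]
    for (h0, r0), (h1, r1) in zip(rigs, rigs[1:]):
        if height <= h1:
            t = (height - h0) / (h1 - h0)
            return r0 + t * (r1 - r0)  # linear blend between adjacent tracks
    return rigs[-1][1]
```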
Taking the virtual character shown in fig. 5 as an example, for convenience of description, it is still assumed that the virtual character includes three points a, b, and c, which are taken as second reference points. Three second circular tracks and the second curves L3 and L4 shown in fig. 5 are obtained by taking the three points a, b, and c as centers and using the track radii r11, r22, and r33 corresponding to the three points in the size information of the virtual object. The second curved surface boundary is thus established based on the three second circular tracks and the second curves L3 and L4. For ease of viewing, the first curved surface boundary is not shown in fig. 5.
For ease of comparative illustration of the relationship between the first curved surface boundary and the second curved surface boundary, fig. 6 shows a first circular track r2 in the first curved surface boundary and a second circular track r22 in the second curved surface boundary. In fig. 6, the center of the second circular track that is coplanar with the first circular track r2 is also assumed to be point b. In this case, the second curved surface boundary includes a second circular track r22 centered at point b with radius r22, where r22 is greater than r2.
Based on the second curved surface boundary described above, in an optional embodiment, the determining the second motion path of the virtual camera according to the movement track and the motion authority of the touch instruction may be implemented as:
and if the direction of the moving track of the touch instruction is a first direction and the motion authority comprises first direction motion authority, selecting a second curve passing through the position where the virtual camera is located from the plurality of second curves as a second motion path, wherein the starting point of the second motion path is the position where the virtual camera is located.
For example, assume that the virtual object is a virtual character as shown in fig. 5. In fig. 5, the virtual character is standing on the ground in the posture in the three-dimensional space, and the first direction is set to the vertical axis direction shown in fig. 5 based on the posture of the virtual character. The graphical interface is assumed to be an interface in a game client. It is assumed that the motion right includes a longitudinal motion right. Assume that the plurality of second curves includes L3 and L4 as shown in fig. 5. Assume that the virtual camera is located at point f on L3.
Based on the above assumptions, if a touch point is detected when the user touches the graphical interface, the movement track of the touch point is obtained. If the direction of the movement track of the touch point is along the vertical axis, the second curve L3 passing through the point f is selected from the plurality of second curves as the second motion path, where the starting point of the second motion path is the point f. If the movement track of the touch point is directed upward along the vertical axis, the virtual camera is controlled to move from the point f along L3 toward the second circular track r11. Optionally, when the virtual camera reaches the second circular track r11, its optical axis switches from pointing at point b, the center of the second circular track r22, to pointing at point a, the center of the second circular track r11. If the movement track of the touch point is directed downward along the vertical axis, the virtual camera is controlled to move from the point f along L3 toward the second circular track r33. In this case, when the virtual camera reaches the second circular track r33, its optical axis switches from pointing at point b to pointing at point c, the center of the second circular track r33.
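A compact sketch of this vertical movement with the optical-axis center switch, assuming the camera height is clamped between the lowest and highest tracks and that the axis targets the reference point of the nearest track (all names, the sign convention, and the swipe-to-distance constant are illustrative assumptions):

```python
def move_vertically(cam_height, swipe_dy, track_targets, speed=0.002):
    """Move the camera along a second curve and retarget its optical axis.

    `track_targets` maps each circular track's height to its reference
    point, e.g. {1.6: point_a, 1.0: point_b, 0.4: point_c}. An upward swipe
    (swipe_dy > 0, a convention assumed here) raises the camera toward the
    top track; the optical axis then aims at the reference point of the
    nearest track, mimicking the b-to-a or b-to-c switch described above.
    """
    heights = sorted(track_targets)
    new_h = min(max(cam_height + swipe_dy * speed, heights[0]), heights[-1])
    target = track_targets[min(heights, key=lambda h: abs(h - new_h))]
    return new_h, target
```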
In another optional embodiment, the determining the second motion path of the virtual camera according to the movement track and the motion authority of the touch instruction may be implemented as:
if the direction of the moving track of the touch instruction is a second direction and the motion authority includes a second direction motion authority, taking a second circular track i where the virtual camera is located in the plurality of second circular tracks as a second motion path, wherein the second direction is parallel to the plane where the second circular track i is located.
For example, assume that the virtual object is a virtual character as shown in fig. 5. In fig. 5, the virtual character is standing on the ground in the posture in the three-dimensional space, and the second direction is set to the horizontal axis direction shown in fig. 5 based on the posture of the virtual character. The graphical interface is assumed to be an interface in a game client. It is assumed that the motion right includes a lateral motion right. Assume that the virtual camera is located at point f on the second circular trajectory r 22.
Based on the above assumptions, if a touch point is detected when the user touches the graphical interface, the movement track of the touch point is obtained. If the direction of the movement track of the touch point is the horizontal axis direction, the second circular track r22 where the virtual camera is located among the plurality of second circular tracks is taken as the second motion path, and the plane of the second circular track r22 is parallel to the horizontal axis direction. If the movement track of the touch point is directed leftward along the horizontal axis, the virtual camera is controlled to move clockwise along the second circular track r22 from the point f. Optionally, the optical axis of the virtual camera in this case points to the center of the second circular track r22, i.e., point b. The angle of the clockwise movement can be set according to the distance the touch point moves along the horizontal axis direction. Alternatively, if the movement track of the touch point is directed rightward along the horizontal axis, the virtual camera is controlled to move counterclockwise along the second circular track r22 from the point f. The angle of the counterclockwise movement can be set according to the distance the touch point moves along the horizontal axis direction.
It will be appreciated that in addition to the lateral and longitudinal movements according to the movement rules shown in the two embodiments, in practical applications the virtual camera may also perform movements in the three-dimensional spatial region between the first curved surface boundary and the second curved surface boundary. The following describes the movement rule of the virtual camera in the three-dimensional space region with reference to a specific embodiment:
in another optional embodiment, determining the second motion path of the virtual camera according to the movement track and the motion authority of the touch instruction includes:
if the direction of the moving track of the touch instruction comprises a plurality of directions and the motion authority comprises a zooming motion authority, determining a second motion path of the virtual camera in the three-dimensional space area based on the position of the virtual camera and the moving directions of the plurality of tracks.
The moving track of the touch instruction comprises a plurality of tracks formed by a plurality of touch points. The direction of the moving track of the touch instruction includes a plurality of directions, which is to say, the directions of the plurality of tracks are different.
Of course, the virtual camera can also perform horizontal and vertical movements in the three-dimensional space region, and the specific rule is similar to the above-described horizontal and vertical movement rule, and is not expanded here.
Also taking the virtual character shown in fig. 5 described above as an example, in fig. 5, the virtual character is in a posture standing on the ground in three-dimensional space, the first direction is set to the vertical axis direction shown in fig. 5 based on the posture of the virtual character, and the second direction is set to the horizontal axis direction shown in fig. 5 based on the posture of the virtual character. The graphical interface is assumed to be an interface in a game client. Assume that the motion permission comprises a zoom motion permission. Assume that the plurality of second curves includes L3 and L4 as shown in fig. 5. Assume that the virtual camera is located at point f on the second circular trajectory r 22.
Based on the assumption, if it is detected that the user touches the graphical interface, at least two touch points are formed, and the movement tracks of the touch points are obtained. And if the directions of the plurality of movement tracks corresponding to the touch points are different, determining a second movement path of the virtual camera in the three-dimensional space area based on the position of the virtual camera, namely the f point, and the movement directions of the plurality of tracks.
Assume that the graphical interface is the interface 700 shown in fig. 7. Assume that the touch points include m1 and m2. In fig. 7, the direction of the movement track formed by m1 and the direction of the movement track formed by m2 are opposite, and from these movement tracks it can be judged that the user's operation is: both hands sliding outward. Assume that the virtual object p in fig. 7 is the virtual character in fig. 5, and that when both hands slide outward the user intends to zoom in on the virtual object p. Of course, in practical applications, the correspondence between user operations and zoom motions may be established in advance; for example, sliding both hands inward corresponds to zooming out on the virtual object, and sliding both hands outward corresponds to zooming in on it. The zoom factor is set according to the movement tracks of the touch points.
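A minimal sketch of how such a two-finger gesture might be classified, comparing the distance between the touch points at the start and end of their tracks (the 5% dead zone and the names are illustrative choices, not from the patent):

```python
import math

def classify_pinch(p1_start, p1_end, p2_start, p2_end):
    """Classify a two-finger gesture as a zoom-in or zoom-out operation.

    Fingers sliding apart enlarge the virtual object (zoom in); fingers
    sliding together shrink it (zoom out), matching the m1/m2 example above.
    Each argument is an (x, y) screen coordinate.
    """
    d_start = math.dist(p1_start, p2_start)
    d_end = math.dist(p1_end, p2_end)
    if d_end > d_start * 1.05:
        return "zoom_in"    # both hands slide outward
    if d_end < d_start * 0.95:
        return "zoom_out"   # both hands slide inward
    return "none"           # movement too small to count as a pinch
```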
It is also assumed that the straight line determined by the point f and the point b intersects the first circular track r2 at the point n, as shown in fig. 6. Based on the assumptions of figs. 5, 6, and 7, the line segment fn determined by the point f and the point n is taken as the second motion path. That is, the virtual camera is controlled to move within the three-dimensional space region from the point f along the line segment fn, i.e., toward point b, the common center of the second circular track r22 and the first circular track r2. Through the above steps, the virtual object can be observed from the second curved surface boundary and from within the three-dimensional space region between the first curved surface boundary and the second curved surface boundary.
Optionally, the motion parameters of the virtual camera during the zoom motion can be calculated by proportional interpolation. The virtual camera acquires corresponding points on the first curved surface boundary and the second curved surface boundary, and the motion parameters during the zoom motion, such as the zoom ratio and the motion speed, are determined based on the positions of these corresponding points and their display scales.
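As one possible reading of this proportional calculation, the sketch below linearly interpolates a display scale and a motion speed from the camera's position between the corresponding points on the two boundaries (all names and the speed ramp are assumptions):

```python
import math

def zoom_parameters(cam_pos, inner_pt, outer_pt, scale_near, scale_far):
    """Interpolate zoom parameters between the two boundary points.

    `inner_pt` and `outer_pt` are the corresponding points on the first and
    second curved surface boundaries (n and f in fig. 6). The display scale
    runs from `scale_near` at the inner boundary to `scale_far` at the
    outer boundary; a simple speed ramp is included for illustration.
    """
    span = math.dist(inner_pt, outer_pt) or 1e-6
    t = min(max(math.dist(cam_pos, inner_pt) / span, 0.0), 1.0)
    display_scale = scale_near + t * (scale_far - scale_near)
    move_speed = 0.5 + t    # illustrative: faster when farther out
    return display_scale, move_speed
```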
It will be appreciated that the second reference points of the virtual object also move as the virtual object is scaled. When the virtual camera moves from the second curved surface boundary to the first curved surface boundary, the distance between the virtual camera and the virtual object gradually decreases; the displayed size of the virtual object gradually increases, and the distances between the second reference points also gradually increase. Conversely, when the virtual camera moves from the first curved surface boundary to the second curved surface boundary, the distance between the virtual camera and the virtual object gradually increases; the displayed size of the virtual object gradually decreases, and the distances between the second reference points also gradually decrease. In short, the second reference points gradually disperse as the virtual camera moves from the second curved surface boundary to the first curved surface boundary, and gradually gather as the virtual camera moves from the first curved surface boundary to the second curved surface boundary. Optionally, when the virtual camera is farthest from the virtual object, the second reference points gather into a single second reference point.
It should be noted that, in practical applications, when the virtual camera is located on the second curved surface boundary, the longitudinal distances of the three second reference points a, b, and c are not necessarily as shown in fig. 5. Based on the above description of the zoom motion and the plurality of second reference points, when the virtual camera is located on the second curved surface boundary, the longitudinal distance between the second reference points may vary with the distance between the virtual camera and the virtual object.
Based on the second reference point movement rule, the corresponding relationship between the zoom factor and the plurality of second reference points may also be obtained in advance. Still based on the assumptions above for fig. 5, 6, and 7, as the virtual camera moves from point f to point b, in interface 700, virtual object p is gradually enlarged as the virtual camera moves. As the virtual object p is gradually enlarged, the longitudinal distances of the three second reference points a, b, and c are gradually increased according to the corresponding relationship between the zoom factor and the plurality of second reference points. Conversely, when the virtual camera moves from point b to point f, the virtual object p gradually shrinks in the interface 700 as the virtual camera moves. As the virtual object p is gradually reduced, the longitudinal distances of the three second reference points a, b and c are gradually reduced according to the corresponding relationship between the scaling factor and the plurality of second reference points. In practice, the zoom factor is determined by the distance of the virtual camera from the virtual object.
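The dispersion and gathering of the reference points can be modeled by scaling their spacing about a common center with the zoom factor, as in this sketch (a linear model assumed purely for illustration):

```python
def spread_reference_points(base_heights, zoom_factor):
    """Spread or gather the second reference points with the zoom factor.

    `base_heights` holds the reference points' longitudinal positions at a
    zoom factor of 1.0 (e.g. for points a, b, and c). A larger factor
    disperses the points about their mean; a factor approaching zero
    gathers them toward a single point, as described above.
    """
    mean_h = sum(base_heights) / len(base_heights)
    return [mean_h + (h - mean_h) * zoom_factor for h in base_heights]
```

For example, `spread_reference_points([1.6, 1.0, 0.4], 0.0)` collapses all three points to the mean height 1.0, mirroring the case in which the camera is farthest from the virtual object.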
Through the above steps, the virtual object can be observed from multiple angles on the second curved surface boundary and within the three-dimensional space region between the first curved surface boundary and the second curved surface boundary.
Of course, different virtual objects have different visual characteristic information, and the shape of the second curved surface boundary constructed based on that information differs accordingly. In practical applications, the shape of the second curved surface boundary can be adjusted according to the display effect of the virtual object by adjusting the position parameters of the second reference points in the three-dimensional space, the parameters of the second circular tracks, and the parameters of the second curves.
For example, the parameters of each reference point and each circular track may be set in the editor shown in fig. 8. In fig. 8, the parameters to be set include the Binding Mode, the Spline Curvature, and the radius and height of each circular track. For example, TopRig, MiddleRig, and BottomRig correspond to the parameter setting options of the upper, middle, and lower circular tracks, respectively.
In practical applications, the parameters of each circular track can also be set in the editor shown in fig. 9. The Follow option is used to set the parameters corresponding to the reference points, and the FreeLook option is used to set the axis points in the set three-dimensional space (namely, the moving points along which the virtual camera moves in the set three-dimensional space).
In practical applications, the curved surface boundary may also include other shaped tracks, such as an elliptical track, which is not limited in the embodiments of the present invention. The curved surface boundary shown in fig. 10 includes two elliptical tracks and 3 circular tracks.
In addition to the set three-dimensional space (i.e. the first curved surface boundary, the second curved surface boundary, and the three-dimensional space region between the first curved surface boundary and the second curved surface boundary) set in the above steps, in order to realize multi-angle observation of the virtual object, the set three-dimensional space and the observation angle where the virtual camera is located may be set in the following manner:
it is assumed that the method for presenting virtual objects is applied to a graphical interface in which virtual objects are loaded. The virtual object is assumed to be in three-dimensional space. In order to show the posture of a virtual object in the three-dimensional space, a virtual camera needs to be loaded to acquire a display image of the virtual object.
In order to prevent the user from observing abnormal regions of the virtual character, the observation angle of the virtual camera needs to be limited. In short, the position and viewing angle of the virtual camera need to be constrained. Similarly to the above, the three-dimensional space in which the virtual camera is allowed to move is referred to as the set three-dimensional space. One way of setting this three-dimensional space may be:
first, a first range bucket corresponding to the virtual camera is set, and the first range bucket comprises three axis points.
Specifically, three viewpoints (i.e., the first reference points above) on the virtual object are set. For example, when the virtual object is a character, the three viewpoints may be located at the neck, waist, and knee of the virtual character, respectively. For the sake of distinction, these viewpoints are referred to as first viewpoints. The moving points of the virtual camera are set based on the three first viewpoints, i.e., three axis points respectively corresponding to the three first viewpoints. For the sake of distinction, these axis points are referred to as first axis points. Optionally, the distance between a first axis point and its corresponding first viewpoint is greater than or equal to a preset distance, in order to prevent the user from observing abnormal areas of the virtual character and to prevent the virtual camera from colliding with the virtual model. Three circular tracks (namely, the first circular tracks above) corresponding to the three axis points are calculated through interpolation, so that the three circular tracks form a range bucket corresponding to the virtual camera. For distinction, this range bucket is referred to as the first range bucket.
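A simplified construction of such a range bucket (the +x offset direction, the sample count, and all names are assumptions; a real implementation would interpolate additional tracks between the three):

```python
import math

def build_range_bucket(viewpoints, offset, samples=16):
    """Construct a range bucket from three first viewpoints.

    Each viewpoint (x, y, z) is paired with an axis point pushed `offset`
    away, and a horizontal circular track of radius `offset` is sampled
    around the viewpoint; `offset` plays the role of the preset minimum
    distance between an axis point and its viewpoint.
    """
    bucket = []
    for vx, vy, vz in viewpoints:
        track = [(vx + offset * math.cos(a), vy, vz + offset * math.sin(a))
                 for a in (2 * math.pi * i / samples for i in range(samples))]
        bucket.append({"viewpoint": (vx, vy, vz),
                       "axis_point": (vx + offset, vy, vz),
                       "track": track})
    return bucket
```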
Based on the first range bucket, in 102, determining the motion authority of the virtual camera based on the relative position information may also be specifically implemented as:
if the distance of the virtual camera to the virtual object is equal to the first threshold, determining that the motion permission of the virtual camera includes a first motion permission in the first range bucket. The three-dimensional space is set to comprise a first range bucket surrounding the virtual object, and the distance from the first range bucket to the virtual object is a first threshold value.
Further, it is assumed that the virtual object includes a plurality of first viewpoints, that the plurality of first viewpoints correspond one-to-one to a plurality of first axis points in the first range bucket, and that the first range bucket is composed of a plurality of first circular tracks interpolated from the plurality of first axis points. Based on the above assumptions, in 103, in response to a touch instruction for the graphical interface, controlling the virtual camera to move in the set three-dimensional space according to the touch instruction and the motion authority may be further specifically implemented as:
if the touch instruction is detected, acquiring a moving track of the touch instruction; and controlling the virtual camera to move in the first range bucket according to the movement track of the touch instruction and the first movement authority.
The optical axis of the virtual camera points to the first viewpoint corresponding to the first axis point where the virtual camera is located. Specifically, in the first range bucket, the virtual camera can move between each first axis point and the interpolation points corresponding to each first axis point. In fact, the direction of movement of the virtual camera in the first range bucket includes the longitudinal direction, the lateral direction, and any direction in which the two are superimposed. However, regardless of the direction in which the virtual camera moves, its optical axis always points from the first axis point toward the corresponding first viewpoint.
For example, the course of motion of the virtual camera in the first range bucket may be as follows. Assume the circular tracks formed by the first axis points are parallel to the ground, the first axis points include x, y, and z, the corresponding first viewpoints are x', y', and z', and the virtual camera is at the first axis point x. When the virtual camera moves laterally, it moves around the virtual object along the circular track corresponding to the first axis point x, and its optical axis always points from the first axis point x toward the corresponding first viewpoint x'. When the virtual camera moves longitudinally, it moves along the interpolation points between the first axis point x and the first axis point y, and its optical axis points from the current interpolation point toward the corresponding first viewpoint x' (or first viewpoint y'). The method of switching the optical axis orientation during the longitudinal movement is similar to the optical axis switching described above and is not expanded here.
Further, a second range bucket corresponding to the virtual camera is set, and the second range bucket comprises three axis points.
Specifically, three viewpoints (i.e., the second reference points above) on the virtual object are set. For the sake of distinction, these viewpoints are referred to as second viewpoints. It will be appreciated that the second viewpoints may or may not coincide with the first viewpoints. The moving points of the virtual camera, i.e., three axis points respectively corresponding to the three second viewpoints, are set based on the three second viewpoints. For the sake of distinction, these axis points are referred to as second axis points. Three circular tracks (namely, the second circular tracks above) corresponding to the three axis points are calculated through interpolation, so that the three circular tracks form a range bucket corresponding to the virtual camera. For distinction, this range bucket is referred to as the second range bucket.
In practical applications, it should be noted that the distance from the second axis point to the second viewpoint is greater than the distance from the first axis point to the first viewpoint. Thus, the second range bucket surrounds the outside of the first range bucket, which surrounds the outside of the virtual object.
In the second range bucket, the virtual camera may move between each second axis point and the interpolation points corresponding to each second axis point. It may also move in the three-dimensional space region between the second range bucket and the first range bucket. In fact, the direction of movement of the virtual camera in the second range bucket likewise includes the longitudinal direction, the lateral direction, and any direction in which the two are superimposed. However, regardless of the direction in which the virtual camera moves, its optical axis always points from the second axis point toward the corresponding second viewpoint.
It should be noted that the viewpoint moves as the virtual camera moves between the two range buckets.
Optionally, the smaller the distance from the virtual camera to the virtual object, the more dispersed the plurality of second viewpoints. Specifically, as the virtual camera moves from the second range bucket to the first range bucket, the three second viewpoints can be observed to gradually separate as the relative distance from the virtual camera to the virtual object decreases. If the virtual camera is on the second range bucket, since the relative distance between the virtual camera and the virtual object is greatest, the second viewpoints corresponding to the three second axis points can be observed to be close to one another; for example, the three second viewpoints may be observed to aggregate into one point. This is because: when the virtual camera is located on the second range bucket, no matter which second axis point the virtual camera moves to, its optical axis always points to the corresponding second viewpoint on the virtual object, and because the observed size of the virtual object is smaller at that moment, the distances among the second viewpoints corresponding to the three second axis points appear closer, and the viewpoints may even converge into one point. As the relative distance between the virtual camera and the virtual object decreases, the observed size of the virtual object increases, and the second viewpoints corresponding to the three second axis points need to gradually disperse to ensure that the entire virtual object can be observed.
Based on the second range bucket, in 102, determining the motion authority of the virtual camera based on the relative position information may be further implemented as:
if the distance of the virtual camera to the virtual object is greater than the first threshold and the distance of the virtual camera to the virtual object is less than the second threshold, determining that the motion authority of the virtual camera includes a second motion authority in the second range bucket and a third motion authority between the second range bucket and the first range bucket. The second threshold is greater than the first threshold, the three-dimensional space is set to include a second range bucket and a first range bucket which surround the virtual object, the distance from the second range bucket to the virtual object is the second threshold, and the distance from the first range bucket to the virtual object is the first threshold.
Further, it is assumed that the virtual object includes a plurality of second viewpoints, that the plurality of second viewpoints correspond one-to-one to a plurality of second axis points in the second range bucket, and that the second range bucket is composed of a plurality of second circular tracks interpolated from the plurality of second axis points. Based on the above assumptions, in 103, in response to a touch instruction for the graphical interface, controlling the virtual camera to move in the set three-dimensional space according to the touch instruction and the motion authority may also be specifically implemented as:
when a touch instruction is detected, acquiring a moving track of the touch instruction; controlling the virtual camera to move in the second range barrel according to the moving track of the touch instruction and the second motion authority; or controlling the virtual camera to move between the second range bucket and the first range bucket according to the movement track of the touch instruction and the third movement authority. And the optical axis of the virtual camera points to a second viewpoint corresponding to a second axis point where the virtual camera is located.
Thus, two range buckets can be set by six viewpoints and their six corresponding axis points, and multi-angle observation of the virtual object is achieved through the movement of the virtual camera on and between the two range buckets. Moreover, observation of specified parts of the virtual object, such as the inner side of the virtual object's skirt, is restricted by the space between the two range buckets and the positions of the axis points.
In the execution of the virtual object display method shown in fig. 1, the virtual camera is controlled to move in the set three-dimensional space according to the touch instruction and the motion authority, thereby realizing multi-angle display of the virtual object in the set three-dimensional space and providing users with richer observation angles. Meanwhile, by setting the three-dimensional space and the motion authority, abnormal display of the virtual object at certain viewing angles is avoided, improving the user experience.
The virtual object display apparatus of one or more embodiments of the present invention will be described in detail below. Those skilled in the art will appreciate that these virtual object display apparatuses can be constructed by configuring commercially available hardware components through the steps taught in this scheme.
Fig. 11 is a schematic structural diagram of a virtual object display apparatus according to an embodiment of the present invention. The apparatus is applied to a graphical interface loaded with a virtual object and a virtual camera, where the virtual camera is used to collect a display image of the virtual object. As shown in fig. 11, the virtual object display apparatus includes: a first determining module 11, a second determining module 12, and a control module 13.
A first determining module 11, configured to determine relative position information of the virtual camera and the virtual object;
a second determining module 12, configured to determine a motion authority of the virtual camera based on the relative position information;
and the control module 13 is configured to respond to a touch instruction for the graphical interface, and control the virtual camera to move in a set three-dimensional space according to the touch instruction and the movement permission, so as to adjust the display image.
Optionally, the relative position information includes a distance of the virtual camera to the virtual object.
The second determining module 12, when determining the motion authority of the virtual camera based on the relative position information, is specifically configured to: and if the distance from the virtual camera to the virtual object is smaller than a first threshold value, determining that the motion authority of the virtual camera comprises a first direction motion authority and a second direction motion authority, wherein the first direction and the second direction are preset according to the posture of the virtual object in the three-dimensional space.
Optionally, the setting of the three-dimensional space includes a first surface boundary surrounding the virtual object, and a distance from the first surface boundary to the virtual object is smaller than or equal to the first threshold.
The control module 13 is specifically configured to: if the touch instruction is detected, acquiring a moving track of the touch instruction; determining a first motion path of the virtual camera on the first curved surface boundary according to the movement track of the touch instruction and the motion authority; and controlling the virtual camera to move on the first motion path.
Optionally, the virtual object includes a plurality of first reference points, the first curved surface boundary includes a plurality of first circular tracks and a plurality of first curves passing through the plurality of first circular tracks, and each first circular track respectively takes a corresponding first reference point of the plurality of first reference points as a center.
Optionally, when the control module 13 determines the first motion path of the virtual camera on the first curved surface boundary according to the movement trajectory of the touch instruction and the motion permission, the control module is specifically configured to: if the direction of the moving track of the touch instruction is a first direction and the motion authority includes first direction motion authority, selecting a first curve passing through the position where the virtual camera is located from the plurality of first curves as the first motion path, wherein the starting point of the first motion path is the position where the virtual camera is located.
Optionally, when the control module 13 determines the first motion path of the virtual camera on the first curved surface boundary according to the movement trajectory of the touch instruction and the motion permission, it is specifically configured to: if the direction of the moving track of the touch instruction is a second direction and the motion authority includes a second direction motion authority, taking a first circular track i where the virtual camera is located in the plurality of first circular tracks as the first motion path, and the second direction is parallel to the plane where the first circular track i is located.
Optionally, the relative position information includes a distance of the virtual camera to the virtual object.
When the second determining module 12 determines the motion authority of the virtual camera based on the relative position information, it is specifically configured to: if the distance from the virtual camera to the virtual object is greater than a first threshold value and the distance from the virtual camera to the virtual object is less than a second threshold value, determining that the motion authority of the virtual camera comprises a first direction motion authority, a second direction motion authority and a zooming motion authority.
Wherein the second threshold is greater than the first threshold, the first direction and the second direction are preset according to the posture of the virtual object in the three-dimensional space, and the direction of the zooming motion is perpendicular to the first direction and the second direction.
Optionally, the setting of the three-dimensional space includes a first surface boundary and a second surface boundary surrounding the virtual object, and a three-dimensional space region between the second surface boundary and the first surface boundary. The distance from the first surface boundary to the virtual object is less than or equal to the first threshold, the distance from the second surface boundary to the virtual object is greater than the first threshold, the distance from the second surface boundary to the virtual object is less than or equal to the second threshold, and the second threshold is greater than the first threshold.
When the control module 13 responds to the touch instruction for the graphical interface and controls the virtual camera to move in the set three-dimensional space according to the touch instruction and the movement permission, the control module is specifically configured to: if the touch instruction is detected, acquiring a moving track of the touch instruction; determining a second motion path of the virtual camera according to the movement track of the touch instruction and the motion authority, wherein the second motion path is located on the second curved surface boundary, or the second motion path is located in the three-dimensional space area; and controlling the virtual camera to move on the second motion path.
Optionally, the virtual object includes a plurality of second reference points, the second curved surface boundary includes a plurality of second circular tracks and a plurality of second curves passing through the plurality of second circular tracks, and each second circular track respectively takes a corresponding second reference point of the plurality of second reference points as a center.
Optionally, when the control module 13 determines the second motion path of the virtual camera according to the movement track of the touch instruction and the motion permission, the control module is specifically configured to: if the direction of the moving track of the touch instruction is a first direction and the motion authority includes a first direction motion authority, selecting a second curve passing through the position of the virtual camera from the plurality of second curves as the second motion path, wherein the starting point of the second motion path is the position of the virtual camera.
Optionally, when the control module 13 determines the second motion path of the virtual camera according to the movement track of the touch instruction and the motion permission, the control module is specifically configured to: if the direction of the moving track of the touch instruction is a second direction and the motion authority includes a second direction motion authority, taking a second circular track i where the virtual camera is located in the plurality of second circular tracks as the second motion path, where the second direction is parallel to the plane where the second circular track i is located.
Optionally, when the control module 13 determines the second motion path of the virtual camera according to the movement track of the touch instruction and the motion permission, the control module is specifically configured to: if the direction of the movement track of the touch instruction comprises a plurality of directions and the motion authority comprises a zooming motion authority, determining the second motion path of the virtual camera in the three-dimensional space area based on the position of the virtual camera and the movement directions of the plurality of tracks.
Optionally, the relative position information includes a distance of the virtual camera to the virtual object. The second determining module 12 is specifically configured to: determining that the motion permission of the virtual camera includes a first motion permission in a first range bucket if the distance from the virtual camera to the virtual object is equal to a first threshold; the three-dimensional space is set to comprise a first range bucket surrounding the virtual object, and the distance from the first range bucket to the virtual object is a first threshold value.
Optionally, the virtual object includes a plurality of first viewpoints, the plurality of first viewpoints are in one-to-one correspondence with a plurality of first axis points in a first range bucket, and the first range bucket is composed of a plurality of first circular tracks obtained by interpolation calculation of the plurality of first axis points. When the control module 13 responds to a touch instruction for the graphical interface and controls the virtual camera to move in the set three-dimensional space according to the touch instruction and the movement permission, the control module is specifically configured to: if the touch instruction is detected, acquiring a moving track of the touch instruction; and controlling the virtual camera to move in the first range bucket according to the movement track of the touch instruction and the first movement authority. The optical axis of the virtual camera points to a first viewpoint corresponding to a first axis point where the virtual camera is located.
Optionally, the relative position information includes a distance of the virtual camera to the virtual object. The second determining module 12 is specifically configured to: if the distance of the virtual camera to the virtual object is greater than the first threshold and the distance of the virtual camera to the virtual object is less than the second threshold, determining that the motion authority of the virtual camera includes a second motion authority in the second range bucket and a third motion authority between the second range bucket and the first range bucket. The second threshold is greater than the first threshold, the three-dimensional space is set to include a second range bucket and a first range bucket which surround the virtual object, the distance from the second range bucket to the virtual object is the second threshold, and the distance from the first range bucket to the virtual object is the first threshold.
Optionally, the virtual object includes a plurality of second viewpoints, the plurality of second viewpoints are in one-to-one correspondence with a plurality of second axis points in a second range bucket, and the second range bucket is composed of a plurality of second circular tracks obtained by interpolation calculation of the plurality of second axis points. When the control module 13 responds to a touch instruction for the graphical interface and controls the virtual camera to move in the set three-dimensional space according to the touch instruction and the movement permission, the control module is specifically configured to: when a touch instruction is detected, acquiring a moving track of the touch instruction; controlling the virtual camera to move in the second range barrel according to the moving track of the touch instruction and the second motion authority; or controlling the virtual camera to move between the second range bucket and the first range bucket according to the movement track of the touch instruction and the third movement authority.
And the optical axis of the virtual camera points to a second viewpoint corresponding to a second axis point where the virtual camera is located.
Optionally, the smaller the distance from the virtual camera to the virtual object, the more dispersed the plurality of second viewpoints; and if the virtual camera is on the second range bucket, the plurality of second viewpoints are aggregated into a point.
The display apparatus of the virtual object shown in fig. 11 may execute the methods provided in the foregoing embodiments, and portions not described in detail in this embodiment may refer to the related descriptions of the foregoing embodiments, which are not described herein again.
In one possible design, the structure of the virtual object display apparatus shown in fig. 11 may be implemented as an electronic device.
As shown in fig. 12, the electronic device may include: a processor 21 and a memory 22. Wherein the memory 22 has stored thereon executable code, which when executed by the processor 21, at least makes the processor 21 capable of implementing the method of presenting a virtual object as provided in the previous embodiments. The electronic device may further include a communication interface 23 for communicating with other devices or a communication network.
In addition, an embodiment of the present invention provides a non-transitory machine-readable storage medium on which executable code is stored; when the executable code is executed by a processor of an electronic device, the processor is caused to execute the method for displaying a virtual object provided in the foregoing embodiments.
The system, method and apparatus of the embodiments of the present invention can be implemented as pure software (e.g., a software program written in Java), as pure hardware (e.g., a dedicated ASIC chip or FPGA chip), or as a system combining software and hardware (e.g., a firmware system storing fixed code or a system with a general-purpose memory and a processor), as desired.
Another aspect of the invention is a computer-readable medium having computer-readable instructions stored thereon that, when executed, perform a method of embodiments of the invention.
While various embodiments of the present invention have been described above, the above description is intended to be illustrative, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The scope of the claimed subject matter is limited only by the attached claims.

Claims (10)

1. A method for displaying a virtual object is applied to a graphical interface loaded with the virtual object and a virtual camera, wherein the virtual camera is used for acquiring a display image of the virtual object, and the method comprises the following steps:
determining a distance of the virtual camera to the virtual object;
and if the distance from the virtual camera to the virtual object is smaller than a first threshold value, determining that the motion authority of the virtual camera comprises a first direction motion authority and a second direction motion authority, wherein the first direction and the second direction are preset according to the posture of the virtual object in the three-dimensional space.
2. The method of claim 1, further comprising:
and responding to a touch instruction of the graphical interface, and controlling the virtual camera to move in a set three-dimensional space according to the touch instruction and the movement authority so as to adjust the display image.
3. The method according to claim 2, wherein the setting the three-dimensional space comprises a first surface boundary surrounding the virtual object, and a distance from the first surface boundary to the virtual object is less than or equal to the first threshold;
the responding to the touch instruction of the graphical interface, and controlling the virtual camera to move in a set three-dimensional space according to the touch instruction and the movement authority, comprises:
if the touch instruction is detected, acquiring a moving track of the touch instruction;
determining a first motion path of the virtual camera on the first curved surface boundary according to the movement track of the touch instruction and the motion authority;
and controlling the virtual camera to move on the first motion path.
4. The method of claim 3, wherein the virtual object comprises a plurality of first reference points, wherein the first curved surface boundary comprises a plurality of first circular tracks and a plurality of first curves passing through the plurality of first circular tracks, and wherein each first circular track is centered at a corresponding first reference point of the plurality of first reference points.
5. A method for displaying a virtual object is applied to a graphical interface loaded with the virtual object and a virtual camera, wherein the virtual camera is used for acquiring a display image of the virtual object, and the method comprises the following steps:
determining a distance of the virtual camera to the virtual object;
if the distance from the virtual camera to the virtual object is greater than a first threshold value and the distance from the virtual camera to the virtual object is less than a second threshold value, determining that the motion authority of the virtual camera comprises a first direction motion authority, a second direction motion authority and a zooming motion authority;
wherein the second threshold is greater than the first threshold, the first direction and the second direction are preset according to the posture of the virtual object in the three-dimensional space, and the direction of the zooming motion is perpendicular to the first direction and the second direction.
6. The method of claim 5, further comprising:
and responding to a touch instruction of the graphical interface, and controlling the virtual camera to move in a set three-dimensional space according to the touch instruction and the movement authority so as to adjust the display image.
7. The method according to claim 6, wherein the setting of the three-dimensional space comprises a first surface boundary and a second surface boundary surrounding the virtual object, and a three-dimensional space region between the second surface boundary and the first surface boundary;
the distance from the first surface boundary to the virtual object is less than or equal to the first threshold, the distance from the second surface boundary to the virtual object is greater than the first threshold, the distance from the second surface boundary to the virtual object is less than or equal to the second threshold, and the second threshold is greater than the first threshold;
the responding to the touch instruction of the graphical interface, and controlling the virtual camera to move in a set three-dimensional space according to the touch instruction and the movement authority, comprises:
if the touch instruction is detected, acquiring a moving track of the touch instruction;
determining a second motion path of the virtual camera according to the movement track of the touch instruction and the motion authority, wherein the second motion path is located on the second curved surface boundary, or the second motion path is located in the three-dimensional space area;
and controlling the virtual camera to move on the second motion path.
8. The method of claim 7, wherein the virtual object comprises a plurality of second reference points, wherein the second curved boundary comprises a plurality of second circular tracks and a plurality of second curves passing through the plurality of second circular tracks, and wherein each second circular track is centered on a corresponding second reference point of the plurality of second reference points.
9. An electronic device, comprising: a memory, a processor; wherein the memory has stored thereon executable code which, when executed by the processor, causes the processor to perform a method of presenting a virtual object as claimed in any one of claims 1 to 8.
10. A computer readable medium having stored thereon at least one instruction, at least one program, set of codes or set of instructions, which is loaded and executed by a processor to implement a method of exposing a virtual object according to any one of claims 1 to 8.
CN202110565573.9A 2020-08-26 2020-08-26 Virtual object display method, device and equipment Active CN113304471B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110565573.9A CN113304471B (en) 2020-08-26 2020-08-26 Virtual object display method, device and equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110565573.9A CN113304471B (en) 2020-08-26 2020-08-26 Virtual object display method, device and equipment
CN202010871773.2A CN112076470B (en) 2020-08-26 2020-08-26 Virtual object display method, device and equipment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010871773.2A Division CN112076470B (en) 2020-08-26 2020-08-26 Virtual object display method, device and equipment

Publications (2)

Publication Number Publication Date
CN113304471A true CN113304471A (en) 2021-08-27
CN113304471B CN113304471B (en) 2023-01-10

Family

ID=73728647

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110565573.9A Active CN113304471B (en) 2020-08-26 2020-08-26 Virtual object display method, device and equipment
CN202010871773.2A Active CN112076470B (en) 2020-08-26 2020-08-26 Virtual object display method, device and equipment

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202010871773.2A Active CN112076470B (en) 2020-08-26 2020-08-26 Virtual object display method, device and equipment

Country Status (2)

Country Link
CN (2) CN113304471B (en)
WO (1) WO2022041514A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003334380A (en) * 2002-05-20 2003-11-25 Konami Computer Entertainment Japan Inc Virtual camera position control program for three- dimensional game
US20040176164A1 (en) * 2003-03-05 2004-09-09 Kabushiki Kaisha Square Enix ( Also Trading As Square Enix Co., Ltd.) Virtual camera control method in three-dimensional video game
WO2018074821A1 (en) * 2016-10-19 2018-04-26 (주)잼투고 User terminal apparatus and computer-implemented method for synchronizing movement path and movement time of camera by using touch user interface
CN110045827A (en) * 2019-04-11 2019-07-23 腾讯科技(深圳)有限公司 The observation method of virtual objects, device and readable storage medium storing program for executing in virtual environment
WO2020050103A1 (en) * 2018-09-06 2020-03-12 キヤノン株式会社 Virtual viewpoint control device and method for controlling same
CN110944727A (en) * 2017-09-19 2020-03-31 佳能株式会社 System and method for controlling virtual camera

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5622447B2 (en) * 2010-06-11 2014-11-12 任天堂株式会社 Information processing program, information processing apparatus, information processing system, and information processing method
EP2497547B1 (en) * 2011-03-08 2018-06-27 Nintendo Co., Ltd. Information processing program, information processing apparatus, information processing system, and information processing method
US9541417B2 (en) * 2012-06-05 2017-01-10 Apple Inc. Panning for three-dimensional maps
JP6319952B2 (en) * 2013-05-31 2018-05-09 任天堂株式会社 Information processing program, information processing apparatus, information processing system, and information processing method
CN105488839A (en) * 2015-12-07 2016-04-13 上海市政工程设计研究总院(集团)有限公司 Interactive operation system for three-dimensional scene and operation method thereof
US20180210628A1 (en) * 2017-01-23 2018-07-26 Snap Inc. Three-dimensional interaction system
CN108905212B (en) * 2017-03-27 2019-12-31 网易(杭州)网络有限公司 Game screen display control method and device, storage medium and electronic equipment
CN107198879B (en) * 2017-04-20 2020-07-03 网易(杭州)网络有限公司 Movement control method and device in virtual reality scene and terminal equipment
CN107895399A (en) * 2017-10-26 2018-04-10 广州市雷军游乐设备有限公司 A kind of omnibearing visual angle switching method, device, terminal device and storage medium
CN107930114A (en) * 2017-11-09 2018-04-20 网易(杭州)网络有限公司 Information processing method and device, storage medium, electronic equipment
CN108920084B (en) * 2018-06-29 2021-06-18 网易(杭州)网络有限公司 Visual field control method and device in game
CN110025953B (en) * 2019-03-15 2022-06-10 网易(杭州)网络有限公司 Game interface display method and device, storage medium and electronic device
CN110162236B (en) * 2019-04-28 2020-12-29 深圳市思为软件技术有限公司 Display method and device between virtual sample boards and computer equipment
CN110209325A (en) * 2019-05-07 2019-09-06 高新兴科技集团股份有限公司 A kind of 3D scene display control method, system and equipment
CN110478903B (en) * 2019-09-09 2023-05-26 珠海金山数字网络科技有限公司 Control method and device for virtual camera
CN110548280B (en) * 2019-09-11 2023-02-17 珠海金山数字网络科技有限公司 Control method and device of virtual camera
CN110597389B (en) * 2019-09-12 2021-04-09 腾讯科技(深圳)有限公司 Virtual object control method in virtual scene, computer device and storage medium
CN111161396B (en) * 2019-11-19 2023-05-16 广东虚拟现实科技有限公司 Virtual content control method, device, terminal equipment and storage medium
CN111467803B (en) * 2020-04-02 2023-07-14 网易(杭州)网络有限公司 Display control method and device in game, storage medium and electronic equipment
CN111494940B (en) * 2020-04-17 2023-03-31 网易(杭州)网络有限公司 Display control method and device for virtual object in game

Also Published As

Publication number Publication date
CN113304471B (en) 2023-01-10
WO2022041514A1 (en) 2022-03-03
CN112076470B (en) 2021-05-28
CN112076470A (en) 2020-12-15

Similar Documents

Publication Publication Date Title
US11600046B2 (en) Selecting two-dimensional imagery data for display within a three-dimensional model
CN106339093B (en) Gimbal control method and device
JP7045486B2 (en) Viewing angle adjustment method, electronic devices, and computer programs
JP7166708B2 (en) VIRTUAL OBJECT CONTROL METHOD, APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
US20240091654A1 (en) Display mode in virtual scene
KR101663452B1 (en) Screen Operation Apparatus and Screen Operation Method
JP6597235B2 (en) Image processing apparatus, image processing method, and image processing program
CN111135556B (en) Virtual camera control method and device, electronic equipment and storage medium
US9779702B2 (en) Method of controlling head-mounted display system
US20150040073A1 (en) Zoom, Rotate, and Translate or Pan In A Single Gesture
JP7045218B2 (en) Information processing equipment and information processing methods, programs
EP2911393B1 (en) Method and system for controlling virtual camera in virtual 3d space and computer-readable recording medium
CN112230836B (en) Object moving method and device, storage medium and electronic device
US20240066404A1 (en) Perspective rotation method and apparatus, device, and storage medium
CN110575671A (en) Method and device for controlling view angle in game and electronic equipment
US20190041974A1 (en) Image processing apparatus, image processing method, and program
US7982753B2 (en) Information display apparatus
CN112076470B (en) Virtual object display method, device and equipment
CN112870714A (en) Map display method and device
CN109976533B (en) Display control method and device
EP3712750A1 (en) Image display system, image display program, and image display method
JP6388270B1 (en) Information processing apparatus, information processing apparatus program, head mounted display, and display system
CN111265866A (en) Control method and device of virtual camera, electronic equipment and storage medium
KR101741149B1 (en) Method and device for controlling a virtual camera's orientation
CN111078044A (en) Terminal interaction method, terminal and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant