CN112156467A - Control method and system of virtual camera, storage medium and terminal equipment - Google Patents

Control method and system of virtual camera, storage medium and terminal equipment

Info

Publication number
CN112156467A
CN112156467A (application CN202011104053.XA)
Authority
CN
China
Prior art keywords
virtual camera
game
collision
game scene
virtual
Prior art date
Legal status
Pending
Application number
CN202011104053.XA
Other languages
Chinese (zh)
Inventor
张辉 (Zhang Hui)
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202011104053.XA
Publication of CN112156467A

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525 Changing parameters of virtual cameras
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/57 Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F13/577 Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/64 Methods for processing data by generating or executing the game program for computing dynamical parameters of game objects, e.g. motion determination or computation of frictional forces for a virtual car
    • A63F2300/643 Methods for processing data by generating or executing the game program for computing dynamical parameters of game objects, e.g. motion determination or computation of frictional forces for a virtual car by determining the impact between objects, e.g. collision detection
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6661 Methods for processing data by generating or executing the game program for rendering three dimensional images for changing the position of the virtual camera

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure provides a control method and system of a virtual camera, a storage medium and a terminal device, and belongs to the technical field of games. A graphical user interface is provided by a terminal device, the graphical user interface comprising a game screen determined in a game scene by a virtual camera. The method includes: controlling the virtual camera to move in the game scene in response to an operation instruction for the virtual camera; and, while the virtual camera is controlled to move in the game scene, acquiring a collision body in the game scene that collides with the virtual camera, and performing transparentization processing on the collision body or adjusting the position of the virtual camera. The method and the device can improve the fluency of game picture switching and enhance the operation flexibility of the virtual camera.

Description

Control method and system of virtual camera, storage medium and terminal equipment
Technical Field
The present disclosure relates to the field of game technologies, and in particular, to a method for controlling a virtual camera, a system for controlling a virtual camera, a computer-readable storage medium, and a terminal device.
Background
With the continuous development of computer technology, the types and contents of electronic games are becoming increasingly rich. For example, 2.5-dimensional and 3-dimensional games have become a main development direction for game makers, because realistic scenes can bring immersive game experiences to players.
In the above games, the game screen is usually displayed based on the pose of a virtual camera, and the virtual camera may move as a virtual object moves in the game scene. At present, the movement and display of the virtual camera in a game scene are mainly designed and packaged by developers based on experience. As game scenes grow ever richer, however, this approach can hardly meet the requirements on the virtual camera's level of detail expression. For example, when the virtual camera moves in the game scene, the switching of game pictures is easily affected by the network environment and the like, so that the switching is not smooth and the flexibility of picture switching is low. For another example, when a collision body exists in the game scene, the display of the game screen is blocked by the collision body, and it is difficult for the player to determine the shooting angle of the virtual camera, which results in a high probability of erroneous operation.
Therefore, it is desirable to provide a virtual camera control method that can satisfy the required level of detail expression of the game screen and improve the flexibility of screen switching.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure provides a control method of a virtual camera, a control system of a virtual camera, a computer-readable storage medium, and a terminal device, thereby improving a problem of low smoothness of game screen switching in the prior art at least to a certain extent.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a method for controlling a virtual camera, which provides a graphical user interface through a terminal device, the graphical user interface including a game screen determined in a game scene by the virtual camera, the method including: controlling the virtual camera to move in the game scene in response to an operation instruction for the virtual camera; when the virtual camera is controlled to move in the game scene, a collision body colliding with the virtual camera in the game scene is acquired, and the collision body is subjected to transparentization processing or the position of the virtual camera is adjusted.
In an exemplary embodiment of the present disclosure, the controlling the virtual camera to move in the game scene in response to the operation instruction for the virtual camera includes: and determining the moving speed of the virtual camera according to the frame time of the game picture so as to move the virtual camera in the game scene according to the moving speed.
In an exemplary embodiment of the present disclosure, the determining a moving speed of the virtual camera according to a frame time of the game screen includes: and adopting an interpolation function, and taking the frame time of the game picture at each moment as the input of the interpolation function to obtain the moving speed of the virtual camera at the corresponding moment.
In an exemplary embodiment of the present disclosure, the acquiring a collision volume in the game scene, the collision volume colliding with the virtual camera, and performing a transparentization process on the collision volume or adjusting a position of the virtual camera, includes: acquiring a collision body which collides with the virtual camera in the game scene; determining a transparency condition of the collision volume; and performing transparency processing on the collision body or adjusting the position of the virtual camera according to the transparency condition.
In an exemplary embodiment of the present disclosure, the transparentizing the collision volume or adjusting the position of the virtual camera according to the transparency condition includes: acquiring label data of the collision body to determine whether the collision body meets a transparency condition according to the label data; when the collision body is determined to meet the transparency condition, setting the transparency of the collision body to a preset value; adjusting a position of the virtual camera upon determining that the collision volume does not satisfy the transparency condition.
In an exemplary embodiment of the present disclosure, the game screen includes a virtual object, and the adjusting the position of the virtual camera upon determining that the collision volume does not satisfy the transparency condition includes: when the virtual object is detected to be close to the collision body, a plane figure is established at a preset position of the virtual object, and the plane figure is parallel to a projection plane of the game picture; determining a plurality of vertexes in the plane graph, and acquiring a plurality of rays emitted to each vertex by the virtual camera; determining an intersecting ray that intersects the collider among the plurality of rays, and a location of intersection of the intersecting ray with the collider; calculating the distance between the intersection position and the virtual object, and determining the corresponding intersection position as a target position when the distance is shortest; moving the virtual camera from a current position to the target position.
In an exemplary embodiment of the present disclosure, after moving the virtual camera from the current position to the target position, the method further comprises: when it is detected that the virtual object is away from the collision volume, the position of the virtual camera is restored to the current position.
In an exemplary embodiment of the present disclosure, in controlling the virtual camera to move in the game scene, the method further includes: and displaying observation information of the virtual camera in the game picture, wherein the observation information comprises any one or more of the position, the motion trail and the projection plane of the game picture of the virtual camera.
In an exemplary embodiment of the present disclosure, the movement of the virtual camera in the game scene comprises translation and/or rotation.
According to a second aspect of the present disclosure, there is provided a control system of a virtual camera, which provides a graphical user interface through a terminal device, the graphical user interface including a game screen determined in a game scene by the virtual camera, the system including: the receiving module is used for receiving an operation instruction of the terminal equipment for the virtual camera; a moving module for controlling the virtual camera to move in the game scene in response to the operation instruction for the virtual camera; the collision processing module is used for acquiring a collision body which collides with the virtual camera in the game scene, and performing transparentization processing on the collision body or adjusting the position of the virtual camera; and the display module is used for displaying the game picture determined by the virtual camera.
In an exemplary embodiment of the disclosure, the moving module is configured to determine a moving speed of the virtual camera according to a frame time of the game screen to move the virtual camera in the game scene according to the moving speed.
In an exemplary embodiment of the disclosure, the moving module is further configured to use an interpolation function, and use a frame time of the game screen at each time as an input of the interpolation function to obtain a moving speed of the virtual camera at the corresponding time.
In an exemplary embodiment of the disclosure, the collision processing module is configured to acquire a collision volume in the game scene, which collides with the virtual camera; determining a transparency condition of the collision volume; and performing transparency processing on the collision body or adjusting the position of the virtual camera according to the transparency condition.
In an exemplary embodiment of the disclosure, the collision processing module is further configured to acquire tag data of the collision volume, to determine whether the collision volume satisfies a transparency condition according to the tag data, to set a transparency of the collision volume to a preset value when it is determined that the collision volume satisfies the transparency condition, and to adjust a position of the virtual camera when it is determined that the collision volume does not satisfy the transparency condition.
In an exemplary embodiment of the present disclosure, the game screen includes a virtual object, and the collision processing module is further configured to, when it is detected that the virtual object is close to the collision volume, create a plane figure at a preset position of the virtual object, the plane figure being parallel to a projection plane of the game screen, determine a plurality of vertices in the plane figure, acquire a plurality of rays emitted by the virtual camera to each vertex, determine an intersecting ray intersecting the collision volume among the plurality of rays and the intersection position of the intersecting ray with the collision volume, calculate the distance between the intersection position and the virtual object to determine the corresponding intersection position as a target position when the distance is shortest, and move the virtual camera from a current position to the target position.
In an exemplary embodiment of the disclosure, after moving the virtual camera from the current position to the target position, the collision processing module is further configured to restore the position of the virtual camera to the current position when the virtual object is detected to be away from the collision volume.
In an exemplary embodiment of the disclosure, when the virtual camera is controlled to move in the game scene, the display module is further configured to display observation information of the virtual camera in the game screen, where the observation information includes any one or more of a position, a motion trajectory, and a projection plane of the game screen of the virtual camera.
In an exemplary embodiment of the present disclosure, the movement of the virtual camera in the game scene comprises translation and/or rotation.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements any one of the above-described virtual camera control methods.
According to a fourth aspect of the present disclosure, there is provided a terminal device comprising: a processor; and a memory for storing executable instructions of the processor; a display screen for displaying a graphical user interface; wherein the processor is configured to execute any one of the above control methods of the virtual camera via execution of the executable instructions.
The present disclosure has the following beneficial effects:
according to the control method of the virtual camera, the control system of the virtual camera, the computer-readable storage medium, and the terminal device in the present exemplary embodiment, it is possible to control the virtual camera to move in the game scene in response to an operation instruction for the virtual camera, and to acquire a collision volume that collides with the virtual camera in the game scene while controlling the virtual camera to move in the game scene, and to perform a transparentization process on the collision volume or to adjust the position of the virtual camera. On one hand, the exemplary embodiment can determine the moving mode of the virtual camera in real time by controlling the virtual camera to move in the game scene in response to the operation instruction for the virtual camera, thereby improving the switching fluency of the game picture and the detail expression degree of the game picture and also enhancing the operation flexibility of the virtual camera; on the other hand, by acquiring a collision body which collides with the virtual camera in the game scene, and carrying out transparency processing on the collision body or adjusting the position of the virtual camera, smooth switching of game pictures can be further realized, and the game experience of the player is also improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is apparent that the drawings in the following description are only some embodiments of the present disclosure, and that other drawings can be obtained from those drawings without inventive effort for a person skilled in the art.
Fig. 1 shows a flowchart of a control method of a virtual camera in the present exemplary embodiment;
FIG. 2 shows a schematic diagram of an interpolation function in the present exemplary embodiment;
fig. 3 shows a schematic diagram of a transparentization process in the present exemplary embodiment;
FIG. 4 is a diagram showing a game screen in the present exemplary embodiment;
FIG. 5 illustrates an imaging schematic of a virtual camera in the present exemplary embodiment;
FIG. 6 shows a schematic diagram of a collision process in the present exemplary embodiment;
fig. 7 is a schematic diagram showing another game screen in the present exemplary embodiment;
fig. 8 shows a flowchart of another control method of the virtual camera in the present exemplary embodiment;
fig. 9 shows a block diagram of a control system of a virtual camera in the present exemplary embodiment;
FIG. 10 illustrates a computer-readable storage medium for implementing the above-described method in the present exemplary embodiment;
fig. 11 shows a terminal device for implementing the above method in the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Exemplary embodiments of the present disclosure first provide a control method of a virtual camera, which may be applied to a terminal device. A graphical user interface may be obtained by executing software on a processor of the terminal device and rendering it on a display screen of the terminal device, and the graphical user interface may display a game screen determined in a game scene by the virtual camera.
The terminal device may be an electronic device with a display screen, such as a computer, a tablet computer, a smart phone, a game machine, a VR (Virtual Reality) device, or another terminal device. It includes a memory for storing data and a processor for processing data; game application software is installed in the memory, and the processor executes the corresponding game program so that the game runs on the terminal device. The virtual camera can capture a game picture in the game through a series of camera parameters, and the game picture may include all or part of the game scene. Generally, only one virtual camera is arranged in a game scene; in particular, when effects such as split screen or picture-in-picture are needed, a plurality of virtual cameras can be arranged in the game scene. The graphical user interface can be used for displaying the game picture determined in the game scene by the virtual camera, the game controls operated by the player, and the like. The game picture refers to a picture including the game scene and the virtual objects in the game scene, and its display depends on the position of the virtual camera in the game scene; for example, when the virtual camera is set to follow a certain virtual object, the game picture can always display that virtual object within a certain range of the game scene.
Referring to fig. 1, a flowchart illustrating a control method of a virtual camera in the present exemplary embodiment may include the following steps S110 to S120:
step S110, responding to an operation instruction aiming at the virtual camera, and controlling the virtual camera to move in a game scene.
Here, the movement of the virtual camera in the game scene may include any one or more of translation, rotation, and the like. The operation instruction may be an instruction for controlling the virtual camera, input by the player through the terminal device; for example, it may include an instruction for controlling the virtual camera to move. Taking a touch terminal as an example, the operation instruction may be a slide, a two-finger pinch, or a two-finger spread performed by the player on the touch terminal, each of which may be used to control the virtual camera to move in a corresponding manner in the game scene. If the terminal device is a computer, the operation instruction may be a specific function key; for example, when the player presses the "W" key, the virtual camera may be controlled to move linearly along the current direction.
The player can input a corresponding operation instruction in the graphical user interface; the terminal device receives the operation instruction through a corresponding interface and, in response, controls the virtual camera to move in the game scene according to the content of the instruction. For example, when an operation instruction of the player sliding in a certain direction in the graphical user interface is received, the terminal device may control the virtual camera to move forward in the corresponding direction; or, when the operation instruction is the coordinates of a certain position in the graphical user interface, the terminal device may control the virtual camera to move to the position corresponding to the coordinates along a straight line or a curve. In addition, the interface through which the terminal device receives the operation instruction may be an interface for communication between devices or modules, and may be a physical interface or a virtual interface. A virtual interface is an interface inside the computer that is usually not visible, such as ports 80, 21, and 23; a physical interface is usually a visible interface, such as the RJ45 network port on the back panel of a computer, or the RJ45 ports of a switch, router, or hub.
In some cases, the rendering of the game screen is affected by the network environment and other factors. Therefore, in order to reduce picture discontinuity caused by unstable rendering times of the game screen while the virtual camera is being moved, in an alternative embodiment, step S110 may control the movement of the virtual camera in the game scene by:
and determining the moving speed of the virtual camera according to the frame time of the game picture so as to move the virtual camera in the game scene according to the moving speed.
The frame time refers to the time required by the terminal device to render each game frame; it is usually related to the game screen display mode set by the player, the network environment at the current moment, and the like, and its value can generally be acquired through a corresponding data interface.
According to the frame time of the terminal device rendering the game picture at each moment, the moving speed of the virtual camera at the corresponding moment can be determined, so that the virtual camera can be moved in the game scene according to the moving speed. According to the mode, the moving speed of the virtual camera can be changed along with the rendering time of the game picture, and compared with a method for moving the virtual camera at a constant speed, the moving flexibility of the virtual camera can be enhanced, and the switching experience of the game picture is improved.
Further, in order to determine the relationship between the frame time of the game screen and the moving speed of the virtual camera, in an alternative embodiment, the moving speed of the virtual camera may be determined by:
and adopting an interpolation function, and taking the frame time of the game picture at each moment as the input of the interpolation function to obtain the moving speed of the virtual camera at the corresponding moment.
An interpolation function fits a continuous function through known sample points so that data at other points can be obtained from the continuous function; it may be a linear interpolation function, a polynomial interpolation function, a Newton interpolation function, and the like.
Take a quadratic function as an example, i.e. y = ax² + bx + c, where y represents the distance between the current position of the virtual camera and the target position, x is the frame time at the current moment, and a, b, and c are predetermined constants. Differentiating gives y' = 2ax + b, where y' is the moving speed of the virtual camera; it can be seen that b actually represents the base speed increment of the virtual camera. Referring to fig. 2, as the frame time x increases, the distance y between the current position of the virtual camera and the target position grows, and the moving speed y' of the virtual camera increases accordingly; that is, as the rendering time of the game screen increases, the virtual camera moves faster, so that the visual switching of the game screen remains smooth.
By adopting the interpolation function and taking the frame time of each moment as the input of the interpolation function, the moving speed of the virtual camera at the corresponding moment is obtained through fitting, the relationship between the frame time of the game picture and the moving speed of the virtual camera can be better embodied, the moving speed of the virtual camera can be conveniently adjusted by the terminal equipment in real time, and the switching fluency of the game picture can be increased; meanwhile, the developer is also helped to determine the game picture and the state of the virtual camera at the corresponding moment so as to further adjust the moving mode of the virtual camera. It should be noted that the above method for fitting the moving speed of the virtual camera by using the quadratic function is only an exemplary illustration, and a method for obtaining the moving speed of the virtual camera according to another interpolation function also belongs to the protection scope of the present exemplary embodiment.
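For illustration, a minimal Python sketch of this frame-time-driven speed control is given below. The constants A and B, the Vector3 helper, and the clamping of each step at the target are assumptions made for the sketch rather than details specified in this disclosure.

```python
# Sketch only: frame-time-driven camera speed from the quadratic
# interpolant y = a*x^2 + b*x + c discussed above. The constants and
# the Vector3 type are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Vector3:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

A, B = 0.8, 2.0  # assumed interpolation constants (a: curvature, b: base speed increment)

def camera_speed(frame_time: float) -> float:
    """Moving speed y' = 2*a*x + b, the derivative of the interpolant."""
    return 2.0 * A * frame_time + B

def step_camera(position: Vector3, target: Vector3, frame_time: float) -> Vector3:
    """Advance the camera toward the target by speed * frame_time, without overshooting."""
    dx, dy, dz = target.x - position.x, target.y - position.y, target.z - position.z
    dist = (dx * dx + dy * dy + dz * dz) ** 0.5
    if dist == 0.0:
        return position
    step = min(camera_speed(frame_time) * frame_time, dist)
    k = step / dist
    return Vector3(position.x + dx * k, position.y + dy * k, position.z + dz * k)
```

Calling step_camera once per rendered frame reproduces the behavior of fig. 2: longer frame times yield larger per-frame speeds, keeping the visual switching of the game screen smooth.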
Step S120, when the virtual camera is controlled to move in the game scene, acquiring a collision body which collides with the virtual camera in the game scene, and performing transparentization processing on the collision body or adjusting the position of the virtual camera.
The collision object refers to an obstacle in a game scene, and may include an object, a character, a vehicle, and the like.
Generally, a game screen may include various virtual objects and environment elements, such as roads, ravines, and trees. In order to preserve a realistic game scene experience when the virtual camera is controlled to move in the game scene, the terminal device may acquire a collision body in the game scene that collides with the virtual camera, and perform transparentization processing on the collision body or adjust the position of the virtual camera. For example, a transparent collision body in the game scene, such as running water, may be subjected to transparentization processing; for a large building or the like, instead of transparentization, the position of the virtual camera may be adjusted so that the game scene can still be easily observed.
In view of the diversity of the types of collision volumes in the game scene, in an alternative embodiment, step S120 can be implemented by:
acquiring a collision body which collides with the virtual camera in a game scene;
determining a transparency condition of the collider;
and performing transparency processing on the collision body or adjusting the position of a virtual camera according to the transparency condition.
The transparent condition may include whether the collision body may be set to be translucent or fully transparent, or may include a specific numerical value of transparency, and the like, and may be set by a developer in advance according to the type of the collision body in the game, for example, the transparent condition may be set to determine whether the collision body is a transparent collision body, and when the collision body is a transparent collision body, the transparency value of the collision body may be further set; when the collision volume is not a transparent collision volume, it may be further determined whether the volume of the collision volume is greater than a volume threshold.
When the virtual camera is controlled to move in the game scene, a collision body colliding with the virtual camera in the game scene can be acquired in real time or at certain time intervals, and the transparency condition of the collision body is determined. For example, when the acquired collision volume is not a transparent collision volume, the transparency condition for the collision volume may be determined as determining whether the volume of the collision volume is greater than a volume threshold; when the acquired collision volume is a transparent collision volume, the transparency condition of the collision volume may be determined to determine the transparency value of the collision volume, or the like. After the transparency condition of the collision body is determined, the collision body may be subjected to a transparentization process or the position of the virtual camera may be adjusted according to the transparency condition, for example, when it is determined that the collision body is a transparent collision body or the collision body is not a transparent collision body and the volume thereof is less than or equal to a volume threshold value, a transparency value of the collision body may be further determined to perform a transparentization process on the collision body; when it is determined that the collision volume is not a transparent collision volume and its volume is greater than a volume threshold, the position of the virtual camera may be adjusted so that the virtual camera is in a position where the player can view the game scene in which the collision volume is located.
Furthermore, since the virtual camera does not actually exist as an entity in the game scene, in order to facilitate determining which collision body collides with the virtual camera, a distance threshold may be set, so that a collision body is determined to collide with the virtual camera when the distance between the virtual camera and that collision body is less than the distance threshold.
In a game application, each collision volume in a game scene may have its own attribute information, and therefore, in order to determine whether an acquired collision volume satisfies a transparency condition, in an alternative embodiment, whether a collision volume satisfies a transparency condition may be determined by acquiring tag data of a collision volume configured in advance, which may be implemented by:
acquiring label data of a collision body to determine whether the collision body meets a transparency condition according to the label data;
when the collision body is determined to meet the transparency condition, setting the transparency of the collision body as a preset value;
and adjusting the position of the virtual camera when the collision body is determined not to meet the transparent condition.
The tag data of the collision body may be used to indicate its transparency attribute, such as whether the collision body can be subjected to transparentization processing, and may also include the transparency value of the collision body used during the processing. Whether the collision body satisfies the transparency condition can be determined according to the acquired tag data; when it is determined that the collision body satisfies the transparency condition, the transparency value of the collision body is determined. The preset transparency value of the collision body can generally be set by a developer according to experience, for example to 0.5 or 0.7. It should be noted that the above transparency condition is only an exemplary illustration, and the protection scope of the present exemplary embodiment is not limited thereto.
By acquiring the tag data of the collision body, it can be determined whether the collision body acquired by the terminal device satisfies the transparency condition. For example, the tag data may be the boolean values "True" and "False": a collision body whose tag data is "True" is determined to satisfy the transparency condition, while one whose tag data is "False" does not. Thus, when it is determined that the collision body satisfies the transparency condition, its transparency may be set to a preset value; when it does not, the position of the virtual camera may be adjusted so that the virtual camera is in a position convenient for the player to observe the game scene. Fig. 3 is a schematic diagram illustrating a process of transparentizing a collision body. As shown in the figure, the virtual camera 310 may capture the game scene in which the collision body 320 and a virtual object are located; when it is determined that the collision body 320 satisfies the transparency condition, the collision body 320 may be set to be semitransparent, obtaining the game screen shown in fig. 4. It can be seen that, after the transparentization processing, both the collision body and the part of the game scene blocked by it can be displayed in the game screen.
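As a minimal sketch of this tag-based decision, assuming a boolean tag and an illustrative preset transparency of 0.5 (the Collider fields and the reposition callback are names invented for the sketch, not taken from this disclosure), the check might look like:

```python
# Sketch only: decide between transparentization and camera repositioning
# based on the collider's tag data. Field names and the preset value are
# illustrative assumptions; a volume-threshold test as described above
# could be folded into the same condition.
from dataclasses import dataclass
from typing import Callable

PRESET_TRANSPARENCY = 0.5  # preset value, e.g. 0.5 or 0.7 as noted above

@dataclass
class Collider:
    tag_transparent: bool       # boolean tag data: True -> may be made transparent
    transparency: float = 1.0   # 1.0 = fully opaque

def handle_collision(collider: Collider, reposition_camera: Callable[[], None]) -> None:
    """Transparentize the collider if its tag permits; otherwise move the camera."""
    if collider.tag_transparent:
        collider.transparency = PRESET_TRANSPARENCY
    else:
        reposition_camera()  # e.g. the ray-based target search sketched later
```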
As previously described, one or more virtual objects may be present in the game screen. In the present exemplary embodiment, the virtual camera may move along with the movement of a virtual object, which may be a game character controlled by a player, such as a character, an animal, or a vehicle. When it is determined that the collision body does not satisfy the transparency condition, in order to enable the game screen to present the game scene corresponding to the virtual object and the collision body, and to facilitate observation of the position and area of the virtual object, in an alternative embodiment, the position of the virtual camera may be adjusted by:
when the virtual object is detected to be close to the collision body, a plane figure is established at the preset position of the virtual object, and the plane figure is parallel to the projection plane of the game picture;
determining a plurality of vertexes in the plane graph, and acquiring a plurality of rays emitted to each vertex by the virtual camera;
determining an intersecting ray that intersects the collider among the plurality of rays, and an intersection position of the intersecting ray with the collider;
calculating the distance between the intersection position and the virtual object, and determining the corresponding intersection position as a target position when the distance is shortest;
the virtual camera is moved from the current position to the target position.
In the above steps, if the game is viewed from a first-person perspective, that is, the camera viewing angle coincides with the viewing angle of the virtual object, the preset position may be set as the viewing position of the virtual object controlled by the player; for example, when the virtual object is a character, the preset position may be the eye position of the virtual object. The projection plane of the game screen may also be referred to as the near clip plane of the virtual camera; specifically, fig. 5 shows an imaging schematic diagram of a virtual camera, where 510 is the virtual camera and 520 is the projection plane of the game screen. The plane figure may be a rectangle, a circle, a polygon, or the like, which is not particularly limited in this exemplary embodiment. The plurality of vertices of the plane figure can be set according to its shape; for example, when the plane figure is a rectangle, the vertices may be the four corners of the rectangle, or may also include the center point of each side, or the plurality of vertices may be set at any positions of the plane figure according to actual requirements.
When the collision body does not satisfy the transparency condition, it indicates that the collision body obstructs the player's view. Therefore, when it is detected that the virtual object is close to the collision body, a plane figure parallel to the projection plane of the game screen is established at the preset position of the virtual object; the size of the plane figure can generally be smaller than or equal to the maximum area captured by the virtual camera at the corresponding position. A plurality of vertices are determined in the plane figure, a ray emitted from the virtual camera to each vertex is acquired, and whether each ray intersects the collision body is judged. If a plurality of rays intersect the collision body, the intersection position of each such ray with the collision body is determined, the distance between each intersection position and the virtual object is calculated, and the intersection position corresponding to the minimum distance is determined as the target position, so as to move the virtual camera from the current position to the target position. If only one ray intersects the collision body, the intersection position of that ray with the collision body is determined as the target position, and the virtual camera is moved to the target position. If no ray intersects the collision body, it indicates to some extent that the collision body does not affect the player's field of view, so the position of the virtual camera may be left unadjusted, i.e., the virtual camera is kept at the current position.
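A minimal sketch of this ray-based target search follows; the Vec3 helper and the intersect_collider callback (returning a hit point or None) are illustrative assumptions standing in for a game engine's raycast API.

```python
# Sketch only: among camera-to-vertex rays that hit the collider, pick
# the hit point closest to the virtual object as the camera's target.
import math
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def dist(a: Vec3, b: Vec3) -> float:
    return math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2)

def find_target_position(
    camera_pos: Vec3,
    plane_vertices: Iterable[Vec3],
    intersect_collider: Callable[[Vec3, Vec3], Optional[Vec3]],
    subject_pos: Vec3,
) -> Optional[Vec3]:
    """Cast a ray from the camera to each vertex of the plane figure and
    return the collider hit point closest to the virtual object, or None
    if no ray intersects (the camera then stays where it is)."""
    best_hit: Optional[Vec3] = None
    best_d = math.inf
    for vertex in plane_vertices:
        hit = intersect_collider(camera_pos, vertex)  # hit point or None
        if hit is not None:
            d = dist(hit, subject_pos)
            if d < best_d:
                best_hit, best_d = hit, d
    return best_hit
```

If a threshold on the number of intersecting rays is desired, as described below, the same loop can simply count hits and return None until the threshold is exceeded.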
Fig. 6 shows a schematic diagram for adjusting the position of the virtual camera, and when it is determined that the collision volume 320 does not satisfy the transparency condition, the target position of the virtual camera 310 can be determined by the above-described method, so that the position of the virtual camera 310 is moved from the position shown on the left side in the figure to the target position shown on the right side in the figure. Fig. 7 shows a schematic diagram of a game screen obtained by the above method, and it can be seen that by moving the virtual camera from position 710 to position 720, the virtual camera can display the game screen on which the virtual object is located without being visually affected by the collision volume.
In an alternative embodiment of the above method for determining the target position of the virtual camera with multiple rays, a number threshold may be set for the rays, so that only when the number of rays intersecting the collision body is greater than the number threshold are the intersection positions determined, among which the intersection position closest to the virtual object is taken as the target position.
Further, after the virtual camera is moved to the target position by the above method, in order to maintain the integrity and continuity of the game screen after the virtual object is moved to a position away from the collision volume, in an alternative embodiment, the position of the virtual camera may be restored to the current position when the virtual object is detected to be away from the collision volume.
In some cases, there is no collision body in the moving direction of the virtual camera in the game scene, but the player triggers a zoom-in operation through the graphical user interface, that is, an operation of moving the virtual camera closer to a certain position in the game scene. In this case, the position before the zoom-in operation is triggered may be taken as the current position, and the position the virtual camera is to be moved closer to as the target position; the virtual camera may then be moved according to the zoom-in operation, so that the game screen presents the visual effect of the camera approaching the target position at a certain speed. Meanwhile, after the zoom-in operation is completed, the position of the virtual camera can be restored from the target position to the current position, that is, the position before the zoom-in operation was triggered. It should be noted that, in response to the player's zoom-in operation on the virtual camera, the terminal device may also calculate the moving speed of the virtual camera with the interpolation function of step S110, so as to move the virtual camera according to that speed while it approaches the target position from the current position.
Furthermore, in order to facilitate the developer to observe the position of the virtual camera, the display of the game screen, and the like, when the virtual camera is controlled to move in the game scene, in an alternative embodiment, the observation information of the virtual camera may be displayed in the game screen. The observation information of the virtual camera may include any one or more of a position of the virtual camera at the current time, a motion trajectory, a projection plane of the game screen, and the like.
By displaying the observation information of the virtual camera in the game picture when the virtual camera is controlled to move in the game scene, the visualization degree of the virtual camera in the game picture can be improved, developers can determine the state of the virtual camera according to the observation information of the virtual camera, and the development efficiency of the game is improved to a certain extent.
The present exemplary embodiment further provides a flowchart of another control method for a virtual camera, which, as shown in fig. 8, may include the following steps S801 to S810:
step S801, receiving an operation instruction of the terminal equipment for the virtual camera.
Step S802, responding to the operation instruction aiming at the virtual camera, and calculating the moving speed of the virtual camera by adopting an interpolation function.
Step S803, controlling the virtual camera to move in the game scene according to the moving speed.
Step S804, when the virtual camera is controlled to move in the game scene, acquiring a collision body in the game scene that collides with the virtual camera; when a collision body is acquired, step S805 is executed to determine the transparency condition of the collision body and judge whether the collision body satisfies it; when no collision body is acquired, step S808 is executed to detect whether a zoom-in operation of the virtual camera is triggered.
Step S805, determining the transparency condition of the collision body and judging whether the collision body satisfies it; when the collision body satisfies the transparency condition, step S806 is executed to set the transparency of the collision body to a preset value; when it is determined that the collision body does not satisfy the transparency condition, step S807 is executed to calculate the target position of the virtual camera.
Step S806, setting the transparency of the collision body to a preset value.
Step S807, calculating the target position of the virtual camera.
Specifically, a plane figure may be established at the preset position of the virtual object, a plurality of vertices determined in the plane figure, and a ray from the virtual camera to each vertex acquired; among the plurality of rays, the intersecting rays that intersect the collision body and their intersection positions with the collision body are determined, and the distance between each intersection position and the virtual object is calculated, so that the intersection position with the shortest distance is determined as the target position.
Step S808, detecting whether a zoom-in operation of the virtual camera is triggered; upon determining that the zoom-in operation is triggered, step S809 is executed to move the virtual camera from the current position to the target position, namely the position corresponding to the zoom-in operation; when it is determined that no zoom-in operation is triggered, step S802 is executed, in which the moving speed of the virtual camera is calculated using the interpolation function, so as to control the virtual camera to move in the game scene according to that speed.
Step S809, moving the virtual camera from the current position to the target position.
After the target position of the virtual camera is calculated in step S807, the virtual camera may be moved from the current position to the target position; likewise, when it is determined in step S808 that the terminal device has triggered a zoom-in operation of the virtual camera, the virtual camera may be moved from the current position to the target position of the zoom-in operation. By moving the position of the virtual camera, the virtual camera can be brought into a position convenient for observing the game scene in which the collision body is located.
Step S810, restoring the position of the virtual camera to the current position.
Specifically, after a collision body is detected and the position of the virtual camera is moved to the target position, if it is detected that the virtual object moves away from the collision body, the virtual camera may be restored from the target position to the current position. Similarly, when no collision body is acquired but the player has triggered a zoom-in operation of the virtual camera on the terminal device, once the completion of the operation is detected, the position of the virtual camera can be automatically restored to the current position, that is, the position before the zoom-in operation was triggered.
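Pulling steps S801 to S810 together, a per-frame decision loop might be sketched as follows; this reuses the helpers from the earlier sketches, and every object attribute and method name here (scene queries, saved positions, zoom detection) is an illustrative assumption rather than text from this disclosure.

```python
# Sketch only: per-frame decision flow of steps S801-S810, assuming
# camera_speed, PRESET_TRANSPARENCY, and find_target_position from the
# earlier sketches; all other names are illustrative assumptions.
def camera_update(camera, scene, frame_time):
    speed = camera_speed(frame_time)                      # S802: interpolated speed
    camera.advance(speed * frame_time)                    # S803: move in the scene
    collider = scene.collider_hitting(camera)             # S804: collision query
    if collider is not None:
        if collider.tag_transparent:                      # S805: transparency condition
            collider.transparency = PRESET_TRANSPARENCY   # S806: preset transparency
        else:
            target = find_target_position(                # S807: ray-based search
                camera.position, camera.plane_vertices(),
                collider.intersect, scene.subject_position())
            if target is not None:
                camera.saved_position = camera.position
                camera.move_to(target)                    # S809: move to target
    elif camera.zoom_in_triggered():                      # S808: zoom-in detection
        camera.saved_position = camera.position
        camera.move_to(camera.zoom_target())              # S809: move to target
    if camera.should_restore(scene):                      # S810: restore when the
        camera.move_to(camera.saved_position)             #       view is clear again
```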
In summary, according to the control method of the virtual camera in the present exemplary embodiment, the virtual camera may be controlled to move in the game scene in response to the operation instruction for the virtual camera, and when the virtual camera is controlled to move in the game scene, a collision body colliding with the virtual camera in the game scene may be acquired, and the collision body may be subjected to the transparentization processing or the position of the virtual camera may be adjusted. On one hand, the exemplary embodiment can determine the moving mode of the virtual camera in real time by controlling the virtual camera to move in the game scene in response to the operation instruction for the virtual camera, thereby improving the switching fluency of the game picture and the detail expression degree of the game picture and also enhancing the operation flexibility of the virtual camera; on the other hand, by acquiring a collision body which collides with the virtual camera in the game scene, and carrying out transparency processing on the collision body or adjusting the position of the virtual camera, smooth switching of game pictures can be further realized, and the game experience of the player is also improved.
Further, the present exemplary embodiment also provides a control system of a virtual camera, which may be applied to a terminal device, and a graphical user interface may be provided in a display screen of the terminal device, and the graphical user interface may be used to display a game picture determined in a game scene by the virtual camera. Fig. 9 shows a block diagram of a control system of the virtual camera in the present exemplary embodiment, and as shown, the control system 900 of the virtual camera may include: a receiving module 910, configured to receive an operation instruction of a terminal device for a virtual camera; a moving module 920, configured to control the virtual camera to move in the game scene in response to the operation instruction for the virtual camera; a collision processing module 930, configured to acquire a collision object that collides with the virtual camera in a game scene, and perform a transparency process on the collision object or adjust a position of the virtual camera; the display module 940 may be configured to display the game screen determined by the virtual camera.
In an exemplary embodiment of the present disclosure, the moving module 920 may be configured to determine a moving speed of the virtual camera according to a frame time of the game screen to move the virtual camera in the game scene according to the moving speed.
In an exemplary embodiment of the disclosure, the moving module 920 may be further configured to use an interpolation function, and use the frame time of the game screen at each time as an input of the interpolation function to obtain the moving speed of the virtual camera at the corresponding time.
In an exemplary embodiment of the present disclosure, the collision processing module 930 may be configured to acquire a collision volume colliding with the virtual camera in the game scene, determine a transparency condition of the collision volume, and perform a transparentization process on the collision volume or adjust the position of the virtual camera according to the transparency condition.
In an exemplary embodiment of the present disclosure, the collision processing module 930 may be further configured to acquire tag data of the collision volume, to determine whether the collision volume satisfies the transparency condition according to the tag data, to set the transparency of the collision volume to a preset value when it is determined that the collision volume satisfies the transparency condition, and to adjust the position of the virtual camera when it is determined that the collision volume does not satisfy the transparency condition.
In an exemplary embodiment of the present disclosure, the game screen may include a virtual object, and the collision processing module 930 may be further configured to establish a planar graph at a preset position of the virtual object when it is detected that the virtual object is close to the collision volume, the planar graph being parallel to a projection plane of the game screen, determine a plurality of vertices in the planar graph, acquire a plurality of rays emitted by the virtual camera to the vertices, determine an intersecting ray intersecting the collision volume among the plurality of rays, and an intersecting position of the intersecting ray with the collision volume, calculate a distance between the intersecting position and the virtual object, determine a corresponding intersecting position as a target position when the distance is shortest, and move the virtual camera from the current position to the target position.
In an exemplary embodiment of the present disclosure, after moving the virtual camera from the current position to the target position, the collision processing module 930 may also be configured to restore the position of the virtual camera to the current position when the virtual object is detected to be away from the collision volume.
In an exemplary embodiment of the present disclosure, when controlling the virtual camera to move in the game scene, the display module 940 may be further configured to display observation information of the virtual camera in the game screen, the observation information including any one or more of a position, a motion trajectory, and a projection plane of the game screen of the virtual camera.
In an exemplary embodiment of the present disclosure, the movement of the virtual camera in the game scene includes translation and/or rotation.
According to the control system of the virtual camera in the present exemplary embodiment, an operation instruction of the terminal device for the virtual camera may be received through the receiving module, the virtual camera is controlled to move in the game scene through the moving module in response to the operation instruction, and a collision body colliding with the virtual camera in the game scene is acquired through the collision processing module, so as to perform a transparentization process on the collision body or adjust the position of the virtual camera, and at the same time, a game screen is displayed through the display module. On one hand, the system can control the virtual camera in the game, for example, the virtual camera can be controlled to move in a game scene, so that the smooth switching of game pictures is realized; on the other hand, through the modules, the control logic of the virtual camera and the function modules of the game can be decoupled, so that the control code of the virtual camera can be migrated to other applications, and meanwhile, when a developer maintains the game, the developer can only maintain the control logic of the virtual camera without maintaining the function modules of the game, so that the development efficiency of the game can be improved.
The specific details of each module in the system have been described in detail in the method section; for any scheme not disclosed here, refer to that section, and the details are therefore not repeated.
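One detail worth sketching here is the frame-time-driven movement speed recited in claims 2 and 3 below, where the frame time of the game screen at each moment is fed into an interpolation function to obtain the moving speed at that moment. A minimal linear-interpolation version follows; the speed range and the frame-time bounds are chosen purely for illustration and are not values from the present disclosure.

    def movement_speed(frame_time, slow=2.0, fast=8.0, t_min=1 / 120, t_max=1 / 30):
        """Map the current frame time onto a moving speed by linear
        interpolation: longer frames yield faster movement, so the camera
        covers a steadier distance per second regardless of frame rate."""
        t = min(max(frame_time, t_min), t_max)  # clamp to the supported range
        alpha = (t - t_min) / (t_max - t_min)   # 0.0 at t_min, 1.0 at t_max
        return slow + alpha * (fast - slow)

    # Per-frame displacement then scales with the interpolated speed:
    # camera_position += direction * movement_speed(dt) * dt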
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method, or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system."
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to perform the steps according to various exemplary embodiments of the disclosure described in the above-mentioned "exemplary methods" section of this specification, when the program product is run on the terminal device.
Referring to fig. 10, a program product 1000 for implementing the above method according to an exemplary embodiment of the present disclosure is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Program product 1000 may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
The exemplary embodiment of the present disclosure also provides a terminal device capable of implementing the above method. A terminal device 1100 according to this exemplary embodiment of the present disclosure is described below with reference to fig. 11. The terminal device 1100 shown in fig. 11 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 11, terminal device 1100 may take the form of a general purpose computing device. The components of terminal device 1100 may include, but are not limited to: the at least one processing unit 1110, the at least one memory unit 1120, a bus 1130 connecting different system components (including the memory unit 1120 and the processing unit 1110), and a display unit 1140.
Among them, the display unit 1140 may be a display screen; the memory unit 1120 stores program code that may be executed by the processing unit 1110 to cause the processing unit 1110 to perform steps according to various exemplary embodiments of the present disclosure as described in the "exemplary methods" section above in this specification. For example, processing unit 1110 may perform the method steps shown in fig. 1 and 8, and so on.
The storage unit 1120 may include readable media in the form of volatile storage units, such as a random access memory unit (RAM) 1121 and/or a cache memory unit 1122, and may further include a read-only memory unit (ROM) 1123.
The storage unit 1120 may also include a program/utility 1124 having a set (at least one) of program modules 1125, such program modules 1125 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 1130 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, and a processing unit or local bus using any of a variety of bus architectures.
Terminal device 1100 can also communicate with one or more external devices 1200 (e.g., keyboard, pointing device, Bluetooth device, etc.), with one or more devices that enable a user to interact with the terminal device 1100, and/or with any devices (e.g., router, modem, etc.) that enable the terminal device 1100 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 1150. Also, terminal device 1100 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network such as the Internet) via network adapter 1160. As shown, the network adapter 1160 communicates with the other modules of the terminal device 1100 over a bus 1130. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with terminal device 1100, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to exemplary embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by a plurality of modules or units.
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the exemplary embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the exemplary embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (12)

1. A method for controlling a virtual camera, wherein a graphical user interface including a game screen determined in a game scene by the virtual camera is provided by a terminal device, the method comprising:
controlling the virtual camera to move in the game scene in response to an operation instruction for the virtual camera;
acquiring, when the virtual camera is controlled to move in the game scene, a collision volume colliding with the virtual camera in the game scene, and performing transparentization processing on the collision volume or adjusting the position of the virtual camera.
2. The control method according to claim 1, wherein the controlling the virtual camera to move in the game scene in response to the operation instruction for the virtual camera includes:
determining a moving speed of the virtual camera according to a frame time of the game screen, so as to move the virtual camera in the game scene according to the moving speed.
3. The control method according to claim 2, wherein the determining the moving speed of the virtual camera according to the frame time of the game screen includes:
using an interpolation function, and taking the frame time of the game screen at each moment as an input of the interpolation function, to obtain the moving speed of the virtual camera at the corresponding moment.
4. The control method according to claim 1, wherein the acquiring a collision volume which collides with the virtual camera in the game scene, and performing transparentization processing on the collision volume or adjusting the position of the virtual camera comprises:
acquiring a collision volume which collides with the virtual camera in the game scene;
determining a transparency condition of the collision volume;
and performing transparentization processing on the collision volume or adjusting the position of the virtual camera according to the transparency condition.
5. The control method according to claim 4, wherein the performing transparentization processing on the collision volume or adjusting the position of the virtual camera according to the transparency condition comprises:
acquiring tag data of the collision volume, so as to determine, according to the tag data, whether the collision volume satisfies the transparency condition;
setting the transparency of the collision volume to a preset value when it is determined that the collision volume satisfies the transparency condition;
adjusting the position of the virtual camera when it is determined that the collision volume does not satisfy the transparency condition.
6. The control method according to claim 5, wherein the game screen includes a virtual object, and wherein the adjusting the position of the virtual camera when it is determined that the collision volume does not satisfy the transparency condition comprises:
establishing, when the virtual object is detected to be close to the collision volume, a plane figure at a preset position of the virtual object, the plane figure being parallel to a projection plane of the game screen;
determining a plurality of vertices in the plane figure, and acquiring a plurality of rays emitted by the virtual camera toward the vertices;
determining, among the plurality of rays, an intersecting ray that intersects the collision volume, and an intersection position of the intersecting ray with the collision volume;
calculating the distance between each intersection position and the virtual object, and determining the intersection position with the shortest distance as a target position;
moving the virtual camera from a current position to the target position.
7. The control method according to claim 6, wherein after moving the virtual camera from a current position to the target position, the method further comprises:
restoring the position of the virtual camera to the current position when it is detected that the virtual object has moved away from the collision volume.
8. The control method according to any one of claims 1 to 7, wherein, when controlling the virtual camera to move in the game scene, the method further comprises:
displaying observation information of the virtual camera in the game screen, wherein the observation information comprises any one or more of the position of the virtual camera, its motion trail, and the projection plane of the game screen.
9. The control method according to claim 1, wherein the movement of the virtual camera in the game scene comprises translation and/or rotation.
10. A control system of a virtual camera, characterized in that a graphical user interface is provided by a terminal device, the graphical user interface including a game screen determined by the virtual camera in a game scene, the system comprising:
the receiving module is used for receiving an operation instruction of the terminal equipment for the virtual camera;
a moving module for controlling the virtual camera to move in the game scene in response to the operation instruction for the virtual camera;
the collision processing module is used for acquiring a collision volume which collides with the virtual camera in the game scene, and performing transparentization processing on the collision volume or adjusting the position of the virtual camera;
and the display module is used for displaying the game screen determined by the virtual camera.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1-9.
12. A terminal device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
a display screen for displaying a graphical user interface;
wherein the processor is configured to perform the method of any of claims 1-9 via execution of the executable instructions.
CN202011104053.XA 2020-10-15 2020-10-15 Control method and system of virtual camera, storage medium and terminal equipment Pending CN112156467A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011104053.XA CN112156467A (en) 2020-10-15 2020-10-15 Control method and system of virtual camera, storage medium and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011104053.XA CN112156467A (en) 2020-10-15 2020-10-15 Control method and system of virtual camera, storage medium and terminal equipment

Publications (1)

Publication Number Publication Date
CN112156467A (en) 2021-01-01

Family

ID=73867120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011104053.XA Pending CN112156467A (en) 2020-10-15 2020-10-15 Control method and system of virtual camera, storage medium and terminal equipment

Country Status (1)

Country Link
CN (1) CN112156467A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101025628A (en) * 2007-03-23 2007-08-29 北京大学 Flow field based intelligent robot obstacle-avoiding method
US20130109468A1 (en) * 2011-10-28 2013-05-02 Nintendo Co., Ltd. Game processing system, game processing method, game processing apparatus, and computer-readable storage medium having game processing program stored therein
CN104866695A (en) * 2015-06-24 2015-08-26 武汉大学 GPU-accelerated fluid-structure coupling simulation method through immersion boundary and lattice Boltzmann methods
CN108139807A (en) * 2015-11-20 2018-06-08 谷歌有限责任公司 Stablized using the electronical display of pixel speed
CN111052735A (en) * 2017-09-07 2020-04-21 索尼公司 Image processing apparatus, image processing method, and image display system
CN110166758A (en) * 2019-06-24 2019-08-23 京东方科技集团股份有限公司 Image processing method, device, terminal device and storage medium
CN111467801A (en) * 2020-04-20 2020-07-31 网易(杭州)网络有限公司 Model blanking method and device, storage medium and electronic equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112891926A (en) * 2021-02-19 2021-06-04 西安万像电子科技有限公司 Game terminal, cloud server, cloud game control method, and storage medium
CN113476840A (en) * 2021-07-06 2021-10-08 网易(杭州)网络有限公司 Special effect processing method, device, equipment and storage medium in game
US11748939B1 (en) * 2022-09-13 2023-09-05 Katmai Tech Inc. Selecting a point to navigate video avatars in a three-dimensional environment

Similar Documents

Publication Publication Date Title
CN107913520B (en) Information processing method, information processing device, electronic equipment and storage medium
CN112156467A (en) Control method and system of virtual camera, storage medium and terminal equipment
US20220249949A1 (en) Method and apparatus for displaying virtual scene, device, and storage medium
US10191612B2 (en) Three-dimensional virtualization
CN108776544B (en) Interaction method and device in augmented reality, storage medium and electronic equipment
JP7422876B2 (en) Display method and device based on augmented reality, and storage medium
KR20090084900A (en) Interacting with 2d content on 3d surfaces
US10789766B2 (en) Three-dimensional visual effect simulation method and apparatus, storage medium, and display device
CN111481923B (en) Rocker display method and device, computer storage medium and electronic equipment
CN110286906B (en) User interface display method and device, storage medium and mobile terminal
Maslych et al. Toward intuitive acquisition of occluded vr objects through an interactive disocclusion mini-map
EP3594906B1 (en) Method and device for providing augmented reality, and computer program
US10878618B2 (en) First-person perspective-mediated reality
CN110215686B (en) Display control method and device in game scene, storage medium and electronic equipment
US20230267667A1 (en) Immersive analysis environment for human motion data
CN112473138B (en) Game display control method and device, readable storage medium and electronic equipment
CN113457144B (en) Virtual unit selection method and device in game, storage medium and electronic equipment
CN108499102B (en) Information interface display method and device, storage medium and electronic equipment
CN111973984A (en) Coordinate control method and device for virtual scene, electronic equipment and storage medium
CN111950053B (en) Building model roaming function optimization method and related device based on Threejs
CN116129085B (en) Virtual object processing method, device, storage medium, and program product
CN113112613B (en) Model display method and device, electronic equipment and storage medium
KR102392675B1 (en) Interfacing method for 3d sketch and apparatus thereof
US20240112405A1 (en) Data processing method, electronic device, and stroage medium
CN114791766A (en) AR device-based operation method, device, medium and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination