CN117950547A - Display method, apparatus, device, storage medium, and program product

Display method, apparatus, device, storage medium, and program product

Info

Publication number
CN117950547A
Authority
CN
China
Prior art keywords
virtual object
starting
terminal
speed
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211338462.5A
Other languages
Chinese (zh)
Inventor
李铭
林全胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202211338462.5A
Publication of CN117950547A
Legal status: Pending

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The present disclosure relates to a display method, apparatus, device, storage medium, and program product, the method comprising: determining a starting position, a starting speed and a starting time of a virtual object in response to an operation on the virtual object; determining a motion trail according to the starting position and the starting speed, and displaying the virtual scene according to the motion trail; transmitting the starting position, the starting speed and the starting time; after receiving a predicted position, determining the current position of the virtual object, wherein the predicted position is predicted by a second terminal based on the starting position, the starting speed, the starting time and a preset network delay; and displaying the virtual scene based on the current position, the predicted position and the motion trail. In the embodiments of the present disclosure, the motion trail of the virtual object in the virtual scene is controlled and displayed, so that the motion trail is smoother and the viewing experience of the user is improved.

Description

Display method, apparatus, device, storage medium, and program product
Technical Field
The present disclosure relates to the field of computer processing technology, and in particular, to a display method, apparatus, device, storage medium, and program product.
Background
With the continuous development of internet technology, terminal devices such as smart phones, personal computers and tablet computers are widely used, and controlling the movement of a virtual object through two interacting terminals is a common application scenario.
However, due to network delay, the motion track formed by the virtual object during movement is not smooth, resulting in a poor viewing experience.
Disclosure of Invention
In order to solve the above technical problems, the embodiments of the present disclosure provide a display method, apparatus, device, storage medium, and program product, which control and display the motion trail of a virtual object in a virtual scene, so that the motion trail is smoother and the interactive experience of the user is improved.
In a first aspect, an embodiment of the present disclosure provides a display method, where the method is applied to a first terminal, where the first terminal is configured to display a virtual scene, where the virtual scene includes a virtual object, and the method includes:
Determining a starting position, a starting speed and a starting time of the virtual object in response to an operation on the virtual object;
determining a motion trail according to the starting position and the starting speed, and displaying the virtual scene according to the motion trail;
Transmitting the starting position, the starting speed and the starting time;
after receiving a predicted position, determining the current position of the virtual object, wherein the predicted position is predicted by a second terminal based on the starting position, the starting speed, the starting time and a preset network delay;
and displaying the virtual scene based on the current position, the predicted position and the motion trail.
In a second aspect, an embodiment of the present disclosure provides a display method applied to a second terminal, the method including:
receiving a starting position, a starting speed and a starting time of a virtual object;
predicting a predicted position of the virtual object based on the starting position, the starting speed, the starting time and a preset network delay;
and sending the predicted position so that the first terminal can display the virtual scene after receiving the predicted position.
In a third aspect, an embodiment of the present disclosure provides a display apparatus, where the apparatus is configured in a first terminal, the first terminal is configured to display a virtual scene, the virtual scene includes a virtual object, and the apparatus includes:
A virtual object parameter determining module, configured to determine a starting position, a starting speed, and a starting time of the virtual object in response to an operation on the virtual object;
The virtual scene first display module is used for determining a motion trail according to the starting position and the starting speed and displaying the virtual scene according to the motion trail;
the virtual object parameter sending module is used for sending the starting position, the starting speed and the starting time;
The current position determining module is used for determining the current position of the virtual object after receiving a predicted position, wherein the predicted position is obtained by predicting a second terminal based on the starting position, the starting speed, the starting time and preset network delay;
And the virtual scene second display module is used for displaying the virtual scene based on the current position, the predicted position and the motion trail.
In a fourth aspect, an embodiment of the present disclosure provides a display device configured in a second terminal, the device including:
the first parameter receiving module is used for receiving the starting position, the starting speed and the starting time of the virtual object;
The position prediction module is used for predicting the predicted position of the virtual object based on the starting position, the starting speed, the starting time and the preset network delay;
and the predicted position sending module is used for sending the predicted position so that the first terminal can display the virtual scene after receiving the predicted position.
In a fifth aspect, embodiments of the present disclosure provide an electronic device, including:
one or more processors;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the display method of any of the first aspects described above.
In a sixth aspect, an embodiment of the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the display method according to any one of the first aspects above.
In a seventh aspect, embodiments of the present disclosure provide a computer program product comprising a computer program or instructions which, when executed by a processor, implement a display method as described in any one of the first aspects above.
Embodiments of the present disclosure provide a display method, apparatus, device, storage medium, and program product, the method including: determining a starting position, a starting speed and a starting time of the virtual object in response to an operation on the virtual object; determining a motion trail according to the starting position and the starting speed, and displaying the virtual scene according to the motion trail; transmitting the starting position, the starting speed and the starting time; after receiving a predicted position, determining the current position of the virtual object, wherein the predicted position is predicted by a second terminal based on the starting position, the starting speed, the starting time and a preset network delay; and displaying the virtual scene based on the current position, the predicted position and the motion trail. In the embodiments of the present disclosure, after responding to the operation on the virtual object, the first terminal obtains the starting position, the starting speed and the starting time, predicts and displays the motion trail, and sends the starting position, the starting speed and the starting time to the second terminal; the second terminal sends the predicted position back to the first terminal, so that the first terminal can display the virtual scene according to the predicted position and the motion trail. The motion trail of the virtual object in the virtual scene is thereby controlled and displayed, making the motion trail smoother and improving the viewing experience of the user.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a schematic illustration of a displayed scene of an embodiment of the present disclosure;
FIG. 2 is a flow chart of a display method according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram of a motion profile in an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a virtual object location in an embodiment of the present disclosure;
FIG. 5 is a flow chart of a display method according to an embodiment of the disclosure;
fig. 6 is a schematic structural diagram of a display device according to an embodiment of the disclosure;
fig. 7 is a schematic structural diagram of a display device according to an embodiment of the disclosure;
Fig. 8 is a schematic structural diagram of an electronic device in an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure have been shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a" and "a plurality" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Before the embodiments of the present disclosure are explained in further detail, the terms involved in the embodiments of the present disclosure are explained; these explanations apply to the following description.
With the continuous development of internet technology, terminal devices such as smart phones, personal computers and tablet computers are widely used, and controlling the movement of a virtual object through two interacting terminals is a common application scenario.
In this embodiment, the operating mode of the game is first briefly described. In this mode, the entity that runs the game program is separated from the entity that presents the game picture: the game client is used for receiving and sending data and presenting the game picture, and may be, for example, a display device with a data transmission function near the user side, such as a mobile terminal, a television, a computer or a handheld computer; the terminal device that performs the game data processing is a game server. When playing the game, the player operates the game client to send an operation instruction to the game server; the game server runs the game according to the operation instruction, encodes and compresses data such as the game picture, and returns the data to the game client through the network; finally, the game client decodes the data and outputs the game picture.
Taking a ball game as an example, a scene in which two interacting terminals control the movement of a virtual object is briefly introduced. Two users hold different terminals, and the control right of the virtual object is exchanged through terminal operations. After the first user serves on the first terminal, the first terminal sends a control right switching request to the second terminal where the second user is located, so that the second user holds the control right of the virtual object. The control right switching request carries the starting position of the virtual ball when the first user performs the serve operation, the starting time at which the serve operation is performed, and the starting speed of the virtual ball. After the second user successfully holds the control right of the virtual object, the motion trail of the virtual object is calculated based on the starting position, the starting time and the starting speed and is sent to the first terminal where the first user is located, so that the first user can synchronously watch the motion trail of the virtual object. The motion trajectory of the virtual tennis ball displayed in the virtual scene is determined by the user terminal that holds the control right of the virtual tennis ball. Specifically, if the first terminal holds the control right of the virtual ball, the motion trajectory of the virtual ball displayed by the first terminal is calculated by the first terminal, and the motion trajectory of the virtual ball displayed by the second terminal interacting with the first terminal is also calculated by the first terminal and then sent to the second terminal. For example, after user A hits the ball, the starting position, the starting time and the starting speed are sent to user B; user B obtains the starting position, the starting time and the starting speed as well as the control right of the virtual tennis ball; at this time, the terminal of user B calculates a motion track from the starting position, the starting time and the starting speed, displays the virtual scene according to the motion track, and meanwhile sends the motion track to user A, so that user A displays the virtual scene according to the motion track.
After the first terminal performs the serve operation, the starting position, the starting time and the starting speed need to be sent to the second terminal, and the second terminal needs to send the calculated motion trail back to the first terminal before the first terminal can display the virtual scene corresponding to the motion trail. Sending data from the first terminal to the second terminal and from the second terminal back to the first terminal takes a certain amount of time, namely the network delay, so the virtual object in the virtual scene only starts to move according to the motion trail after the motion trail sent by the second terminal has been received. This causes a serious sense of delay, i.e. "why does the tennis ball only start to move a while after I clearly hit it?", resulting in a poor user gaming experience.
In order to solve the above problems, in the embodiments of the present disclosure, after acquiring the starting position, the starting speed and the starting time, the first terminal predicts the motion trail and displays it, and sends the starting position, the starting speed and the starting time to the second terminal; the second terminal computes the predicted position and sends it to the first terminal, so that the first terminal can display the virtual scene according to the predicted position and the motion trail. The motion trail of the virtual object in the virtual scene is thereby controlled and displayed, making the motion trail smoother and improving the viewing experience of the user.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. It should be noted that the same reference numerals in different drawings will be used to refer to the same elements already described.
Fig. 1 is a system that may be used to implement the display method provided by embodiments of the present disclosure. As shown in fig. 1, the system 100 may include a user terminal 110, a network 120, a server 130, and a database 140. For example, the system 100 may be used to implement the display method described in any of the embodiments of the present disclosure.
It is understood that user terminal 110 may be any type of electronic device capable of performing data processing, which may include, but is not limited to: mobile handsets, sites, units, devices, multimedia computers, multimedia tablets, internet nodes, communicators, desktop computers, laptop computers, notebook computers, netbook computers, tablet computers, personal communication system (PCS) devices, personal navigation devices, personal digital assistants (PDAs), audio/video players, digital cameras/camcorders, positioning devices, television receivers, radio broadcast receivers, electronic book devices, gaming devices, or any combination thereof, including accessories and peripherals for these devices, or any combination thereof.
The user may operate through an application installed on the user terminal 110; the application transmits a hitting instruction input by the user to the server 130 through the network 120, and the user terminal 110 may also receive data transmitted from the server 130 through the network 120.
The embodiments of the present disclosure are not limited to the hardware system and the software system of the user terminal 110. For example, the user terminal 110 may be based on an ARM or X86 processor, may be provided with input/output devices such as a camera, a touch screen and a microphone, and may run an operating system such as Windows, iOS, Linux, Android or HarmonyOS.
The user terminal 110 may implement the display method provided in the embodiments of the present disclosure by running a process or a thread. In some examples, user terminal 110 may perform the display method using its built-in application. In other examples, user terminal 110 may perform the display method by invoking an application program stored external to user terminal 110.
Network 120 may be a single network or a combination of at least two different networks. For example, network 120 may include, but is not limited to, one or a combination of several of a local area network, a wide area network, a public network, a private network, and the like. The network 120 may be a computer network such as the Internet and/or various telecommunication networks (e.g., 3G/4G/5G mobile communication networks, Wi-Fi, Bluetooth, ZigBee, etc.); embodiments of the present disclosure are not limited in this regard.
The server 130 may be a single server, or a group of servers, or a cloud server, with each server within the group of servers being connected via a wired or wireless network. A server farm may be centralized, such as a data center, or distributed. The server 130 may be local or remote. The server 130 may communicate with the user terminal 110 through a wired or wireless network. Embodiments of the present disclosure are not limited to the hardware system and software system of server 130.
Database 140 may refer broadly to a device having a storage function. The database 140 is mainly used to store various data utilized, generated, and output by the user terminal 110 and the server 130 in operation. Database 140 may be local or remote. The database 140 may include various memories, such as random access memory (RAM), read-only memory (ROM), and the like. The above-mentioned storage devices are merely examples, and the storage devices that may be used by the system 100 are not limited in this regard. Embodiments of the present disclosure are not limited to the hardware system and software system of database 140, which may be, for example, a relational database or a non-relational database.
Database 140 may be interconnected or in communication with server 130 or a portion thereof via network 120, or directly with server 130, or a combination thereof.
In some examples, database 140 may be a stand-alone device. In other examples, database 140 may also be integrated in at least one of user terminal 110 and server 130. For example, the database 140 may be provided on the user terminal 110 or on the server 130. For another example, the database 140 may be distributed, with one portion being provided on the user terminal 110 and another portion being provided on the server 130.
Fig. 2 is a flowchart of a display method in an embodiment of the disclosure, where the embodiment may be applicable to a case of optimizing a motion trajectory displayed in a virtual scene, and the method may be performed by a display device, and the display device may be implemented in a software and/or hardware manner.
As shown in fig. 2, the display method provided in the embodiment of the present disclosure mainly includes steps S101 to S105.
S101, responding to the operation of the virtual object, and determining the starting position, the starting speed and the starting time of the virtual object.
The method is applied to a first terminal, where the first terminal is a terminal device held by a first user and is used for displaying a game scene, and the game scene includes one or more virtual objects.
In an alternative embodiment, the first terminal may be a local terminal device. The local terminal device stores the game program and is used for presenting the game picture. The local terminal device interacts with the player through a graphical user interface, that is, the game program is conventionally downloaded, installed and run on the electronic device. The manner in which the local terminal device provides the graphical user interface to the player may include a variety of ways; for example, the interface may be rendered for display on a display screen of the terminal, or provided to the player by holographic projection. For example, the local terminal device may include a processor configured to display the game scene in an artificial reality manner, where the game scene includes a virtual object and a control for controlling the ball, and the game scene is a real-time simulated physical game scene in which two parties compete, for example: virtual tennis, virtual table tennis, virtual football, virtual basketball, etc. In the embodiment of the present disclosure, the game scene is exemplified as a virtual tennis game scene.
The first user is the user currently holding the control right of the virtual object. The second terminal is a terminal device held by a second user, and the first user and the second user are the two opposing parties in the match. A user holding the control right of the virtual object refers to a user who can interact with the virtual object, for example, perform a serve or hitting operation on the virtual tennis ball; a user who does not hold the control right of the virtual object cannot interact with the virtual object.
In particular, a game application, such as a tennis game, runs on the terminal device and displays part or all of the game scene. The game scene includes at least two virtual characters, one controllable by the first user and one controllable by the second user. When the game scene is presented on the display screen of the terminal device, the display screen also shows a hitting control for controlling the virtual character to execute a hitting operation and a virtual object (such as a tennis ball), and the player can control the virtual object through the hitting control. When the game scene is presented by holographic projection, the virtual object is presented in the holographic projection, and the player can control the virtual object by simulating the limb action of swinging a racket.
In one embodiment of the present disclosure, when the first user holds the control right of the virtual object, a serve or hitting operation may be performed on the virtual object. In response to the operation on the virtual object, the first terminal acquires the starting position, the starting speed and the starting time of the virtual object at the moment of the hitting operation, and at the same time generates a control right switching request for the virtual object, where the control right switching request is used to request switching the control right of the virtual object to the other party, that is, so that the second user holds the control right of the virtual object.
S102, determining a motion track according to the initial position and the initial speed, and displaying the virtual scene according to the motion track.
In one embodiment of the present disclosure, in order to avoid the problem that, after the first user performs the hitting operation, the virtual object stays in place and does not move, causing a serious sense of delay, the first terminal, after responding to the hitting operation of the first user and acquiring the starting position, the starting speed and the starting time, can determine the motion track according to the starting position, the starting speed and the starting time of the virtual object at the moment of hitting, and display the motion track in the virtual scene. In this way, before the predicted position sent by the second terminal is received, the virtual object can already move in the virtual scene according to the motion track determined by the first terminal, which avoids the problem of the virtual object staying in place after the first user performs the hitting operation and the resulting serious sense of delay, and improves the user experience.
In one embodiment of the present disclosure, the motion track is predicted based on the starting position, the starting speed, the starting time and a preset network delay. When the motion track is predicted, the influence of the network delay on the motion track is taken into account, which further reduces the sense of delay and improves the user experience.
In one embodiment of the present disclosure, the motion trajectory of the virtual object in the game scene may be generated according to a Bezier curve. The Bezier curve, also written Bézier curve, is a mathematical curve used in two-dimensional graphics applications; it consists of line segments and nodes, where the nodes are draggable pivot points and the line segments behave like stretchable rubber bands.
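As an illustrative sketch only (not the claimed implementation), the following Python snippet shows one way a motion trajectory could be sampled from a quadratic Bezier curve whose control points are derived from the starting position and the starting speed; the control-point construction and the function names are assumptions made for this example.

```python
# Illustrative sketch: sampling a trajectory from a quadratic Bezier curve.
# The control-point construction below is an assumption for illustration,
# not the formula claimed by the disclosure.

def quadratic_bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bezier curve at parameter t in [0, 1]."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return (x, y)

def predict_trajectory(start_pos, start_speed, flight_time, samples=60):
    """Sample a motion trail for a virtual ball launched from start_pos.

    start_pos   -- (x, y) starting position of the virtual object
    start_speed -- (vx, vy) starting velocity components
    flight_time -- assumed total flight duration in seconds
    """
    p0 = start_pos
    # Hypothetical control points: the middle point follows the initial
    # velocity, the end point is where the ball is assumed to land.
    p1 = (start_pos[0] + start_speed[0] * flight_time / 2,
          start_pos[1] + start_speed[1] * flight_time / 2)
    p2 = (start_pos[0] + start_speed[0] * flight_time, start_pos[1])
    return [quadratic_bezier(p0, p1, p2, i / (samples - 1))
            for i in range(samples)]
```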
And S103, transmitting the starting position, the starting speed and the starting time.
In the embodiment of the present disclosure, the control right switching request for the virtual object is sent to the second user, so that the second user can hold the control right of the virtual object. At the same time, the starting position, the starting speed and the starting time are transmitted to the second terminal, so that the second terminal can predict, according to the starting position, the starting speed, the starting time and the network delay, the position at which the virtual object displayed by the first terminal will be located when the prediction is transmitted back to the first terminal.
In one embodiment of the present disclosure, the first terminal transmits a start position, a start speed, and a start time to the second terminal, including: the first terminal transmits the start position, start speed and start time to the game server, and the game server transmits the start position, start speed and start time to the second terminal. Further, the first terminal sends a control right switching request of the virtual object to the game server, and the game server sends the control right switching request of the virtual object to the second terminal. After the second terminal receives the control right switching request, the second terminal responds to the control right switching request to enable the second user to successfully obtain the control right of the virtual object.
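A minimal sketch of the relay described above, assuming a hypothetical send_to_server transport helper; the message field names are placeholders chosen for illustration.

```python
# Minimal sketch of the first terminal reporting the hit; `send_to_server`
# and the field names are hypothetical placeholders, not a real API.
import json
import time

def send_hit_event(send_to_server, start_pos, start_speed):
    """First terminal: report the hit and request a control right switch."""
    message = {
        "type": "control_switch_request",
        "start_position": start_pos,   # position of the ball at the hit
        "start_speed": start_speed,    # velocity imparted to the ball
        "start_time": time.time(),     # timestamp of the hit operation
    }
    # The game server is expected to forward this payload to the second terminal.
    send_to_server(json.dumps(message))
```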
It should be noted that, in the embodiment of the present disclosure, the execution order of step S102 and step S103 is not limited: step S102 may be performed before step S103, step S103 may be performed before step S102, or steps S102 and S103 may be performed simultaneously.
S104, after receiving a predicted position, determining the current position of the virtual object, wherein the predicted position is predicted by a second terminal based on the starting position, the starting speed, the starting time and a preset network delay.
In one embodiment of the present disclosure, after the second user successfully holds the control right of the virtual object, the second terminal predicts the motion trail of the virtual object based on the start position, start time, start speed and network delay of the virtual object. It should be noted that, the manner of determining the motion trail by the second terminal is the same as the manner of determining the motion trail by the first terminal, and specific reference may be made to the description in the above embodiment, which is not repeated in the embodiment of the present disclosure.
In one embodiment of the disclosure, the network delay includes a first network delay and a second network delay, where the first network delay is a network delay caused in a process that the first terminal sends a start position, a start time and a start speed to the second terminal, and the second network delay is a network delay caused in a process that the second terminal sends the motion trail to the first terminal.
The predicted position refers to a position of a virtual object in a virtual scene displayed by the first terminal when the second terminal sends the predicted motion trail to the first terminal based on the starting position, the starting time, the starting speed and the network delay. It should be noted that, the predicted position determined by the second terminal is only one position predicted by the second terminal, that is, a position where the virtual object may arrive, and does not represent an actual position of the virtual object in the virtual scene corresponding to the first terminal.
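To make the prediction step concrete, the sketch below advances the sampled trajectory by the elapsed time plus the preset network delay; it reuses the trajectory sampling sketched earlier and is an illustration under those same assumptions, not the claimed formula.

```python
import time

def predict_position(trajectory, start_time, preset_delay, flight_time):
    """Second terminal: estimate where the first terminal's ball will be
    displayed by the time this prediction arrives back at the first terminal.

    trajectory   -- list of (x, y) points sampled uniformly over flight_time
    start_time   -- starting time reported by the first terminal
    preset_delay -- preset network delay (uplink plus downlink), in seconds
    flight_time  -- assumed total flight duration used when sampling
    """
    # Time the ball will have been flying once the prediction is received.
    elapsed = (time.time() - start_time) + preset_delay
    # Map the elapsed time to an index on the sampled trajectory.
    frac = min(max(elapsed / flight_time, 0.0), 1.0)
    index = int(frac * (len(trajectory) - 1))
    return trajectory[index]
```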
S105, displaying the virtual scene based on the current position, the predicted position and the motion trail.
In one embodiment of the present disclosure, the displaying the virtual scene based on the current position, the predicted position, and the motion trajectory includes: and if the predicted position and the current position are the same position, displaying the virtual scene based on the motion trail.
In one embodiment of the present disclosure, it is determined whether the current position and the predicted position are the same position; if they are the same position, the virtual object is controlled to move from the current position (the predicted position) according to the motion trail, that is, the virtual scene continues to be displayed in the existing display mode, and the display mode of the virtual scene is not changed.
In the embodiment of the disclosure, whether the current position and the predicted position are the same position is determined, if the current position and the predicted position are not the same position, the virtual object can be controlled to be directly pulled back to the predicted position from the current position, and the virtual scene is displayed according to the motion trail by taking the predicted position as a starting point.
In one embodiment of the present disclosure, if the current position and the predicted position are not the same position and the virtual object were controlled to be directly pulled back from the current position to the predicted position, the motion track in the virtual scene corresponding to the first terminal would be discontinuous, that is, the object would jump directly from the current position to the predicted position, which degrades the viewing experience of the user.
In one embodiment of the present disclosure, if the current position and the predicted position are not the same position, the display mode of the virtual scene, that is, the display mode of the video frames corresponding to the virtual scene, is controlled according to the positional relationship between the current position and the predicted position, so that the virtual object gradually approaches the predicted position from the current position; finally, after a point on the actual moving track of the virtual object coincides with the track point determined by moving from the predicted position along the motion track, the virtual object is controlled to move according to the motion track. It can be understood that the display mode of the video frames corresponding to the virtual scene is controlled to achieve the effect that the virtual object chases the predicted position from the current position, or the effect that the virtual object moves slowly at the current position and waits until the predicted position catches up.
In one embodiment of the present disclosure, determining whether the current position and the predicted position are the same position may include: judging whether the coordinate point corresponding to the current position and the coordinate point corresponding to the predicted position are the same coordinate point.
The embodiment of the present disclosure provides a display method, which includes the following steps: determining a starting position, a starting speed and a starting time of the virtual object in response to an operation on the virtual object; determining a motion trail according to the starting position and the starting speed, and displaying the virtual scene according to the motion trail; transmitting the starting position, the starting speed and the starting time; after receiving a predicted position, determining the current position of the virtual object, wherein the predicted position is predicted by a second terminal based on the starting position, the starting speed, the starting time and a preset network delay; and displaying the virtual scene based on the current position, the predicted position and the motion trail. In the embodiment of the present disclosure, after responding to the operation on the virtual object, the first terminal obtains the starting position, the starting speed and the starting time, predicts and displays the motion trail, and sends the starting position, the starting speed and the starting time to the second terminal; the second terminal sends the predicted position to the first terminal, so that the first terminal can display the virtual scene according to the predicted position and the motion trail, and the motion trail of the virtual object in the virtual scene is controlled and displayed, making the motion trail smoother and improving the viewing experience of the user.
Based on the foregoing embodiment, step S105 is further optimized. The optimized step S105 of displaying the virtual scene mainly includes: when the predicted position and the current position are not the same position, displaying the virtual scene at a preset speed until, after a certain time interval, the position of the virtual object in the virtual scene displayed at the preset speed is the same as a target position in the virtual scene, where the target position is related to the predicted position, the certain time interval and the motion trail.
The preset speed may be determined according to the positional relationship between the current position and the predicted position. Specifically, when the current position is located before the predicted position, the preset speed may be smaller than the existing speed, that is, the virtual object waits for the predicted position to catch up; in this case the preset speed is a speed that controls the virtual object to slow down. When the current position is located after the predicted position, the preset speed may be greater than the existing speed, that is, the virtual object needs to reach the target position within the time interval; in this case the preset speed is a speed that controls the virtual object to move faster. It should be noted that, taking the starting point of the virtual object as the coordinate origin (0, 0), when the abscissa of the current position is greater than the abscissa of the predicted position, the current position is located before the predicted position; when the abscissa of the current position is smaller than the abscissa of the predicted position, the current position is located after the predicted position. As shown in fig. 3, on the motion trail displayed in the virtual scene (curve L in fig. 3), the starting position of the virtual object is P0, the current position is P1, and the predicted position is P2; the predicted position P2 lags behind the current position P1 on the trail, so the object displayed at the current position P1 needs to move slowly and wait for the predicted position P2 to catch up. It should be noted that, in the embodiment of the present disclosure, the motion trail is not displayed as a curve in the virtual scene; rather, the position of the virtual object is displayed frame by frame in the virtual environment. That is, each video frame shows the virtual object at a corresponding position, a series of video frames is displayed continuously, and the visual impression of the virtual object moving is formed.
Specifically, the virtual scene is displayed at the preset speed, the virtual object keeps moving at the preset speed, and after a certain time interval the virtual object is located at a first position in the virtual scene. While the virtual object keeps moving at the preset speed, the predicted position also keeps moving along the motion track, and the target position to which it moves is determined by the motion track and the certain time interval. Specifically, the target position is the position reached in the virtual scene after moving from the predicted position along the motion track for the certain time interval in the normal video frame display mode. It should be noted that the target position is merely a positional concept and is not displayed in the virtual scene.
When the first position and the target position are the same, the virtual scene is displayed in the normal video frame display mode, and at this moment the virtual object in the virtual scene moves from the target position according to the motion track; that is, the virtual object starts to move according to the motion track only when the first position is the same as the target position, so that the continuity of the motion curve is ensured.
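A sketch of the catch-up decision described above, comparing the current position with the predicted position along the direction of travel; the speed factors used are illustrative values, not the ones claimed by the disclosure.

```python
def choose_display_speed(current_pos, predicted_pos, velocity, normal_speed):
    """Decide how fast to play back the virtual scene so that the displayed
    virtual object converges onto the predicted position.

    Returns a playback speed; normal_speed means the ordinary frame rate.
    """
    # Project the offset onto the current velocity to tell whether the
    # displayed object is ahead of or behind the predicted position.
    dx = predicted_pos[0] - current_pos[0]
    dy = predicted_pos[1] - current_pos[1]
    along_track = dx * velocity[0] + dy * velocity[1]

    if along_track < 0:
        # Current position is ahead of the prediction: slow down, e.g. by
        # inserting intermediate frames, and wait to be caught up.
        return normal_speed * 0.5   # illustrative factor
    if along_track > 0:
        # Current position is behind the prediction: speed up, e.g. by
        # skipping frames, until the target position is reached.
        return normal_speed * 2.0   # illustrative factor
    return normal_speed             # same position: keep the normal speed
```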
In one embodiment of the present disclosure, the predicted location and the current location are not the same location, comprising: an included angle between a first vector and a current speed direction of the virtual object is greater than 90 degrees or smaller than 90 degrees, wherein the first vector is a vector formed by the predicted position and the current position.
The embodiment of the present disclosure provides a method for judging whether the current position and the predicted position of the virtual object are the same position. Specifically, if the included angle between the prediction vector AB formed by the current position A and the predicted position B and the velocity direction v is greater than 90 degrees, that is, AB · v < 0, it can be determined that the current position of the virtual object is not the same position as the predicted position.
It should be noted that the above determination method may also be used to judge that the current position and the predicted position are the same position.
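The angle test above reduces to the sign of a dot product. The source states the not-same condition in two slightly different forms (an angle greater than 90 degrees, or an angle greater than or smaller than 90 degrees); the sketch below follows the latter and treats a zero projection as the same position, which is an interpretation rather than the claimed rule.

```python
def is_same_position(current_pos, predicted_pos, velocity, eps=1e-9):
    """Return True when the current and predicted positions are taken as the same.

    The positions are treated as different when the vector AB from the
    current position A to the predicted position B makes an angle greater
    than or smaller than 90 degrees with the current velocity, i.e. when
    the dot product AB . v is non-zero.
    """
    ab = (predicted_pos[0] - current_pos[0], predicted_pos[1] - current_pos[1])
    dot = ab[0] * velocity[0] + ab[1] * velocity[1]
    return abs(dot) <= eps
```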
In one embodiment of the present disclosure, the displaying the virtual scene at the preset speed includes: when the current position is located before the predicted position, controlling the virtual scene to be displayed in a preset frame-insertion mode, where the preset frame-insertion mode includes inserting an intermediate frame between a first frame and a second frame, and the first frame and the second frame are any two adjacent frames.
In an embodiment of the present disclosure, when the current position is located before the predicted position (a positional relationship as shown in fig. 3), during the virtual scene display, in order to allow the predicted position to catch up with the current position, the virtual object may be slowly moved from the current position. I.e. the movement speed of the virtual object displayed in the virtual scene becomes slow. The predicted position moves according to the motion track and the normal speed, so that the predicted position can catch up with the current position in a certain time interval. Wherein the normal speed is carried in the motion trajectory.
In one embodiment of the present disclosure, the virtual object is controlled to slowly move from the current position in such a way that the virtual scene is displayed in an interpolated manner starting from the video frame where the current position is located. The frame insertion means that a video frame is inserted between two video frames for display. In the embodiment of the disclosure, the first frame and the second frame may be any set of adjacent video frames in a preset video frame sequence. The preset video frame sequence refers to all video frames between the video frames at the current position and the target position after a certain time interval.
In the embodiment of the present disclosure, before frame insertion the virtual object moves forward by 1 millimeter over the original 72 frames; after intermediate frames are inserted, the virtual object moves forward by 1 millimeter over 144 frames, which realizes the effect of the virtual object moving slowly and makes the motion curve smoother.
In the embodiment of the present disclosure, the number of intermediate frames interposed between the first frame and the second frame is not limited, and may be defined according to actual situations. Alternatively, an intermediate frame may be interposed between the first frame and the second frame in order to secure the display effect.
In one embodiment of the present disclosure, the position of the virtual object in the intermediate frame is the same as the position of the virtual object in the first frame. Specifically, the picture content of the intermediate frame is the same as the picture content of the first frame. For example, if the original video frame display sequence is 1, 2, 3, 4, 5, the display sequence after frame insertion in the manner provided by the embodiment of the present disclosure is 1, 1, 2, 2, 3, 3, 4, 4, 5, 5; in other words, each video frame in the preset video frame sequence is displayed twice. If originally the virtual object moves forward 1 mm per video frame, then after frame insertion in this embodiment it moves forward 1 mm every 2 video frames.
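The duplicated display sequence above can be produced with a short helper; a minimal sketch:

```python
def duplicate_frames(frames, repeats=2):
    """Display each frame `repeats` times, e.g. [1, 2, 3] -> [1, 1, 2, 2, 3, 3]."""
    return [frame for frame in frames for _ in range(repeats)]

# Example from the text: 1,2,3,4,5 becomes 1,1,2,2,3,3,4,4,5,5.
print(duplicate_frames([1, 2, 3, 4, 5]))
```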
In one embodiment of the present disclosure, the position of the virtual object in the intermediate frame is between the position of the virtual object in the first frame and the position of the virtual object in the second frame. Specifically, other picture contents in the intermediate frame are the same as those of the first frame except for the position where the virtual object is located. The position of the virtual object included in the intermediate frame is a position located between the position of the virtual object in the first frame and the position of the virtual object in the second frame. As shown in fig. 4, the position of the virtual object in the first frame is P11, the position of the virtual object in the second frame is P12, and the position of the virtual object included in the intermediate frame is any one position between P11 and P12. Alternatively, the midpoint between the two points P11 and P12 is calculated as the position of P13.
In the embodiment of the present disclosure, a position between the two positions is selected as the position of the virtual object in the intermediate frame, that is, frame insertion is realized by interpolation, so that the curve is smoother.
In one embodiment of the present disclosure, the position of the virtual object in the intermediate frame is determined by determining the position of the virtual object in the first frame and the instantaneous speed change rate of the virtual object in the first frame, where the instantaneous speed change rate of the virtual object is obtained by performing differential calculation on the instantaneous speed of the virtual object in the first frame.
In the embodiment of the disclosure, differential calculation is performed based on the instantaneous speed of the virtual object in the first frame to obtain an instantaneous speed change rate, and the sum of the position of the virtual object in the first frame and the instantaneous speed change rate is used as the position of the virtual object in the intermediate frame. In the embodiment of the disclosure, the position of the virtual object is determined by adopting a differential calculation mode, so that the motion curve is smoother.
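The three ways of setting the position of the virtual object in an inserted intermediate frame (repeating the first frame's position, taking the midpoint, or advancing the first-frame position by the differentiated instantaneous speed) can be sketched as follows; the "rate" variant in particular is an interpretation of the description and is marked as such in the comments.

```python
def intermediate_frame_position(pos_first, pos_second, inst_speed=None,
                                frame_dt=None, strategy="midpoint"):
    """Position of the virtual object in an inserted intermediate frame.

    strategy == "repeat"   : reuse the position from the first frame
    strategy == "midpoint" : interpolate halfway between the two frames
    strategy == "rate"     : advance the first-frame position using the
                             instantaneous speed of the first frame over
                             half a frame interval (an interpretation of
                             the differential-calculation variant)
    """
    if strategy == "repeat":
        return pos_first
    if strategy == "midpoint":
        return ((pos_first[0] + pos_second[0]) / 2,
                (pos_first[1] + pos_second[1]) / 2)
    if strategy == "rate":
        return (pos_first[0] + inst_speed[0] * frame_dt / 2,
                pos_first[1] + inst_speed[1] * frame_dt / 2)
    raise ValueError(f"unknown strategy: {strategy}")
```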
In one embodiment of the present disclosure, the displaying the virtual scene at a preset speed includes: and when the current position is located behind the predicted position, controlling the virtual scene to be displayed according to a preset frame skip mode.
In one embodiment of the present disclosure, the manner in which the virtual object is controlled to move rapidly from the current position is that the virtual scene is displayed in a frame-skip manner starting from the video frame where the current position is located. Frame skipping refers to selecting a plurality of discontinuous video frames in a preset video frame sequence to display, rather than displaying all video frame sequences in sequence.
In the embodiment of the present disclosure, one preset frame-skip mode is to display only the video frames with odd sequence numbers and not those with even sequence numbers. For example, if the original video frame display sequence is 1, 2, 3, 4, 5, the display sequence after frame skipping in the mode provided by the embodiment of the present disclosure is 1, 3, 5. Another preset frame-skip mode is to display only the video frames with even sequence numbers and not those with odd sequence numbers. Further, the preset frame-skip mode may also be to display 1 frame and skip 2 frames, or to display 1 frame and skip 3 frames. The frame-skip mode is not specifically limited in the embodiments of the present disclosure.
In the embodiment of the present disclosure, before frame skipping the virtual object moves forward by 1 millimeter over the original 72 frames; after frame skipping, the virtual object moves forward by 1 millimeter over 36 video frames, which realizes the effect of the virtual object moving quickly and makes the motion curve smoother.
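A minimal sketch of the frame-skipping playback described above; keeping every other frame matches the 1, 3, 5 example, while other skip patterns simply change the stride.

```python
def skip_frames(frames, keep_every=2):
    """Return a frame-skipped display sequence.

    keep_every=2 keeps every other frame, so the object appears to move
    twice as fast; keep_every=3 keeps one frame out of every three.
    """
    return frames[::keep_every]

# Example from the text: 1,2,3,4,5 becomes 1,3,5 with keep_every=2.
print(skip_frames([1, 2, 3, 4, 5]))
```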
In one implementation of the present disclosure, fig. 5 is a flowchart of a display method in an embodiment of the disclosure, where the embodiment may be applicable to the case of optimizing a motion trajectory displayed in a virtual scene, and the method may be performed by a display device, which may be implemented in software and/or hardware. The display method is applied to a second terminal, where the second terminal is a terminal held by a second user, and the first user and the second user are the two opposing parties in the match.
As shown in fig. 5, the display method provided in the embodiment of the present disclosure mainly includes steps S201 to S203.
S201, receiving the starting position, the starting speed and the starting time of the virtual object;
S202, predicting the predicted position of the virtual object based on the starting position, the starting speed, the starting time and the preset network delay.
In one embodiment of the present disclosure, the motion profile is predicted based on the start position, the start speed, the start time, and a preset network delay. When the motion trail is predicted, the influence of network delay on the motion trail is considered, the delay sense is further reduced, and the use experience of a user is improved.
In one embodiment of the present disclosure, the motion trajectory of the virtual object in the game scene may be generated according to a Bezier curve. The Bezier curve, also written Bézier curve, is a mathematical curve used in two-dimensional graphics applications; it consists of line segments and nodes, where the nodes are draggable pivot points and the line segments behave like stretchable rubber bands.
In one embodiment of the disclosure, the network delay includes a network delay caused in a process of transmitting data from the first terminal to the second terminal, and a network delay caused in a process of transmitting data from the second terminal to the first terminal.
In the embodiment of the present disclosure, in the process of predicting the motion trail, the second terminal considers the network delay caused when the first terminal sends the control right switching request to the second terminal, so that the problem of the virtual object being pulled back to the serve point by the second motion trail is avoided and the motion trail is smoother.
In the embodiment of the present disclosure, in the process of predicting the second motion trail, the network delay caused when the second terminal sends the second motion trail to the first terminal is also considered, so that the predicted starting point of the second motion trail is closer to the actual position of the virtual object on the first motion trail and the motion curve is smoother.
In one embodiment of the present disclosure, the network delay is determined by a delay of a preset number of video frames.
In one embodiment of the present disclosure, the network delay is a rolling average over a plurality of video frames. In implementations of the present disclosure, the rolling average over the plurality of video frames may be a rolling average over the most recent 30 video frames.
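A sketch of such a rolling-average delay estimate, assuming per-frame delay samples are available; the 30-frame window follows the example given in the text.

```python
from collections import deque

class RollingDelayEstimator:
    """Rolling average of per-frame network delay samples."""

    def __init__(self, window=30):          # 30-frame window as in the text
        self.samples = deque(maxlen=window)

    def add_sample(self, delay_seconds):
        """Record the delay measured for the latest video frame."""
        self.samples.append(delay_seconds)

    def current_delay(self):
        """Return the rolling average, or 0.0 before any sample arrives."""
        if not self.samples:
            return 0.0
        return sum(self.samples) / len(self.samples)
```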
In one embodiment of the present disclosure, the number of game state reads per second is increased while the network delay is being acquired.
In one embodiment of the disclosure, the current position of the virtual object refers to an actual position of the virtual object when the second motion track sent by the second terminal is received in a process that the virtual object moves according to the first motion track. Wherein the current position may be represented in the form of coordinates.
And S203, the predicted position is sent, so that the first terminal displays the virtual scene after receiving the predicted position.
In the embodiment of the present disclosure, after the predicted position is sent to the first terminal, the first terminal displays the virtual scene after receiving the predicted position, so that the virtual object is prevented from being pulled back to the starting point and the motion curve is smoother.
Fig. 6 is a schematic structural diagram of a display device according to an embodiment of the present disclosure, where the embodiment may be applicable to the case of optimizing a motion trajectory displayed in a virtual scene, and the display device may be implemented in software and/or hardware. The device is configured in a first terminal, where the first terminal is a terminal device held by a first user and is used for constructing a game scene, and the game scene includes a virtual object.
As shown in fig. 6, a display device 60 provided in an embodiment of the present disclosure mainly includes: the virtual object parameter determination module 61, the virtual scene first display module 62, the virtual object parameter transmission module 63, the current position determination module 64, and the virtual scene second display module 65.
A virtual object parameter determining module 61, configured to determine a start position, a start speed, and a start time of the virtual object in response to an operation on the virtual object;
The virtual scene first display module 62 is configured to determine a motion track according to the start position and the start speed, and display the virtual scene according to the motion track;
A virtual object parameter sending module 63, configured to send the start position, the start speed, and the start time;
A current position determining module 64, configured to determine a current position of the virtual object after receiving a predicted position, where the predicted position is predicted by a second terminal based on the start position, the start speed, the start time, and a preset network delay;
And a virtual scene second display module 65, configured to display the virtual scene based on the current position, the predicted position, and the motion trail.
In one embodiment of the present disclosure, the virtual scene second display module 65 is specifically configured to display the virtual scene based on the motion trail if the predicted position and the current position are the same position.
In one embodiment of the present disclosure, the virtual scene second display module 65 is specifically configured to display, when the predicted position and the current position are not the same position, the virtual scene at a preset speed until, after a certain time interval, a position of a virtual object in the virtual scene displayed at the preset speed is the same as a target position in the virtual scene, where the target position is related to the predicted position, the certain time interval, and the motion trail.
In one embodiment of the present disclosure, the predicted location and the current location are not the same location, comprising: an included angle between a first vector and a current speed direction of the virtual object is greater than 90 degrees, wherein the first vector is a vector formed by the predicted position and the current position.
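Since an angle greater than 90 degrees between two vectors is equivalent to a negative dot product, the check can be sketched as below. The orientation chosen for the first vector (from the current position toward the predicted position) and the 2D tuple representation are assumptions for illustration.

```python
def positions_diverge(predicted, current, velocity):
    """Return True when the angle between the first vector and the current
    speed direction exceeds 90 degrees, i.e. their dot product is negative."""
    first_vector = (predicted[0] - current[0], predicted[1] - current[1])
    dot = first_vector[0] * velocity[0] + first_vector[1] * velocity[1]
    return dot < 0
```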
In one embodiment of the present disclosure, the virtual scene second display module 65 includes: and the frame inserting display unit is used for controlling the virtual scene to be displayed according to a preset frame inserting mode when the current position is positioned in front of the predicted position, wherein the preset frame inserting mode comprises the step of inserting an intermediate frame between a first frame and a second frame, and the first frame and the second frame are any two adjacent frames.
In one embodiment of the present disclosure, the position of the virtual object in the intermediate frame is the same as the position of the virtual object in the first frame.
In one embodiment of the present disclosure, the position of the virtual object in the intermediate frame is between the position of the virtual object in the first frame and the position of the virtual object in the second frame.
In one embodiment of the present disclosure, the position of the virtual object in the intermediate frame is determined from the position of the virtual object in the first frame and the instantaneous speed change rate of the virtual object in the first frame, where the instantaneous speed change rate of the virtual object is obtained by differentiating the instantaneous speed of the virtual object in the first frame.
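Differentiating the instantaneous speed of the first frame yields an acceleration-like quantity, so the intermediate-frame position can be sketched as a short second-order extrapolation from the first frame. The half-frame time step and the kinematic form below are assumptions made purely for illustration.

```python
def intermediate_frame_position(pos_first, vel_first, accel_first, frame_dt):
    """Extrapolate the virtual object half a frame beyond the first frame.

    accel_first is the instantaneous rate of change of speed obtained by
    differentiating the first frame's instantaneous speed; the half-frame
    step and the second-order form are illustrative assumptions.
    """
    t = frame_dt / 2.0
    return tuple(p + v * t + 0.5 * a * t * t
                 for p, v, a in zip(pos_first, vel_first, accel_first))
```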
In one embodiment of the present disclosure, the virtual scene second display module 65 includes: and the frame-skipping display unit is used for controlling the virtual scene to be displayed according to a preset frame-skipping mode when the current position is located behind the predicted position.
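Taken together with the frame-insertion unit above, the choice between slowing the display down (inserting frames) and speeding it up (skipping frames) can be summarised as in the following sketch. Projecting the offset onto the direction of travel to decide "before" versus "behind" the predicted position is an assumption, as are the mode names returned.

```python
def choose_display_mode(current, predicted, velocity):
    """Rough sketch of selecting frame insertion or frame skipping.

    A positive projection of (current - predicted) onto the travel direction
    is read as "current position before (ahead of) the predicted position",
    a negative one as "behind"; this projection rule is an assumption.
    """
    offset = (current[0] - predicted[0], current[1] - predicted[1])
    along_track = offset[0] * velocity[0] + offset[1] * velocity[1]
    if along_track > 0:
        return "insert_frames"   # displayed object is ahead: slow it down
    if along_track < 0:
        return "skip_frames"     # displayed object is behind: catch it up
    return "no_adjustment"
```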
The display device provided in the embodiment of the present disclosure may perform the steps performed in the display method provided in the embodiment of the present disclosure, and the performing steps and the beneficial effects are not described herein again.
Fig. 7 is a schematic structural diagram of a display device according to an embodiment of the present disclosure, where the embodiment may be applicable to a case of optimizing a motion trajectory displayed in a virtual scene, and the display device may be implemented in a software and/or hardware manner. The device is configured at the second terminal.
As shown in fig. 7, a display device 70 provided in an embodiment of the present disclosure mainly includes: a first parameter receiving module 71, a position predicting module 72 and a predicted position transmitting module 73.
A first parameter receiving module 71, configured to receive a start position, a start speed, and a start time of the virtual object; a position prediction module 72, configured to predict a predicted position of the virtual object based on the start position, the start speed, the start time, and a preset network delay; and the predicted position sending module 73 is configured to send the predicted position, so that the first terminal displays the virtual scene after receiving the predicted position.
In one embodiment of the disclosure, the network delay includes a network delay caused in a process of transmitting data from the first terminal to the second terminal, and a network delay caused in a process of transmitting data from the second terminal to the first terminal.
In one embodiment of the present disclosure, the network delay is determined by a delay of a preset number of video frames.
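If the delay is expressed as a preset number of video frames, converting it into seconds only requires the frame rate, as in the sketch below; the 60 fps default is an illustrative assumption and not part of the disclosure.

```python
def delay_from_frames(num_frames, fps=60):
    """Convert a preset number of video frames into a delay in seconds."""
    return num_frames / float(fps)
```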
The display device provided in the embodiment of the present disclosure may perform the steps performed in the display method provided in the embodiment of the present disclosure, and the performing steps and the beneficial effects are not described herein again.
Fig. 8 is a schematic structural diagram of an electronic device in an embodiment of the disclosure. Referring now in particular to fig. 8, a schematic diagram of an electronic device 800 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 800 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), wearable terminal devices, and the like, and fixed terminals such as digital TVs, desktop computers, smart home devices, and the like. The electronic device shown in fig. 8 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 8, the electronic device 800 may include a processing means (e.g., a central processor, a graphic processor, etc.) 801 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage means 808 into a Random Access Memory (RAM) 803 to implement a display method of an embodiment as described in the present disclosure. In the RAM 803, various programs and data required for the operation of the terminal device 800 are also stored. The processing device 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
In general, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, etc.; storage 808 including, for example, magnetic tape, hard disk, etc.; communication means 809. The communication means 809 may allow the terminal device 800 to communicate wirelessly or by wire with other devices to exchange data. While fig. 8 shows a terminal device 800 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts, thereby implementing the display method as described above. In such an embodiment, the computer program may be downloaded and installed from a network via communication device 809, or installed from storage device 808, or installed from ROM 802. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 801.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some embodiments, the client, server, etc. may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: determining a start position, a start speed and a start time of the virtual object in response to an operation on the virtual object; determining a motion trail according to the initial position and the initial speed, and displaying the virtual scene according to the motion trail; transmitting the starting position, the starting speed and the starting time; after receiving a predicted position, determining the current position of the virtual object, wherein the predicted position is predicted by a second terminal based on the starting position, the starting speed, the starting time and a preset network delay; and displaying the virtual scene based on the current position, the predicted position and the motion trail.
Alternatively, the terminal device may perform other steps described in the above embodiments when the above one or more programs are executed by the terminal device.
The computer-readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: receiving a starting position, a starting speed and a starting time of a virtual object; predicting a predicted position of the virtual object based on the starting position, the starting speed, the starting time and a preset network delay; and sending the predicted position so that the first terminal can display the virtual scene after receiving the predicted position. Alternatively, the terminal device may perform other steps described in the above embodiments when the above one or more programs are executed by the terminal device.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object oriented programming languages such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided a display method applied to a first terminal for displaying a virtual scene including a virtual object therein, the method including: determining a start position, a start speed and a start time of the virtual object in response to an operation on the virtual object; determining a motion trail according to the initial position and the initial speed, and displaying the virtual scene according to the motion trail; transmitting the starting position, the starting speed and the starting time; after receiving a predicted position, determining the current position of the virtual object, wherein the predicted position is predicted by a second terminal based on the starting position, the starting speed, the starting time and a preset network delay; and displaying the virtual scene based on the current position, the predicted position and the motion trail.
According to one or more embodiments of the present disclosure, there is provided a display method applied to a second terminal, the method including: receiving a starting position, a starting speed and a starting time of a virtual object; predicting a predicted position of the virtual object based on the starting position, the starting speed, the starting time and a preset network delay; and sending the predicted position so that the first terminal can display the virtual scene after receiving the predicted position.
According to one or more embodiments of the present disclosure, there is provided a display apparatus configured at a first terminal, the first terminal being used for displaying a virtual scene including a virtual object, the apparatus including: a virtual object parameter determining module, configured to determine a start position, a start speed, and a start time of the virtual object in response to an operation on the virtual object; a virtual scene first display module, configured to determine a motion trail according to the start position and the start speed, and display the virtual scene according to the motion trail; a virtual object parameter sending module, configured to send the start position, the start speed and the start time; a current position determining module, configured to determine the current position of the virtual object after receiving a predicted position, where the predicted position is obtained by the second terminal through prediction based on the start position, the start speed, the start time and a preset network delay; and a virtual scene second display module, configured to display the virtual scene based on the current position, the predicted position and the motion trail.
According to one or more embodiments of the present disclosure, there is provided a display device configured to a second terminal, the device including: the first parameter receiving module is used for receiving the starting position, the starting speed and the starting time of the virtual object; the position prediction module is used for predicting the predicted position of the virtual object based on the starting position, the starting speed, the starting time and the preset network delay; and the predicted position sending module is used for sending the predicted position so that the first terminal can display the virtual scene after receiving the predicted position.
According to one or more embodiments of the present disclosure, the present disclosure provides an electronic device comprising:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any of the display methods as provided by the present disclosure.
According to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a display method as any of the present disclosure provides.
The disclosed embodiments also provide a computer program product comprising a computer program or instructions which, when executed by a processor, implement a display method as described above.
The foregoing description is only of the preferred embodiments of the present disclosure and a description of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, embodiments formed by mutually substituting the above features with technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (17)

1. A display method, wherein the method is applied to a first terminal, the first terminal is used for displaying a virtual scene, and the virtual scene includes a virtual object, and the method includes:
Determining a start position, a start speed and a start time of the virtual object in response to an operation on the virtual object;
determining a motion trail according to the initial position and the initial speed, and displaying the virtual scene according to the motion trail;
Transmitting the starting position, the starting speed and the starting time;
after receiving a predicted position, determining the current position of the virtual object, wherein the predicted position is predicted by a second terminal based on the starting position, the starting speed, the starting time and a preset network delay;
and displaying the virtual scene based on the current position, the predicted position and the motion trail.
2. The method of claim 1, wherein the displaying the virtual scene based on the current location, the predicted location, and the motion profile comprises:
And if the predicted position and the current position are the same position, displaying the virtual scene based on the motion trail.
3. The method of claim 1, wherein the displaying the virtual scene based on the current location, the predicted location, and the motion profile comprises:
And displaying the virtual scene according to a preset speed when the predicted position and the current position are not the same position, until the position of the virtual object in the virtual scene displayed according to the preset speed is the same as the target position in the virtual scene after a certain time interval, wherein the target position is related to the predicted position, the certain time interval and the motion trail.
4. A method according to claim 3, wherein the predicted location and the current location are not the same location, comprising: an included angle between a first vector and a current speed direction of the virtual object is greater than 90 degrees, wherein the first vector is a vector formed by the predicted position and the current position.
5. A method according to claim 3, wherein said displaying said virtual scene at a preset speed comprises:
When the current position is located before the predicted position, controlling the virtual scene to be displayed according to a preset frame inserting mode, wherein the preset frame inserting mode comprises inserting an intermediate frame between a first frame and a second frame, and the first frame and the second frame are any two adjacent frames.
6. The method of claim 5, wherein the location of the virtual object in the intermediate frame is the same as the location of the virtual object in the first frame.
7. The method of claim 5, wherein the location of the virtual object in the intermediate frame is between the location of the virtual object in the first frame and the location of the virtual object in the second frame.
8. The method of claim 5, wherein the position of the virtual object in the intermediate frame is determined by a position of the virtual object in the first frame and an instantaneous rate of change of speed of the virtual object in the first frame, wherein the instantaneous rate of change of speed of the virtual object is calculated by differentiating the instantaneous speed of the virtual object in the first frame.
9. A method according to claim 3, wherein said displaying said virtual scene at a preset speed comprises:
and when the current position is located behind the predicted position, controlling the virtual scene to be displayed according to a preset frame skip mode.
10. A display method, wherein the method is applied to a second terminal, the method comprising:
receiving a starting position, a starting speed and a starting time of a virtual object;
predicting a predicted position of the virtual object based on the starting position, the starting speed, the starting time and a preset network delay;
and sending the predicted position so that the first terminal can display the virtual scene after receiving the predicted position.
11. The method of claim 10, wherein the network delay comprises a network delay caused by the first terminal sending data to the second terminal and a network delay caused by the second terminal sending data to the first terminal.
12. The method of claim 10, wherein the network delay is determined by a delay of a preset number of video frames.
13. A virtual object moving device, wherein the device is configured at a first terminal, the first terminal is configured to display a virtual scene, the virtual scene includes a virtual object, and the device includes:
A virtual object parameter determining module, configured to determine a start position, a start speed, and a start time of the virtual object in response to an operation on the virtual object;
The virtual scene first display module is used for determining a motion trail according to the initial position and the initial speed and displaying the virtual scene according to the motion trail;
the virtual object parameter sending module is used for sending the starting position, the starting speed and the starting time;
The current position determining module is used for determining the current position of the virtual object after receiving a predicted position, wherein the predicted position is obtained by predicting a second terminal based on the starting position, the starting speed, the starting time and preset network delay;
And the virtual scene second display module is used for displaying the virtual scene based on the current position, the predicted position and the motion trail.
14. A display device, wherein the device is configured in a second terminal, the device comprising:
the first parameter receiving module is used for receiving the starting position, the starting speed and the starting time of the virtual object;
The position prediction module is used for predicting the predicted position of the virtual object based on the starting position, the starting speed, the starting time and the preset network delay;
and the predicted position sending module is used for sending the predicted position so that the first terminal can display the virtual scene after receiving the predicted position.
15. An electronic device, the electronic device comprising:
one or more processors;
a storage means for storing one or more programs;
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-12.
16. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any one of claims 1-12.
17. A computer program product comprising a computer program or instructions which, when executed by a processor, implements the method of any of claims 1-12.