CN113750530B - Prop control method, device, equipment and storage medium in virtual scene - Google Patents


Info

Publication number
CN113750530B
Authority
CN
China
Prior art keywords
target prop
prop
target
virtual scene
obstacle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111100882.5A
Other languages
Chinese (zh)
Other versions
CN113750530A (en)
Inventor
刘智洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202111100882.5A priority Critical patent/CN113750530B/en
Publication of CN113750530A publication Critical patent/CN113750530A/en
Application granted granted Critical
Publication of CN113750530B publication Critical patent/CN113750530B/en


Classifications

    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/52 — Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/5372 — Controlling the output signals based on the game progress involving additional visual information provided to the game scene, using indicators, for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • A63F 13/573 — Simulating properties, behaviour or motion of objects in the game world using trajectories of game objects, e.g. of a golf ball according to the point of impact
    • A63F 13/577 — Simulating properties, behaviour or motion of objects in the game world using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
    • A63F 2300/308 — Details of the user interface
    • A63F 2300/643 — Computing dynamical parameters of game objects by determining the impact between objects, e.g. collision detection
    • A63F 2300/646 — Computing dynamical parameters of game objects for calculating the trajectory of an object


Abstract

The application relates to a prop control method, device, equipment and storage medium in a virtual scene, and relates to the technical field of virtual scenes. The method comprises the following steps: displaying a virtual scene picture, where the virtual scene picture is a picture of the virtual scene observed from the perspective of a first virtual object, and the first virtual object carries a target prop; controlling the first virtual object to release the target prop in response to receiving a release operation on the target prop; and generating the action effect of the target prop on the other side of the collision surface relative to a first obstacle, in response to the target prop colliding with the first obstacle in the virtual scene, where the collision surface is the surface of the first obstacle with which the target prop collides. This method improves the action effect of the target prop, thereby improving interaction efficiency and interaction effect, reducing the movement operations a user must perform when using the target prop, reducing the occupation of terminal processing resources, and improving the battery endurance of the terminal.

Description

Prop control method, device, equipment and storage medium in virtual scene
Technical Field
The embodiment of the application relates to the technical field of virtual scenes, in particular to a prop control method, device and equipment in a virtual scene and a storage medium.
Background
Throwing-type or launching-type virtual props are provided in many applications that build virtual scenes (such as virtual reality applications, three-dimensional map applications, simulation programs, first-person shooter games, and multiplayer online tactical competitive games) to increase the interest and strategy of operations in the virtual scene; flash bombs are one example.
In the related art, a throwing-type virtual prop works as follows: after the prop is thrown, it exerts an effect at its landing point, and virtual objects within the action range centered on the landing point are affected.
However, in the above related art, the virtual prop can be used in only one way and its effect is limited, so the throwing-type virtual prop performs poorly in use, which affects the interaction efficiency and interaction effect with the virtual object.
Disclosure of Invention
The embodiment of the application provides a prop control method, device, equipment and storage medium in a virtual scene, which can improve the action effect of a target prop, thereby improving the interaction efficiency and interaction effect with the target prop. The technical scheme is as follows:
In one aspect, a method for controlling props in a virtual scene is provided, the method comprising:
displaying a virtual scene picture, wherein the virtual scene picture is a picture for observing a virtual scene from the view angle of a first virtual object, and the first virtual object carries a target prop;
controlling the first virtual object to release the target prop in response to receiving a release operation of the target prop;
generating the action effect of the target prop on the other side of the collision surface relative to a first obstacle in the virtual scene, in response to the target prop colliding with the first obstacle; the collision surface is the surface of the first obstacle with which the target prop collides.
In another aspect, a method for controlling props in a virtual scene is provided, the method comprising:
displaying a virtual scene interface;
displaying a virtual scene picture in the virtual scene interface, wherein the virtual scene picture is a picture for observing a virtual scene from the view angle of a second virtual object;
displaying a picture corresponding to the action effect of a target prop in response to the action position of the target prop being contained in the visual field range of the second virtual object, where the action position of the target prop is located on the other side of the collision surface of a first obstacle with which the target prop collides; the collision surface is the surface of the first obstacle with which the target prop collides, and the action position of the target prop is the position at which the target prop generates its action effect.
In another aspect, there is provided a prop control device in a virtual scene, the device comprising:
the first picture display module is used for displaying a virtual scene picture, wherein the virtual scene picture is a picture for observing a virtual scene from the view angle of a first virtual object, and the first virtual object carries a target prop;
the release module is used for controlling the first virtual object to release the target prop in response to receiving the release operation of the target prop;
an effect generation module, configured to generate the action effect of the target prop on the other side of the collision surface relative to the first obstacle in response to the target prop colliding with the first obstacle in the virtual scene; the collision surface is the surface of the first obstacle with which the target prop collides.
In one possible implementation, the apparatus further includes:
a sight pattern display module for displaying a sight pattern in response to receiving a sighting operation based on the target prop;
the release module is used for controlling the first virtual object to release the target prop based on a moving track in the virtual scene in response to receiving the release operation of the target prop; the movement track is determined based on a throwing position of the target prop and a position indicated by the sight pattern.
In one possible implementation, the movement track is a straight track determined based on the throwing position of the target prop and the position indicated by the sight pattern, or the movement track is a parabolic track determined based on the throwing position of the target prop and the position indicated by the sight pattern.
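The patent does not give code for either track; the following is a minimal sketch, under the assumption that a straight track is a linear interpolation between the throwing position and the position indicated by the sight pattern, and that a parabolic track adds a height offset that vanishes at both endpoints (the apex height and all names are illustrative, not from the patent).

```python
def straight_track(throw_pos, aim_pos, steps=20):
    """Sample points on a straight movement track from the throwing
    position to the position indicated by the sight pattern."""
    (x0, y0, z0), (x1, y1, z1) = throw_pos, aim_pos
    return [
        (x0 + (x1 - x0) * t, y0 + (y1 - y0) * t, z0 + (z1 - z0) * t)
        for t in (i / steps for i in range(steps + 1))
    ]

def parabolic_track(throw_pos, aim_pos, apex_height=2.0, steps=20):
    """Sample points on a parabolic track: linear in the horizontal
    components, with a parabolic height offset that is zero at both
    the throwing position and the aimed position."""
    out = []
    for i, (x, y, z) in enumerate(straight_track(throw_pos, aim_pos, steps)):
        t = i / steps
        # 4ht(1-t) peaks at apex_height when t = 0.5
        out.append((x, y + 4.0 * apex_height * t * (1.0 - t), z))
    return out
```

Either sampler yields the points along which the prop is moved frame by frame after release.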
In one possible implementation, in response to the movement track being a straight-line track, the action position of the target prop is the first collision point between the first obstacle and a ray cast, starting from a first reference point, in the direction opposite to the movement track; the first reference point is the position point obtained by starting from the second collision point, at which the target prop collides with the first obstacle, and extending a target distance along the extension direction of the movement track; the action position of the target prop is the position at which the target prop generates its action effect.
In one possible implementation, in response to the movement track being a parabolic track, the action position of the target prop is the fourth collision point between the first obstacle and a ray cast, starting from a second reference point, in the direction opposite to the extension direction of the tangent to the movement track at the third collision point; the second reference point is the position point obtained by starting from the third collision point, at which the target prop collides with the first obstacle, and extending a target distance along the extension direction of that tangent; the action position of the target prop is the position at which the target prop generates its action effect.
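Both constructions above share one idea: extend past the collision point by a target distance, then cast a ray back toward the obstacle, so the first face the reverse ray meets is the far side of the obstacle. A hedged sketch, reducing the geometry to the prop's travel direction (the obstacle's depth along that direction and all parameter names are assumptions for illustration, not from the patent):

```python
def action_position(collision_point, direction, target_distance, obstacle_depth):
    """Place a reference point `target_distance` past the collision
    point along the (normalized) movement/tangent direction, then walk
    back toward the obstacle; the first face met is the obstacle's far
    face, which becomes the action position of the prop."""
    norm = sum(c * c for c in direction) ** 0.5
    d = tuple(c / norm for c in direction)
    if target_distance <= obstacle_depth:
        # Reference point is still inside the obstacle: the reverse ray
        # never reaches the far side, so no far-side effect is produced.
        return None
    # Reference point = collision + target_distance * d; walking back
    # (target_distance - obstacle_depth) lands exactly on the far face.
    return tuple(p + obstacle_depth * c for p, c in zip(collision_point, d))
```

In a real engine the reverse cast would be a physics ray query against the obstacle's collider rather than this closed-form shortcut; the shortcut only illustrates why the action position ends up on the other side of the collision surface.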
In one possible implementation, the first obstacle is the first obstacle with which the target prop collides in the virtual scene.
In one possible implementation, the apparatus further includes:
an action identifier display module, configured to display an action identifier, where the action identifier is used to indicate that the target prop has generated its action effect.
In one possible implementation, the target prop has a range threshold value that is used to indicate a maximum distance of movement of the target prop in the virtual scene;
the action identifier display module is configured to, in response to the target prop not colliding with the first obstacle and the moving distance of the target prop in the virtual scene reaching the range threshold, display the action identifier of the target prop at the position where the moving distance of the target prop reaches the range threshold.
In one possible implementation, the target prop has a duration threshold that indicates the maximum duration of movement of the target prop from being released to generating an action effect;
the action identifier display module is configured to, in response to the target prop not colliding with the first obstacle and the moving duration of the target prop in the virtual scene reaching the duration threshold, display the action identifier of the target prop at the position reached when the moving duration of the target prop reaches the duration threshold.
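The two thresholds above amount to a per-frame check while the prop is in flight. A minimal sketch (field names such as `travelled` and `range_threshold` are illustrative, not from the patent):

```python
from types import SimpleNamespace

def update_prop(prop, dt):
    """Advance the prop by one frame and check, in order: collision,
    range threshold, duration threshold. When no collision has occurred
    and either limit is reached, the action identifier is displayed at
    the prop's current position."""
    prop.travelled += prop.speed * dt
    prop.elapsed += dt
    if prop.collided:
        return "effect_at_collision"
    if prop.travelled >= prop.range_threshold:
        return "identifier_at_range_limit"
    if prop.elapsed >= prop.duration_threshold:
        return "identifier_at_time_limit"
    return "in_flight"
```

Which limit fires first depends on the prop's speed; a slow prop tends to hit the duration threshold, a fast one the range threshold.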
In one possible implementation, the target prop has a collision point threshold that indicates the maximum number of collision points the target prop can pass through in the virtual scene; a collision point is a point of contact between the target prop and a plane of an obstacle in the virtual scene;
There are N obstacles before the first obstacle; the N obstacles are obstacles in the virtual scene through which the moving track of the target prop passes, and the value of N is determined based on the collision point threshold; N is a positive integer.
In one possible implementation, the action effect of the target prop is to cause visual shielding, for a target duration, on any virtual object whose field of view contains the target prop.
In another aspect, there is provided a prop control device in a virtual scene, the device comprising:
the interface display module is used for displaying a virtual scene interface;
the second picture display module is used for displaying a virtual scene picture in the virtual scene interface, wherein the virtual scene picture is a picture for observing a virtual scene from the view angle of a second virtual object;
a third picture display module, configured to display a picture corresponding to the action effect of the target prop in response to the action position of the target prop being contained in the visual field range of the second virtual object, where the action position of the target prop is located on the other side of the collision surface of a first obstacle with which the target prop collides; the collision surface is the surface of the first obstacle with which the target prop collides, and the action position of the target prop is the position at which the target prop generates its action effect.
In one possible implementation, the action position of the target prop being contained in the visual field range of the second virtual object means that the vertical distance between the action position and the plane where the first position is located is smaller than a first distance threshold, and the horizontal distance between the mapping point and the first position is smaller than a second distance threshold; the first position is the position point of the second virtual object in the virtual scene; the mapping point is the intersection point obtained by dropping a perpendicular from the action position onto the plane where the first position is located.
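The distance test above can be sketched directly, assuming y is the vertical axis (parameter names are illustrative, not from the patent): the perpendicular from the action position to the object's horizontal plane gives the vertical distance, and the in-plane distance from the resulting mapping point to the object gives the horizontal distance.

```python
import math

def action_position_in_view(action_pos, object_pos, first_threshold, second_threshold):
    """Return True when the action position counts as contained in the
    second virtual object's visual field range under the two-threshold
    distance test described above."""
    ax, ay, az = action_pos
    ox, oy, oz = object_pos
    vertical = abs(ay - oy)                    # length of the perpendicular
    horizontal = math.hypot(ax - ox, az - oz)  # mapping point to first position
    return vertical < first_threshold and horizontal < second_threshold
```

In practice an engine would also check occlusion and view direction; this test is only the coarse distance gate the paragraph describes.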
In another aspect, a computer device is provided, where the computer device includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the prop control method in the virtual scenario described above.
In another aspect, a computer readable storage medium is provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by a processor to implement the prop control method in a virtual scenario described above.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the prop control method in the virtual scene provided in the above various alternative implementations.
The technical solution provided in this application can include the following beneficial effects:
After the target prop collides with an obstacle in the virtual scene, its action effect is generated on the other side of the collision surface relative to the obstacle, so the target prop can act on virtual objects shielded by the obstacle. This improves the action effect of the target prop, thereby improving interaction efficiency and interaction effect, reducing the movement operations a user must perform when using the target prop, reducing the occupation of terminal processing resources and terminal power consumption, and improving the battery endurance of the terminal.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic diagram of a display interface of a virtual scene shown according to an exemplary embodiment;
FIG. 2 is a schematic diagram of a virtual scene service system, shown in an exemplary embodiment in accordance with the present application;
FIG. 3 is a flow chart illustrating a method for prop control in a virtual scenario according to an embodiment of the present application;
FIG. 4 illustrates a flow chart of a prop control method in a virtual scenario, as illustrated in an exemplary embodiment of the present application;
FIG. 5 illustrates a schematic diagram of a virtual scene screen corresponding to a second virtual object;
FIG. 6 illustrates a flow chart of a prop control method in a virtual scenario, as illustrated in an exemplary embodiment of the present application;
FIG. 7 illustrates a schematic diagram of a second scene screen shown in an exemplary embodiment of the present application;
FIG. 8 is a schematic view showing the relative positional relationship of a first plane and an impact surface according to an exemplary embodiment of the present application;
FIG. 9 illustrates a schematic diagram of an active position confirmation process shown in an exemplary embodiment of the present application;
FIG. 10 illustrates a schematic diagram of an active position confirmation process shown in an exemplary embodiment of the present application;
FIG. 11 illustrates a schematic diagram of an action identifier shown in an exemplary embodiment of the present application;
FIG. 12 illustrates a schematic diagram of a virtual scene screen as illustrated in an exemplary embodiment of the present application;
FIG. 13 is a schematic diagram showing a manner of determining whether a target prop is included in a range of a field of view according to an exemplary embodiment of the present application;
FIG. 14 illustrates a flow chart of a prop control method in a virtual scenario illustrated in an exemplary embodiment of the present application;
FIG. 15 illustrates a block diagram of a prop control device in a virtual scene as illustrated in an exemplary embodiment of the present application;
FIG. 16 illustrates a block diagram of a prop control device in a virtual scenario illustrated in an exemplary embodiment of the present application;
FIG. 17 is a block diagram of a computer device shown in accordance with an exemplary embodiment;
FIG. 18 is a block diagram of a computer device, according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
To increase the interest and strategy of operations in a virtual scene, throwing-type or launching-type virtual props are often provided. To improve the effect of such props, and thereby improve the interaction efficiency and interaction effect between the user and virtual objects in the virtual scene, the embodiments of this application provide a prop control method in a virtual scene that can improve the interaction effect in the virtual scene.
A virtual scene is typically generated by an application in a computer device such as a terminal and presented based on hardware (such as a screen) in the terminal. The terminal can be a mobile terminal such as a smartphone, a tablet computer or an electronic book reader; alternatively, the terminal can be a notebook computer or a stationary personal computer. The virtual scene may be three-dimensional or two-dimensional. Taking a three-dimensional virtual scene as an example, fig. 1 is a schematic diagram of a display interface of a virtual scene shown according to an exemplary embodiment. As shown in fig. 1, the display interface 100 of the virtual scene includes a virtual object 110, an environment screen 120 of the three-dimensional virtual scene, at least one set of virtual control buttons 130, and a virtual object 140. The virtual object 110 may be the object currently controlled by the user account corresponding to the terminal, and the virtual control button 130 is an optional control element: the user may control the virtual object 110 through the virtual control button 130. The virtual object 140 may be a non-user-controlled object, that is, an object controlled by the application program itself, or it may be a virtual object controlled by the user account of another terminal; the user may interact with the virtual object 140 by controlling the virtual object 110, for example, controlling the virtual object 110 to attack the virtual object 140.
In fig. 1, the virtual object 110 and the virtual object 140 are three-dimensional models, and the environment screen of the three-dimensional virtual scene displayed in the display interface 100 consists of objects observed from the perspective of the virtual object 110. As shown in fig. 1, the environment screen 120 displayed under the perspective of the virtual object 110 illustratively includes the earth 124, the sky 125, the horizon 123, the hill 121 and the factory building 122.
The virtual object 110 can move under the control of the user. For example, the virtual control button 130 shown in fig. 1 is a virtual button for controlling the movement of the virtual object 110; when the user touches the virtual control button 130, the virtual object 110 moves in the virtual scene in the direction from the center of the virtual control button 130 toward the touch point.
Virtual objects refer to movable objects in a virtual scene. The movable object may be at least one of a virtual character, a virtual animal, and a cartoon character. When the virtual scene is a three-dimensional virtual scene, the virtual object may be a three-dimensional model. Each virtual object has its own shape and volume in the three-dimensional virtual scene and occupies a portion of the space in it. Optionally, the virtual character is a three-dimensional character constructed based on three-dimensional human-skeleton technology, which presents different appearances by wearing different skins. In some implementations, the virtual character may also be implemented using a 2.5-dimensional or 2-dimensional model, which is not limited by the embodiments of the present application.
Fig. 2 is a schematic structural diagram of a virtual scene service system according to an exemplary embodiment of the present application, and as shown in fig. 2, the system includes: terminal 210, terminal 220, and server 230;
the terminal 210 may be a control terminal corresponding to a virtual object releasing a target prop in the embodiment of the present application, where the terminal 210 may be a mobile phone, a tablet computer, an electronic book reader, a smart glasses, a smart watch, an MP3 player (Moving Picture Experts Group Audio Layer III, dynamic image expert compression standard audio layer 3), an MP4 (Moving Picture Experts Group Audio Layer IV, dynamic image expert compression standard audio layer 4) player, and so on.
The terminal 220 may be a control terminal corresponding to a virtual object other than the virtual object releasing the target prop in the embodiment of the present application, where the terminal 220 may be a mobile phone, a tablet computer, an electronic book reader, smart glasses, a smart watch, an MP3 player, an MP4 player, and so on.
The terminal 210 and the terminal 220 may have applications supporting virtual scenes installed therein, and accordingly, the server 230 may be a server corresponding to the applications supporting virtual scenes.
The terminal is connected with the server through a communication network. Optionally, the communication network is a wired network or a wireless network.
The server 230 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Network, content delivery networks), basic cloud computing services such as big data and artificial intelligence platforms, and the like.
Optionally, the system may further include a management device 240 (not shown in the figure), where the management device 240 is connected to the server 230 through a communication network. Optionally, the communication network is a wired network or a wireless network.
Alternatively, the wireless network or wired network described above uses standard communication techniques and/or protocols. The network is typically the Internet, but may be any network including, but not limited to, a local area network (Local Area Network, LAN), metropolitan area network (Metropolitan Area Network, MAN), wide area network (Wide Area Network, WAN), a mobile, wired or wireless network, a private network, or any combination of virtual private networks. In some embodiments, data exchanged over the network is represented using techniques and/or formats including HyperText Mark-up Language (HTML), extensible markup Language (Extensible Markup Language, XML), and the like. All or some of the links may also be encrypted using conventional encryption techniques such as secure socket layer (Secure Socket Layer, SSL), transport layer security (Transport Layer Security, TLS), virtual private network (Virtual Private Network, VPN), internet protocol security (Internet Protocol Security, IPsec), and the like. In other embodiments, custom and/or dedicated data communication techniques may also be used in place of or in addition to the data communication techniques described above.
Fig. 3 shows a flowchart of a prop control method in a virtual scene. The method may be performed by a computer device, which may be a terminal and/or a server, and may schematically be implemented as the terminal 210 and/or the server 230 shown in fig. 2. As shown in fig. 3, the method may include the following steps:
In step 310, a virtual scene picture is displayed, where the virtual scene picture is a picture in which the virtual scene is observed from the perspective of a first virtual object, and the first virtual object carries a target prop.
The virtual scene picture is a picture of the virtual scene displayed in the control terminal corresponding to the first virtual object. The first virtual object is a virtual object controlled by the control terminal displaying the virtual scene interface. Further, the first virtual object may release a target prop in the virtual scene, where the target prop may be a throwing type virtual prop or an emitting type virtual prop, which is not limited in this application; for example, the target prop may be a flash bomb, a smoke bomb, a blinding bomb, or the like.
Optionally, the effect exerted by the target prop is indiscriminately effective for all virtual objects whose field of view contains the action position of the target prop.
Step 320, in response to receiving the release operation for the target prop, controlling the first virtual object to release the target prop.
In response to the releasing operation of the first virtual object on the target prop, a picture in which the target prop released by the first virtual object moves in the virtual scene can be displayed.
Step 330, generating an action effect of the target prop on the other side of the collision surface relative to the first obstacle in response to the target prop colliding with the first obstacle in the virtual scene; the collision surface is a surface on which the target prop collides with the first obstacle.
The collision of the target prop with a first obstacle in the virtual scene triggers the target prop to generate a corresponding action effect; however, the surface of the first obstacle corresponding to the action position of that action effect is different from the surface on which the target prop collides with the first obstacle.
Optionally, after colliding with the first obstacle in the virtual scene, the target prop may generate an action effect on the first plane of the first obstacle, or at a target position on the space side where the first plane of the first obstacle is located. The first plane is a surface of the first obstacle different from the collision surface.
In this embodiment of the present application, the releasing operation of the target prop may be an operation of controlling the virtual object to throw the target prop after the target prop is called out, for example, a touch operation on a throwing control corresponding to the target prop. Alternatively, the releasing operation may be an operation of releasing the target prop through a launching prop or a throwing prop; schematically, after the first virtual object calls out the launching prop, the target prop is released into the virtual scene in response to a touch operation on the releasing control corresponding to the launching prop. The releasing mode of the target prop is not limited in this application. Calling out the target prop refers to displaying a model corresponding to the target prop in the virtual scene based on a selection control corresponding to the target prop; when the first virtual object calls out the target prop, the target prop may be displayed in the hand of the first virtual object, or a launching device or throwing device corresponding to the target prop may be displayed in the hand of the first virtual object.
In a virtual scene, a developer constructs the virtual environment, virtual objects, virtual vehicles, and the like through collision boxes (Hitboxes). A Hitbox is a physical model used to judge conditions such as object hits and object collisions in a 3D game, and is distinguished from the edge-smoothed, finely detailed appearance model (Model): the appearance model is the virtual environment, virtual object, virtual vehicle, and so on that the user visually sees, while the Hitbox is generally constructed from simple polygons that substantially match the appearance model and is invisible in the virtual environment. In order to define the action range of the target prop, the Hitbox of the target prop can be represented through the appearance model after the target prop is called out in the virtual scene. In the embodiment of the application, the obstacles in the virtual scene, the target prop, and the like can be constructed through Hitboxes, and an obstacle in the virtual scene may include a scene model such as a wall surface in the virtual scene, a virtual vehicle, and the like.
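As a hedged illustration of the Hitbox concept above, a collision box can be approximated as an axis-aligned bounding box, with a hit judged by interval overlap on each axis. The `Hitbox` class, its method name, and the sample values below are invented for illustration and are not part of the application:

```python
from dataclasses import dataclass

@dataclass
class Hitbox:
    # Axis-aligned bounding box: minimum and maximum corners as (x, y, z).
    min_corner: tuple
    max_corner: tuple

    def overlaps(self, other: "Hitbox") -> bool:
        # Two AABBs overlap iff their intervals overlap on every axis.
        return all(
            self.min_corner[i] <= other.max_corner[i]
            and other.min_corner[i] <= self.max_corner[i]
            for i in range(3)
        )

# A wall obstacle and a thrown prop that has just reached the wall surface.
wall = Hitbox((0.0, 0.0, 0.0), (0.2, 3.0, 5.0))
prop = Hitbox((-0.1, 1.0, 2.0), (0.1, 1.2, 2.2))
print(wall.overlaps(prop))  # True: a collision is detected
```

Real engines use more elaborate convex shapes, but the principle of testing the invisible physical model rather than the appearance model is the same.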
An obstacle in the embodiment of the application has at least two planes. Illustratively, when the obstacle is a wall, it has 6 planes; when the obstacle is a sphere, it has innumerable planes corresponding to cross sections of the sphere. In the embodiment of the application, a wall surface is taken as an example of the obstacle to describe the prop control method in a virtual scene provided by the application. The first plane corresponding to the action position of the action effect of the target prop may be a surface adjacent to or opposite to the collision surface on the first obstacle; for example, if the collision surface is surface A of the wall, the first plane may be a surface B adjacent to or opposite to surface A, and the relative relationship between surface B and surface A may be determined by the movement track of the target prop and the three-dimensional shape of the obstacle.
It should be noted that, when steps 310 to 330 are executed by the terminal, the terminal may be the control terminal corresponding to the virtual object releasing the target prop in the virtual scene, and the terminal displays the picture corresponding to each step through its display screen; when steps 310 to 330 are performed by the server, for example when a virtual scene picture is displayed in a cloud game scenario, the server may generate or acquire the picture corresponding to each step and push it to the terminal for display. In the embodiment of the application, the computer device is taken as a terminal by way of example to describe the prop control method in a virtual scene provided by the application.
In summary, according to the prop control method in a virtual scene provided by the embodiment of the application, after the target prop collides with an obstacle in the virtual scene, the action effect of the target prop is generated on the other side of the collision surface relative to the obstacle, so that the target prop can act on virtual objects shielded by the obstacle. This improves the action effect of the target prop, improves the interaction efficiency and interaction effect, reduces the moving operations required by a user when using the target prop, reduces the occupation of terminal processing resources, reduces terminal power consumption, and improves the endurance of the terminal.
For other virtual objects in the virtual scene, when the target prop generates its action effect, a virtual object may be affected by that effect if the action position is within the virtual object's field of view. Fig. 4 shows a flowchart of a prop control method in a virtual scene according to an exemplary embodiment of the present application. The method may be performed by a computer device, which may be a terminal and/or a server, and may schematically be implemented as the terminal 220 and/or the server 230 shown in fig. 2. As shown in fig. 4, the prop control method in a virtual scene may include the following steps:
Step 410, displaying a virtual scene interface.
In step 420, a virtual scene picture is displayed in the virtual scene interface, the virtual scene picture being a picture of the virtual scene viewed from the perspective of a second virtual object.
The second virtual object is a virtual object in the virtual scene other than the first virtual object, and the first virtual object is the virtual object that releases the target prop in the virtual scene.
Step 430, in response to the field of view of the second virtual object containing the action position of the target prop, displaying a picture corresponding to the action effect of the target prop, where the action position of the target prop is located on the other side of the collision surface of the first obstacle that the target prop collides with, the collision surface being the surface on which the target prop collides with the first obstacle; the action position of the target prop is the position at which the target prop generates its action effect.
The prop control method in a virtual scene provided in the embodiment of the present application may be applied to the following scenario. Fig. 5 shows a schematic diagram of the virtual scene pictures corresponding to the first and second virtual objects. As shown in fig. 5, the virtual scene includes a virtual object A and a virtual object B, with a first obstacle 510, shown in fig. 5 as a wall, spaced between them. Virtual object A releases the target prop into the virtual scene based on a releasing operation on the target prop, so that the target prop moves in the virtual scene; when the target prop collides with one plane of the first obstacle 510, it is confirmed that the action effect is generated on the side corresponding to another plane of the first obstacle 510. Taking the action effect of the target prop as that of a blinding bomb, for example, the effect creates a visual occlusion for a certain period for any virtual object whose field of view contains the action position of the target prop. As shown in fig. 5, when the action effect is generated on the side corresponding to the other plane of the first obstacle, the action position is not within the field of view of virtual object A, so the action effect is not effective for virtual object A, and no corresponding picture is displayed in the virtual scene interface of the terminal corresponding to virtual object A, as shown in part (a) of fig. 5; since the action position is within the field of view of virtual object B, the action effect of the target prop is effective for virtual object B, and the picture 520 corresponding to the action effect of the target prop is displayed in the virtual scene interface of the terminal corresponding to virtual object B, as shown in part (b) of fig. 5.
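The field-of-view test above — whether a virtual object is affected depends on whether the action position falls in its field of view — can be sketched as a view-cone check. The half-angle value and the function name are assumptions for illustration, not specified by the application:

```python
import math

def in_field_of_view(eye, facing, point, half_angle_deg=60.0):
    """Return True if `point` lies within the view cone of an observer
    at `eye` looking along the unit vector `facing`."""
    to_point = [p - e for p, e in zip(point, eye)]
    dist = math.sqrt(sum(c * c for c in to_point))
    if dist == 0:
        return True  # observer is exactly at the action position
    # Cosine of the angle between the facing direction and the point.
    cos_angle = sum(f * c for f, c in zip(facing, to_point)) / dist
    return cos_angle >= math.cos(math.radians(half_angle_deg))

# Object B at the origin faces +x and sees the action position at x = 5,
# so the blinding effect applies; object A at x = 10 also faces +x, so
# the action position is behind it and the effect does not apply.
action_pos = (5.0, 1.5, 0.0)
print(in_field_of_view((0.0, 1.5, 0.0), (1.0, 0.0, 0.0), action_pos))   # True
print(in_field_of_view((10.0, 1.5, 0.0), (1.0, 0.0, 0.0), action_pos))  # False
```

Note the design point from the embodiment: the test uses only the field of view, not line-of-sight occlusion, which is precisely why the effect can reach an object shielded by the obstacle.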
In summary, according to the prop control method in a virtual scene provided by the embodiment of the application, after the target prop collides with an obstacle in the virtual scene, the action effect of the target prop is generated on the other side of the collision surface relative to the obstacle, so that the target prop can act on virtual objects shielded by the obstacle. This improves the action effect of the target prop, improves the interaction efficiency and interaction effect, reduces the moving operations required by a user when using the target prop, reduces the occupation of terminal processing resources, reduces terminal power consumption, and improves the endurance of the terminal.
In the embodiment of the present application, before the target prop is released, the release direction of the target prop needs to be determined based on an adjustment operation of the user. Based on this, fig. 6 shows a flowchart of a prop control method in a virtual scene according to an exemplary embodiment of the present application. The method may be executed by a computer device, which may be a terminal and/or a server in the system shown in fig. 2. As shown in fig. 6, the prop control method in a virtual scene may include the following steps:
In step 610, a virtual scene frame is displayed, where the virtual scene frame is a frame in which a virtual scene is observed from a perspective of a first virtual object, where the first virtual object carries a target prop.
Step 620, in response to receiving an aiming operation based on the target prop, displaying a sight pattern.
In one possible implementation, the target prop may have a corresponding launching prop or throwing prop, so that the release direction of the target prop is confirmed upon receipt of an aiming operation based on the launching prop or throwing prop, and the target prop is released toward the aiming direction upon receipt of a releasing operation on the target prop. Alternatively, when the virtual object calls out the target prop, the target prop may be displayed in the hand of the virtual object; the calling-out mode and the using mode of the target prop are not limited in this application. Fig. 7 is a schematic diagram of a virtual scene shown in an exemplary embodiment of the present application, where the virtual scene includes a throwing prop 710. As shown in fig. 7, the throwing prop 710 may be disposed in the left hand or the right hand of the virtual object to implement left-handed or right-handed throwing of the target prop; fig. 7 shows left-handed throwing. Before the user releases the target prop, a direction adjustment (aiming operation) may be performed on the handheld throwing prop to adjust the throwing direction of the target prop, thereby confirming the throwing direction; in order to make the throwing direction of the target prop clear to the user, a sight pattern 720 corresponding to the direction adjustment of the throwing prop is displayed in the virtual scene, as shown in fig. 7.
In one possible scenario, the target prop, or the launching prop/throwing prop corresponding to the target prop, needs to be equipped according to a user operation before entering the virtual environment, so that the user can use the target prop after entering the virtual environment.
In step 630, in response to receiving the release operation for the target prop, the first virtual object is controlled to release the target prop in the virtual scene based on the movement track.
The movement track is determined based on the throwing position of the target prop and the position indicated by the sight pattern.
In the embodiment of the application, the moving track is a straight track determined based on the throwing position of the target prop and the position indicated by the sight pattern, or the moving track is a parabolic track determined based on the throwing position of the target prop and the position indicated by the sight pattern.
Optionally, when the position indicated by the sight pattern is located on an obstacle, the movement track may be a straight-line track obtained by connecting the throwing position of the target prop with the position indicated by the sight pattern, or the movement track of the target prop may be constructed based on a preset parabolic model with the throwing position of the target prop as the starting point and the position indicated by the sight pattern as the end point. When the position indicated by the sight pattern is not located on an obstacle, the throwing position of the target prop is connected with the position indicated by the sight pattern to obtain a line segment, the line segment is extended to obtain a collision point on the first obstacle through which the extension line passes, and the line segment between the throwing position of the target prop and the collision point on the first obstacle is determined as the straight-line track of the target prop. Alternatively, a parabola corresponding to the movement track of the target prop is constructed based on the throwing position of the target prop, the position indicated by the sight pattern, and a preset parabolic model, so that the parabola passes through the throwing position of the target prop and the position indicated by the sight pattern; the collision point of the parabola with the first obstacle is obtained as the end point, and the parabola taking the throwing position as the starting point and the collision point on the first obstacle as the end point is obtained as the parabolic track of the target prop.
Step 640, in response to the collision of the target prop with the first obstacle in the virtual scene, generating an action effect of the target prop on the other side of the collision surface relative to the first obstacle; the collision surface is a surface on which the target prop collides with the first obstacle.
In a three-dimensional virtual scene, the relative positional relationship between the first plane and the collision surface can be determined based on the spatial relationship between the movement track of the target prop and the obstacle, the first plane being the surface of the first obstacle corresponding to the action position at which the action effect of the target prop is generated. When the target prop collides with the obstacle, if the extension line of the movement track of the target prop passes through the plane opposite to the collision surface, the first plane is the plane opposite to the collision surface; if the extension line of the movement track of the target prop passes through a plane adjacent to the collision surface, the first plane is that adjacent plane. Fig. 8 is a schematic diagram showing the relative positional relationship between the first plane and the collision surface according to an exemplary embodiment of the present application. Part A of fig. 8 shows the case in which the first plane is the plane opposite to the collision surface: plane 811 is the plane on which the target prop collides with the first obstacle 810, and plane 812 is the plane through which the extension line of the movement track of the target prop exits the first obstacle 810, that is, the plane on the spatially corresponding side where the action effect of the target prop is generated; plane 812 is the first plane. Part B of fig. 8 shows the case in which the first plane is a plane adjacent to the collision surface: when the angle between the movement track of the target prop and the plane 811 of the first obstacle changes, the extension line of the movement track may pass through the plane 813 adjacent to the plane 811, in which case the plane 813 is the first plane.
In this embodiment of the present application, after the collision point between the target prop and the first obstacle is determined, it is determined that the movement of the target prop is complete and cannot continue, and the action position of the target prop is obtained by calculation. In one possible implementation, the action position of the target prop is located on the first plane of the first obstacle. In this case, in response to the movement track being a straight-line track, the action position of the target prop is the first collision point between the first obstacle and a ray emitted, with the first reference point as the starting point, in the direction opposite to the movement track. The first reference point is a position point determined, after the target prop collides with the first obstacle at the second collision point, by taking the second collision point as the starting point and extending the target distance along the extension direction of the movement track. The action position of the target prop is the position at which the target prop generates its action effect.
Fig. 9 is a schematic diagram showing an action position confirmation process according to an exemplary embodiment of the present application. As shown in fig. 9, when the movement track is a straight-line track, in order to determine the action position of the target prop on the first obstacle, the terminal may take the second collision point 910 of the target prop and the first obstacle as the starting point, extend the target distance along the extension direction of the line segment confirmed based on the throwing position and the second collision point, and thereby determine the first reference point 920. Then, with the first reference point 920 as the starting point, a ray is emitted in the direction opposite to that extension direction until the ray collides with the first obstacle, and the point where the ray collides with the first obstacle is acquired as the first collision point 930, that is, the action position of the target prop.
In response to the movement track being a parabolic track, the action position of the target prop is the fourth collision point between the first obstacle and a ray emitted, with the second reference point as the starting point, in the direction opposite to the extension direction of the tangent of the movement track at the third collision point. The second reference point is a position point determined, after the target prop collides with the first obstacle at the third collision point, by taking the third collision point as the starting point and extending the target distance along the extension direction of the tangent.
Fig. 10 is a schematic diagram showing an action position confirmation process according to an exemplary embodiment of the present application. As shown in fig. 10, when the movement track is a parabolic track, the terminal may take the third collision point 1010 of the target prop and the first obstacle as the starting point, extend the target distance along the extension direction of the tangent of the movement track at the third collision point 1010, and thereby determine the second reference point 1020. Then, with the second reference point 1020 as the starting point, a ray is emitted in the direction opposite to the extension direction of the tangent until the ray collides with the first obstacle, and the point where the ray collides with the first obstacle is acquired as the fourth collision point 1030, that is, the action position of the target prop.
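Both action-position constructions above follow the same pattern: step past the collision point by the target distance along the travel direction (the tangent direction, for a parabolic track) to obtain a reference point outside the obstacle, then cast a ray back toward the obstacle; the first hit is the action position on the first plane. A minimal sketch, modelling the first obstacle as an axis-aligned box and using the slab intersection method — all names and values here are assumptions:

```python
def ray_aabb_hit(origin, direction, box_min, box_max):
    """First intersection parameter t >= 0 of a ray with an axis-aligned
    box (slab method), or None if the ray misses the box."""
    t_near, t_far = 0.0, float("inf")
    for i in range(3):
        if direction[i] == 0:
            if not (box_min[i] <= origin[i] <= box_max[i]):
                return None
            continue
        t1 = (box_min[i] - origin[i]) / direction[i]
        t2 = (box_max[i] - origin[i]) / direction[i]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near if t_near <= t_far else None

def action_position(collision_point, travel_dir, target_distance, box_min, box_max):
    """Step past the collision point by the target distance to obtain the
    reference point, then cast a ray back along -travel_dir; the first hit
    on the box is the action position. For a parabolic track, travel_dir
    is the tangent direction at the collision point."""
    ref = [c + target_distance * d for c, d in zip(collision_point, travel_dir)]
    back = [-d for d in travel_dir]
    t = ray_aabb_hit(ref, back, box_min, box_max)
    return None if t is None else [r + t * b for r, b in zip(ref, back)]

# A wall slab 0 <= x <= 0.25; the prop hits the near face travelling along +x.
pos = action_position((0.0, 1.5, 2.0), (1.0, 0.0, 0.0), 1.0,
                      (0.0, 0.0, 0.0), (0.25, 3.0, 5.0))
print(pos)  # [0.25, 1.5, 2.0]: a point on the far (first) plane
```

Because the back-cast ray enters the box through the face nearest the reference point, the returned point lies on the exit face of the original track, which is exactly the first plane described above.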
The value of the target distance may be set manually based on actual requirements. In general, the value of the target distance needs to place the first reference point outside the first obstacle. Schematically, the target distance varies with the collision angle of the movement track relative to the first obstacle, so that, for a fixed thickness of the first obstacle, the calculated target distance places the first reference point outside the first obstacle. Schematically, it may be expressed by the following formula:
l = d / sin θ + x
Here l represents the target distance, d represents the thickness of the first obstacle, θ represents the collision angle of the movement track of the target prop relative to the first obstacle, and x is a random positive number. It should be noted that this process of calculating the target distance is illustrative, and the setting or calculation of the target distance is not limited in this application.
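The formula above can be evaluated directly. In this sketch the fixed `margin` parameter stands in for the random positive number x; the function name is illustrative:

```python
import math

def target_distance(thickness, collision_angle_deg, margin=0.05):
    """l = d / sin(theta) + x: the distance along the movement track needed
    to pass completely through an obstacle of thickness d hit at angle
    theta, plus a small positive margin x so that the reference point lies
    strictly outside the obstacle."""
    theta = math.radians(collision_angle_deg)
    return thickness / math.sin(theta) + margin

# Perpendicular hit on a 0.2-thick wall: l = 0.2 / sin(90°) + 0.05 ≈ 0.25.
print(target_distance(0.2, 90.0))
# Grazing hit at 30°: the path through the wall is longer, so l grows to ≈ 0.45.
print(target_distance(0.2, 30.0))
```

The d / sin θ term is the length of the chord the track cuts through the slab, which is why a shallower collision angle yields a larger target distance.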
In another possible implementation, the action position of the target prop is located in a spatial range outside the first plane of the first obstacle; in this case, the first reference point or the second reference point may be directly acquired as the action position of the target prop, or the action position in the spatial range outside the first plane may be determined in other manners, which is not limited in this application.
In order to enable the user corresponding to the virtual object using the target prop to learn the state in which the action effect of the target prop is generated, that is, to determine whether the target prop has triggered its action effect, in one possible implementation an action identifier is displayed, where the action identifier is used to indicate that the target prop has exerted its action effect; that is, the action identifier is displayed when the target prop exerts its action effect.
The action identifier may be at least one of a text identifier, an image identifier, and an animation identifier, and the content of the identifier may be set by related personnel based on actual requirements. Fig. 11 shows a schematic diagram of the action identifier according to an exemplary embodiment of the present application: when the action effect is exerted on another plane of the first obstacle after the target prop collides with the first obstacle, the action identifier 1110 is displayed in the virtual scene, as shown in fig. 11. Illustratively, the display position of the action identifier may correspond to the collision point of the target prop and the first obstacle, or to the action position of the target prop, or may be set by a developer; the display position of the action identifier in the virtual scene is not limited in this application.
In one possible implementation, the target prop has a range threshold, where the range threshold is used to indicate the maximum movement distance of the target prop in the virtual scene; that is, the range threshold indicates the maximum distance over which the target prop can remain in the state of not generating its action effect in the virtual scene, on the premise that the target prop does not collide with the first obstacle during movement.
In one possible implementation, the action position of the target prop is located on the first obstacle that the target prop collides with during its movement in the virtual scene, i.e. the first obstacle is the first obstacle that the target prop collides with in the virtual scene. That is, the target prop cannot pass through obstacles in the virtual scene; when the target prop collides with any obstacle in the virtual scene, it generates an action effect on the side of another plane of that obstacle, and in this case the first obstacle may be the first obstacle that the target prop collides with in the virtual scene. Alternatively, in another possible implementation, the target prop can pass through obstacles in the virtual scene, but has a collision point threshold: when the number of collision points generated by the target prop colliding with obstacles in the virtual scene reaches the collision point threshold, generation of the action effect is confirmed. The collision point threshold is used to indicate the maximum number of collision points the target prop can produce in the virtual scene, a collision point being a contact point between the target prop and a plane of an obstacle in the virtual scene. In this case, the first obstacle is an obstacle in the virtual scene determined based on the collision point threshold; there are N obstacles before the first obstacle, the N obstacles being obstacles in the virtual scene through which the movement track of the target prop passes, and the value of N is determined based on the collision point threshold. In the embodiment of the present application, the target prop passing through an obstacle is regarded as two collisions with the obstacle, that is, two collision points with the obstacle. Fig. 12 is a schematic diagram of a virtual scene picture shown in an exemplary embodiment of the present application. As shown in fig. 12, for the obstacle 1210, the two collision points of the target prop and the obstacle 1210 are the collision point 1211 and the collision point 1212 respectively; that is, the two collision points of the target prop and the obstacle are the entry point and the exit point. In this embodiment of the present application, the collision point threshold is a positive integer; when the collision point threshold is odd, the value of N may be (M-1)/2, where M is the collision point threshold. That is, when the target prop collides with the ((M-1)/2+1)-th obstacle, an action effect is generated on the side corresponding to another plane of the obstacle corresponding to the M-th collision point, where the other plane is a plane different from the plane on which the M-th collision point is located.
In one possible implementation, on the premise that the target prop can pass through obstacles, the target prop may generate the action effect at the collision point corresponding to the collision point threshold. In this case the collision point threshold is even, and the value of N may be (M-2)/2, where M is the collision point threshold; that is, when the target prop collides with the ((M-2)/2+1)-th obstacle, the action effect is generated at the plane on which the M-th collision point is located.
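The relation between the collision point threshold M and the number N of obstacles passed through, for both the odd and even cases above, can be sketched as follows (the function name is illustrative):

```python
def obstacles_passed_through(m):
    """Number N of obstacles the target prop fully passes through before
    the action effect triggers, given collision point threshold M.

    Each obstacle passed through contributes two collision points (entry
    and exit). An odd M triggers on the entry point of obstacle N + 1,
    so N = (M - 1) / 2; an even M triggers on the exit point of obstacle
    N + 1, so N = (M - 2) / 2."""
    if m % 2 == 1:
        return (m - 1) // 2
    return (m - 2) // 2

print(obstacles_passed_through(1))  # 0: effect on the first obstacle hit
print(obstacles_passed_through(3))  # 1: passes one obstacle, acts on the second
print(obstacles_passed_through(4))  # 1: acts at the exit point of the second
```

With M = 1 this reduces to the non-penetrating case above, where the first obstacle hit is the first obstacle of the method.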
The collision point threshold may be preset by related personnel. Alternatively, in one possible implementation, in response to receiving a calling-out operation on the target prop, a collision point threshold setting page is displayed in the virtual scene interface, where the collision point threshold setting page includes a value setting control; in response to receiving an adjustment operation on the value setting control, the collision point value in the collision point threshold setting page is changed; and in response to receiving a selection operation on a confirmation control contained in the collision point threshold setting page, the user-defined collision point value is determined as the collision point threshold of the target prop. Optionally, the collision point threshold setting page may also be implemented as an obstacle number setting page, where the obstacle number setting page may include a value setting control for setting the maximum number of obstacles that the target prop can pass through in the virtual scene; in response to receiving a selection operation on a confirmation control contained in the obstacle number setting page, the user-defined maximum number of obstacles is determined as the maximum number of obstacles the target prop can pass through in the virtual scene. In this case, when the target prop collides with the next obstacle after passing through the obstacles corresponding to that maximum number, the target prop generates an action effect on the side of another surface of that obstacle different from the collision surface.
And in response to the target prop not colliding with the first obstacle and the movement distance of the target prop in the virtual scene reaching the range threshold, the action identifier of the target prop is displayed at the position where the movement distance of the target prop reaches the range threshold.
When the first obstacle is the first obstacle that the target prop collides with in the virtual scene, if the target prop moves along the movement track without colliding with any obstacle in the virtual scene and the movement distance of the target prop in the virtual scene reaches the range threshold, the position of the target prop after moving the range threshold is determined, and the action effect is generated there. When the first obstacle is an obstacle determined based on the collision point threshold in the virtual scene, if the target prop passes through several obstacles while moving along the movement track and, before the collision point threshold is reached, the movement distance of the target prop in the virtual scene reaches the range threshold, the target prop is controlled to generate the action effect at the position reached after moving the range threshold. In order to make the position at which the target prop generates its action effect clear to the user, the action identifier of the target prop may be displayed at that position.
In one possible implementation, the target prop has a duration threshold used to indicate the maximum duration for which the target prop can move after being released without generating the action effect; that is, the duration threshold indicates the maximum duration for which the target prop can remain in the virtual scene without generating the action effect, on the premise that the target prop does not collide with the first obstacle during movement;
and in response to the target prop not colliding with the first obstacle and the moving duration of the target prop in the virtual scene reaching the duration threshold, the action identifier of the target prop is displayed at the position where the moving duration reaches the duration threshold.
When the first obstacle is the first obstacle that the target prop would collide with in the virtual scene, and the target prop does not collide with any obstacle while moving along the moving track, the moving duration of the target prop in the virtual scene may reach the duration threshold; the position of the target prop after it has moved for the duration threshold is then determined, and the action effect is generated there. When the first obstacle is an obstacle determined based on the collision point threshold, the target prop passes through several obstacles while moving along the moving track; if the collision point threshold has not been reached when the moving duration of the target prop reaches the duration threshold, the target prop is controlled to generate the action effect at the position it has reached after moving for the duration threshold. To make the position at which the target prop generates its action effect clear to the user, the action identifier of the target prop can be displayed at that position.
Optionally, there is a correspondence between the duration threshold and the range threshold: when the movement rate of the target prop changes in the same manner and by the same amount, the range threshold is proportional to the duration threshold. That is, whether the target prop generates its action effect in the virtual scene without colliding with the first obstacle may be determined based on either the duration threshold or the range threshold.
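Under a constant-rate assumption, the equivalence of the two checks can be shown in a few lines; this is a sketch and the names are illustrative:

```python
def should_detonate(move_rate, elapsed, duration_threshold):
    """With a constant movement rate, the duration check and the distance
    check trigger at the same moment:
    elapsed >= duration_threshold  <=>  move_rate * elapsed >= move_rate * duration_threshold.
    """
    distance_threshold = move_rate * duration_threshold  # proportional relationship
    by_duration = elapsed >= duration_threshold
    by_distance = move_rate * elapsed >= distance_threshold
    assert by_duration == by_distance  # the two criteria agree
    return by_duration
```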
Illustratively, while the target prop moves along the moving track in the virtual scene, whether the target prop generates its action effect can be judged based on whether the target prop collides with the first obstacle, whether the moving distance of the target prop reaches the range threshold, and whether the moving duration of the target prop reaches the duration threshold; when any one of these conditions is met, the target prop is confirmed to generate the action effect at the corresponding position.
In one possible implementation, the action effect of the target prop is to cause visual shielding for a target duration to any virtual object whose field of view contains the target prop; for example, the target prop may be implemented as a blind-causing bullet.
When the target prop generates its action effect, for the terminal corresponding to each other virtual object in the virtual scene, it is judged whether the action position of the target prop is within the field of view of the corresponding second virtual object. The action position of the target prop being contained in the field of view of the second virtual object means that the vertical distance between the mapping point and the first position is smaller than a first distance threshold and the horizontal distance between the mapping point and the first position is smaller than a second distance threshold; the first position refers to the position point of the second virtual object in the virtual scene; the mapping point is the intersection point obtained by drawing, from the action position, a perpendicular to the plane in which the first position is located. Fig. 13 is a schematic diagram showing how to determine whether the action position of a target prop is contained in a field of view according to an exemplary embodiment of the present application. As shown in fig. 13, P is the position of the second virtual object in the virtual scene, O is the action position of the target prop in the virtual scene, and plane L is the plane in which P is located; OP is connected, a perpendicular OC to plane L is drawn through point O, and C, the intersection of the perpendicular with plane L, is the mapping point of point O in plane L. If the vertical distance a between point C and point P is smaller than the first distance threshold and the horizontal distance b between point C and point P is smaller than the second distance threshold, the action position of the target prop is confirmed to be contained in the field of view of the second virtual object, and a picture corresponding to the action effect of the target prop is displayed in the virtual scene interface of the control terminal corresponding to the second virtual object. When the target prop is a blind-causing bullet and the field of view of the second virtual object contains the action position of the target prop, the second virtual object is subjected to visual shielding for the target duration when the target prop generates its action effect; that is, a picture such as the picture 520 shown in fig. 5 is displayed in the control terminal corresponding to the second virtual object, achieving the visual shielding effect. Optionally, the visual shielding effect of the target prop weakens over time, and the shielded portion of the virtual scene picture in the virtual scene interface corresponding to the second virtual object gradually shrinks until the virtual scene is completely displayed in the virtual scene interface.
The first distance threshold is determined based on the width of the control terminal corresponding to the second virtual object, and the second distance threshold is determined based on the length of that control terminal; illustratively, the value of the first distance threshold is equal to the width of the control terminal, and the value of the second distance threshold is equal to half the length of the control terminal; alternatively, the values of the first distance threshold and the second distance threshold can be set by related personnel. If the vertical distance a between point C and point P is not smaller than the first distance threshold, or the horizontal distance b between point C and point P is not smaller than the second distance threshold, it is confirmed that the action position of the target prop is not contained in the field of view of the second virtual object, and no picture corresponding to the action effect of the target prop is displayed in the control terminal corresponding to the second virtual object.
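Reading fig. 13 with plane L taken as the x-y screen plane at the second virtual object's depth (x horizontal, y vertical), which is an assumed coordinate convention, the field-of-view test can be sketched as:

```python
def action_in_view(action_pos, object_pos, first_threshold, second_threshold):
    """Decide whether the prop's action position falls in the second virtual
    object's field of view, per the projection test of fig. 13.

    The mapping point C of the action position O is obtained by dropping the
    depth (z) component, i.e. projecting O perpendicularly onto the plane
    through the object position P. The action position is in view when the
    vertical offset a between C and P stays under the first threshold
    (derived from the terminal width) and the horizontal offset b stays
    under the second threshold (e.g. half the terminal length).
    """
    ox, oy, _oz = action_pos
    px, py, _pz = object_pos
    a = abs(oy - py)          # vertical distance between C and P
    b = abs(ox - px)          # horizontal distance between C and P
    return a < first_threshold and b < second_threshold
```

When the test passes, the picture corresponding to the prop's action effect (such as the flash) is displayed on that object's terminal.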
It should be noted that, when the content of steps 610 to 640 is executed by a terminal, the terminal displays the picture corresponding to each step through its display screen. When steps 610 to 640 are executed by a server, for example when virtual scene pictures are displayed in a cloud gaming scenario, the server may generate or acquire the picture corresponding to each step and push it to the terminal for display; or the server may calculate or confirm the position information in the picture and send the picture corresponding to the calculation or confirmation result to the terminal, so that the terminal displays the corresponding content at the corresponding position; or, after obtaining the calculation or confirmation result, the server generates the corresponding picture and pushes the generated picture to the terminal.
In summary, according to the prop control method in a virtual scene provided by the embodiments of the application, when the target prop collides with an obstacle in the virtual scene, the action effect of the target prop is generated on the other side of the collision surface relative to the obstacle, so that the target prop can act on virtual objects shielded by the obstacle. This improves the action effect of the target prop as well as the interaction efficiency and interaction effect, reduces the moving operations required of a user when using the target prop, reduces the occupation of terminal processing resources and terminal power consumption, and improves the endurance of the terminal.
Taking the target prop as a blind-causing bullet, and taking as an example that when the target prop collides with an obstacle (such as a wall), it can generate its action effect on the side corresponding to another surface of the obstacle, fig. 14 shows a flowchart of a prop control method in a virtual scene according to an exemplary embodiment of the present application. The method may be executed by a terminal, and as shown in fig. 14, the prop control method in the virtual scene may include the following steps:
in step 1401, the first virtual object switches out a blind-causing bullet.
Step 1402, it is determined whether an aiming operation is received; if yes, step 1403 is executed, and if not, the process returns to step 1401.
Illustratively, the aiming operation may be pressing a release control, such as pressing a fire control. In a state that the release control is kept pressed, a user corresponding to the first virtual object can conduct direction adjustment by sliding a finger in the release control area.
Step 1403, a sight pattern is displayed.
Step 1404, it is determined whether a release operation is received; if yes, step 1405 is executed, and if not, the process returns to step 1403.
Wherein the release operation may be changing the release control from a pressed state to a released state, such as releasing the firing control.
In step 1405, a blind-causing bullet is fired.
Step 1406, judging whether the target prop collides with the obstacle, if so, executing step 1407, otherwise, executing step 1409.
Step 1407, determining an explosion location of the target prop.
The surface corresponding to the explosion position of the target prop is different from the collision surface; the collision surface is the surface at which the target prop collides with the obstacle, and the explosion position is the position at which the target prop exerts its action effect.
Step 1408, explosion.
Step 1409, move on in the virtual scene.
Step 1410, determining whether the moving distance of the target prop exceeds the distance threshold, if so, executing step 1408, otherwise, returning to step 1409.
Step 1411, determining whether the field of view of the second virtual object includes an explosion position, if yes, executing step 1412, otherwise, ending.
Step 1412, displaying a flashing effect on the interface of the control terminal corresponding to the second virtual object.
The implementation process corresponding to the steps 1401 to 1412 may refer to the relevant content in the embodiment shown in fig. 3, 4 or 6, and will not be described herein.
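Steps 1405 to 1410 amount to a simple flight loop, which can be sketched as below; the callable `path` and `obstacles` helpers and the fixed advance step are assumptions for illustration and do not appear in the original:

```python
def run_flight(path, obstacles, distance_threshold, step=1.0):
    """Minimal sketch of steps 1405-1410: advance the fired blind-causing
    bullet along its track; explode at the first obstacle hit, or once the
    travelled distance reaches the distance threshold. `path(d)` maps a
    travelled distance to a position; `obstacles(pos)` reports a hit."""
    travelled = 0.0
    while True:
        pos = path(travelled)
        if obstacles(pos):                       # step 1406: collision?
            return ("explode_at_obstacle", pos)  # steps 1407-1408
        if travelled >= distance_threshold:      # step 1410: range exceeded?
            return ("explode_at_range", pos)     # step 1408
        travelled += step                        # step 1409: keep moving
```

Steps 1411 and 1412 then check each second virtual object's field of view against the returned explosion position and, where it is contained, display the flashing effect.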
In summary, according to the prop control method in a virtual scene provided by the embodiments of the application, when the target prop collides with an obstacle in the virtual scene, the action effect of the target prop is generated on the other side of the collision surface relative to the obstacle, so that the target prop can act on virtual objects shielded by the obstacle. This improves the action effect of the target prop as well as the interaction efficiency and interaction effect, reduces the moving operations required of a user when using the target prop, reduces the occupation of terminal processing resources and terminal power consumption, and improves the endurance of the terminal.
Fig. 15 shows a block diagram of a prop control device in a virtual scene according to an exemplary embodiment of the present application, where, as shown in fig. 15, the device may include:
a first screen display module 1510, configured to display a virtual scene screen, where the virtual scene screen is a screen for observing a virtual scene from a perspective of a first virtual object, and the first virtual object carries a target prop;
a release module 1520 for controlling the first virtual object to release the target prop in response to receiving a release operation of the target prop;
an effect generation module 1530 for generating an effect of the target prop on the other side of the collision surface with respect to the first obstacle in response to the target prop colliding with the first obstacle in the virtual scene; the collision surface is a surface of the target prop that collides with the first obstacle.
In one possible implementation, the apparatus further includes:
a sight pattern display module for displaying a sight pattern in response to receiving a sighting operation based on the target prop;
the release module is used for controlling the first virtual object to release the target prop based on a moving track in the virtual scene in response to receiving the release operation of the target prop; the movement track is determined based on a throwing position of the target prop and a position indicated by the sight pattern.
In one possible implementation, the movement track is a straight track determined based on the throwing position of the target prop and the position indicated by the sight pattern, or the movement track is a parabolic track determined based on the throwing position of the target prop and the position indicated by the sight pattern.
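The two track shapes can be sketched as curves parameterized between the throwing position and the position indicated by the sight pattern; the `arc_height` parameter of the parabolic form is an assumed tuning value, not specified by the text:

```python
def straight_track(throw_pos, aim_pos, t):
    """Point at parameter t in [0, 1] on the straight track between the
    throwing position and the aimed position."""
    return tuple(a + (b - a) * t for a, b in zip(throw_pos, aim_pos))

def parabolic_track(throw_pos, aim_pos, t, arc_height=2.0):
    """Parabolic track through both endpoints: the straight interpolation
    lifted by a vertical arc that vanishes at t = 0 and t = 1."""
    x, y, z = straight_track(throw_pos, aim_pos, t)
    return (x, y + 4.0 * arc_height * t * (1.0 - t), z)
```

Both tracks start at the throwing position and end at the aimed position; the parabolic one peaks `arc_height` above the midpoint of the straight track.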
In one possible implementation, in response to the movement track being a straight track, the action position of the target prop is the first collision point between the first obstacle and a ray emitted from a first reference point in the direction opposite to the movement track; the first reference point is a position point determined by, after the target prop collides with the first obstacle at a second collision point, extending a target distance from the second collision point along the extending direction of the movement track; the action position of the target prop is the position at which the target prop generates its action effect.
In one possible implementation, in response to the movement track being a parabolic track, the action position of the target prop is the fourth collision point between the first obstacle and a ray emitted from a second reference point in the direction opposite to the extending direction of the tangent to the movement track at a third collision point; the second reference point is a position point determined by, after the target prop collides with the first obstacle at the third collision point, extending a target distance from the third collision point along the extending direction of the tangent; the action position of the target prop is the position at which the target prop generates its action effect.
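For the straight-track case, the back-cast construction can be sketched under a simplifying assumption that the obstacle is a slab of known thickness behind the collision surface; the slab model, the function name, and the unit-direction requirement are assumptions for illustration:

```python
def far_side_action_position(collision_point, direction, target_distance, obstacle_depth):
    """Sketch of the straight-track case: after the prop hits the first
    obstacle at `collision_point`, a reference point is taken
    `target_distance` further along the (unit) movement direction, and a ray
    cast back toward the obstacle returns the first hit on its far surface.
    With the obstacle modelled as a slab of thickness `obstacle_depth`
    behind the collision surface, the back-cast ray hits the far face at
    collision_point + obstacle_depth * direction."""
    assert target_distance > obstacle_depth, "reference point must clear the obstacle"
    reference = tuple(c + target_distance * u for c, u in zip(collision_point, direction))
    # back-cast from the reference point: the first surface met is the far face
    return tuple(r - (target_distance - obstacle_depth) * u
                 for r, u in zip(reference, direction))
```

The parabolic case works the same way, with the tangent direction at the third collision point taking the place of the straight movement direction.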
In one possible implementation, the first obstacle is a first obstacle that the target prop collides against in the virtual scene.
In one possible implementation, the apparatus further includes:
and the action identifier display module is configured to display an action identifier, where the action identifier is used to indicate that the target prop has generated its action effect.
In one possible implementation, the target prop has a range threshold value that is used to indicate a maximum distance of movement of the target prop in the virtual scene;
the action identifier display module is configured to, in response to the target prop not colliding with the first obstacle and the moving distance of the target prop in the virtual scene reaching the range threshold, display the action identifier of the target prop at the position where the moving distance reaches the range threshold.
In one possible implementation, the target prop has a duration threshold used to indicate the maximum duration for which the target prop can move after being released without generating the action effect;
the action identifier display module is configured to, in response to the target prop not colliding with the first obstacle and the moving duration of the target prop in the virtual scene reaching the duration threshold, display the action identifier of the target prop at the position where the moving duration reaches the duration threshold.
In one possible implementation, the target prop has a collision point threshold value to indicate a maximum number of collision points that the target prop can pass through in the virtual scene; the collision point is a point of contact of the target prop with a plane of an obstacle in the virtual scene;
n obstacles are included before the first obstacle, the N obstacles are obstacles in the virtual scene through which the moving track of the target prop passes, and the value of N is determined based on the collision point threshold value; n is a positive integer.
In one possible implementation, the action effect of the target prop is to cause visual shielding for a target duration to a virtual object whose field of view contains the target prop.
In summary, according to the prop control device in a virtual scene provided by the embodiments of the application, when the target prop collides with an obstacle in the virtual scene, the action effect of the target prop is generated on the other side of the collision surface relative to the obstacle, so that the target prop can act on virtual objects shielded by the obstacle. This improves the action effect of the target prop as well as the interaction efficiency and interaction effect, reduces the moving operations required of a user when using the target prop, reduces the occupation of terminal processing resources and terminal power consumption, and improves the endurance of the terminal.
Fig. 16 shows a block diagram of a prop control device in a virtual scene according to an exemplary embodiment of the present application, where the device may include:
an interface display module 1610, configured to display a virtual scene interface;
a second screen display module 1620, configured to display a virtual scene screen in the virtual scene interface, where the virtual scene screen is a screen for observing a virtual scene at a view angle of a second virtual object;
a third screen display module 1630, configured to display a picture corresponding to the action effect of a target prop in response to the action position of the target prop being contained in the field of view of the second virtual object, where the action position of the target prop is located on the other side of the collision surface of a first obstacle that the target prop collides with, the collision surface being the surface at which the target prop collides with the first obstacle; the action position of the target prop is the position at which the target prop generates its action effect.
In one possible implementation, the action position of the target prop being contained in the field of view of the second virtual object means that the vertical distance between the mapping point and the first position is smaller than a first distance threshold and the horizontal distance between the mapping point and the first position is smaller than a second distance threshold; the first position refers to the position point of the second virtual object in the virtual scene; the mapping point is the intersection point obtained by drawing, from the action position, a perpendicular to the plane in which the first position is located.
In summary, according to the prop control device in a virtual scene provided by the embodiments of the application, when the target prop collides with an obstacle in the virtual scene, the action effect of the target prop is generated on the other side of the collision surface relative to the obstacle, so that the target prop can act on virtual objects shielded by the obstacle. This improves the action effect of the target prop as well as the interaction efficiency and interaction effect, reduces the moving operations required of a user when using the target prop, reduces the occupation of terminal processing resources and terminal power consumption, and improves the endurance of the terminal.
Fig. 17 illustrates a block diagram of a computer device 1700 shown in an exemplary embodiment of the present application. The computer device may be implemented as a server in the above-described aspects of the present application. The computer apparatus 1700 includes a central processing unit (Central Processing Unit, CPU) 1701, a system Memory 1704 including a random access Memory (Random Access Memory, RAM) 1702 and a Read-Only Memory (ROM) 1703, and a system bus 1705 connecting the system Memory 1704 and the central processing unit 1701. The computer device 1700 also includes a mass storage device 1706 for storing an operating system 1709, application programs 1710, and other program modules 1711.
The mass storage device 1706 is connected to the central processing unit 1701 through a mass storage controller (not shown) connected to the system bus 1705. The mass storage device 1706 and its associated computer-readable media provide non-volatile storage for the computer device 1700. That is, the mass storage device 1706 may include a computer readable medium (not shown) such as a hard disk or a compact disk-Only (CD-ROM) drive.
The computer readable medium may include computer storage media and communication media without loss of generality. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media include RAM, ROM, erasable programmable read-only memory (Erasable Programmable Read Only Memory, EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other solid-state memory technology, CD-ROM, digital versatile discs (Digital Versatile Disc, DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will recognize that computer storage media are not limited to those described above. The system memory 1704 and mass storage device 1706 described above may be referred to collectively as memory.
According to various embodiments of the disclosure, the computer device 1700 may also operate by being connected to a remote computer on a network, such as the Internet. I.e., the computer device 1700 may be connected to the network 1708 through a network interface unit 1707 coupled to the system bus 1705, or other types of networks or remote computer systems (not shown) may also be coupled to the network interface unit 1707.
The memory further includes at least one instruction, at least one program, a code set, or an instruction set, where the at least one instruction, the at least one program, the code set, or the instruction set is stored in the memory, and the central processing unit 1701 implements all or part of the steps in the prop control method in the virtual scenario shown in the foregoing embodiments by executing the at least one instruction, the at least one program, the code set, or the instruction set.
Fig. 18 shows a block diagram of a computer device 1800 provided by an exemplary embodiment of the present application. The computer device 1800 may be implemented as the terminal described above, such as: smart phones, tablet computers, notebook computers or desktop computers. The computer device 1800 may also be referred to as a user device, a portable terminal, a laptop terminal, a desktop terminal, or the like.
In general, the computer device 1800 includes: a processor 1801 and a memory 1802.
Processor 1801 may include one or more processing cores, for example a 4-core processor or an 18-core processor. The processor 1801 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 1801 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also referred to as a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1801 may be integrated with a GPU (Graphics Processing Unit, graphics processor) responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1801 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 1802 may include one or more computer-readable storage media, which may be non-transitory. The memory 1802 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1802 is used to store at least one instruction for execution by processor 1801 to implement all or part of the steps in a prop control method in a virtual scene provided by a method embodiment in the present application.
In some embodiments, the computer device 1800 may also optionally include: a peripheral interface 1803 and at least one peripheral. The processor 1801, memory 1802, and peripheral interface 1803 may be connected by a bus or signal line. The individual peripheral devices may be connected to the peripheral device interface 1803 by buses, signal lines or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1804, a display screen 1805, a camera assembly 1806, audio circuitry 1807, and a power supply 1809.
The peripheral interface 1803 may be used to connect I/O (Input/Output) related at least one peripheral device to the processor 1801 and memory 1802. In some embodiments, processor 1801, memory 1802, and peripheral interface 1803 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 1801, memory 1802, and peripheral interface 1803 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
In some embodiments, the computer device 1800 also includes one or more sensors 1810. The one or more sensors 1810 include, but are not limited to: acceleration sensor 1811, gyro sensor 1812, pressure sensor 1813, optical sensor 1815, and proximity sensor 1816.
Those skilled in the art will appreciate that the architecture shown in fig. 18 is not limiting and that more or fewer components than shown may be included or that certain components may be combined or that a different arrangement of components may be employed.
In an exemplary embodiment, a computer readable storage medium is also provided for storing at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by a processor to implement all or part of the steps in the prop control method in a virtual scenario described above. For example, the computer readable storage medium may be Read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), compact disc Read-Only Memory (CD-ROM), magnetic tape, floppy disk, optical data storage device, and the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium and executes the computer instructions to cause the computer device to perform all or part of the steps of the method shown in any of the embodiments of fig. 3, 4, 6 or 14 described above.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (14)

1. A prop control method in a virtual scene, the method comprising:
displaying a virtual scene picture, wherein the virtual scene picture is a picture for observing a virtual scene from the view angle of a first virtual object, and the first virtual object carries a target prop;
controlling the first virtual object to release the target prop in response to receiving a release operation of the target prop;
generating an action effect of the target prop on the other side of the collision surface relative to a first obstacle in the virtual scene in response to the target prop colliding with the first obstacle; the collision surface is the surface at which the target prop collides with the first obstacle; the action effect of the target prop is to cause visual shielding for a target duration to a virtual object whose field of view contains the target prop;
wherein the first obstacle is the first obstacle that the target prop collides in the virtual scene;
alternatively, the target prop has a collision point threshold value to indicate a maximum number of collision points that the target prop can pass through in the virtual scene; the collision point is a point of contact of the target prop with a plane of an obstacle in the virtual scene; n obstacles are included before the first obstacle, the N obstacles are obstacles through which the moving track of the target prop passes, and the value of N is determined based on the collision point threshold value; n is a positive integer.
2. The method of claim 1, wherein prior to controlling the first virtual object to release the target prop in response to receiving a release operation on the target prop, the method further comprises:
displaying a sight pattern in response to receiving a sighting operation based on the target prop;
the controlling the first virtual object to release the target prop in response to receiving a release operation on the target prop includes:
controlling the first virtual object to release the target prop in the virtual scene along a movement track in response to receiving a release operation on the target prop; the movement track is determined based on a throwing position of the target prop and a position indicated by the sight pattern.
3. The method of claim 2, wherein the movement track is a straight track determined based on the throwing position of the target prop and the position indicated by the sight pattern, or the movement track is a parabolic track determined based on the throwing position of the target prop and the position indicated by the sight pattern.
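As a rough illustration of claims 2 and 3, both track shapes can be sampled from just the throwing position and the sight-indicated position. The sampling scheme and the `apex_height` parameter below are assumptions made for the sketch, not values from the patent.

```python
def straight_track(throw_pos, aim_pos, steps=10):
    """Straight movement track: points sampled on the segment between the
    throwing position and the position indicated by the sight pattern."""
    return [tuple(t + (a - t) * i / steps for t, a in zip(throw_pos, aim_pos))
            for i in range(steps + 1)]

def parabolic_track(throw_pos, aim_pos, apex_height=1.5, steps=10):
    """Parabolic variant: same endpoints, with a vertical offset that is
    zero at both ends and maximal mid-flight (y axis assumed to point up)."""
    track = []
    for i in range(steps + 1):
        s = i / steps
        x, y, z = (t + (a - t) * s for t, a in zip(throw_pos, aim_pos))
        track.append((x, y + 4 * apex_height * s * (1 - s), z))
    return track
```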
4. The method according to claim 3, wherein, in response to the movement track being a straight track, the action position of the target prop is the first collision point of the first obstacle with a ray emitted from a first reference point in the direction opposite to the movement track; the first reference point is a position point obtained, after the target prop collides with the first obstacle at a second collision point, by extending a target distance from the second collision point along the extension direction of the movement track; the action position of the target prop is the position at which the effect of the target prop is generated.
5. The method according to claim 3, wherein, in response to the movement track being a parabolic track, the action position of the target prop is a fourth collision point, namely the point at which a ray emitted from a second reference point, in the direction opposite to the extension direction of the tangent to the movement track at a third collision point, collides with the first obstacle; the second reference point is a position point obtained, after the target prop collides with the first obstacle at the third collision point, by extending a target distance from the third collision point along the extension direction of the tangent; the action position of the target prop is the position at which the effect of the target prop is generated.
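For the straight-track case of claim 4, the action position can be constructed by over-shooting the collision point and casting a ray back toward the obstacle. The helper below is a minimal sketch under the assumption that the surface the back-cast ray hits is given as a plane (a point on it and its normal); all names are illustrative.

```python
def action_position_straight(collision_point, direction, target_distance,
                             plane_point, plane_normal):
    """Sketch of the straight-track action position (claim 4).

    Extend `target_distance` past the collision point along the movement
    direction to obtain the first reference point, then cast a ray from it
    opposite to the movement direction; where that ray meets the given
    obstacle plane is the action position.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    norm = dot(direction, direction) ** 0.5
    d = tuple(x / norm for x in direction)                 # unit movement direction
    ref = tuple(c + target_distance * x
                for c, x in zip(collision_point, d))       # first reference point
    denom = dot(tuple(-x for x in d), plane_normal)
    if abs(denom) < 1e-9:
        return None                                        # ray parallel to plane
    t = dot(tuple(p - r for p, r in zip(plane_point, ref)), plane_normal) / denom
    return tuple(r - t * x for r, x in zip(ref, d))
```

The parabolic case of claim 5 is identical in structure, with the tangent direction at the third collision point taking the place of the straight movement direction.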
6. The method according to claim 1, wherein the method further comprises:
displaying an action identifier, wherein the action identifier is used to indicate that the target prop has produced its effect.
7. The method of claim 6, wherein the target prop has a range threshold value used to indicate a maximum movement distance of the target prop in the virtual scene; the method further comprises:
in response to the target prop not colliding with the first obstacle and the movement distance of the target prop in the virtual scene reaching the range threshold, displaying the action identifier of the target prop at the position where the movement distance of the target prop reaches the range threshold.
8. The method of claim 6, wherein the target prop has a duration threshold value indicating a maximum movement duration of the target prop from being released to producing its effect; the method further comprises:
in response to the target prop not colliding with the first obstacle and the movement duration of the target prop in the virtual scene reaching the duration threshold, displaying the action identifier of the target prop at the position where the movement duration of the target prop reaches the duration threshold.
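Claims 7 and 8 add two fallback triggers for props that never collide. A condensed decision sketch (the function name and the returned labels are illustrative, not patent terminology):

```python
def resolve_action_trigger(hit_obstacle, distance_moved, time_moved,
                           range_threshold, duration_threshold):
    """Decide why the prop's effect (and its identifier) appears.

    Collision takes precedence; otherwise the effect appears where the
    range threshold or the duration threshold is first exhausted.
    """
    if hit_obstacle:
        return "collision"
    if distance_moved >= range_threshold:
        return "range_exhausted"
    if time_moved >= duration_threshold:
        return "duration_exhausted"
    return "in_flight"
```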
9. A prop control method in a virtual scene, the method comprising:
displaying a virtual scene interface;
displaying a virtual scene picture in the virtual scene interface, wherein the virtual scene picture is a picture for observing a virtual scene from the view angle of a second virtual object;
in response to the action position of a target prop being contained in the visual field range of the second virtual object, displaying a picture corresponding to the effect of the target prop, wherein the action position of the target prop is located on the other side of the collision surface of a first obstacle with which the target prop collides, and the collision surface is the surface of the first obstacle with which the target prop collides; the action position of the target prop is the position at which the effect of the target prop is generated; the effect of the target prop is to cause visual occlusion, for a target duration, of any virtual object whose visual field range contains the target prop;
wherein the first obstacle is the first obstacle with which the target prop collides in the virtual scene;
alternatively, the target prop has a collision point threshold value used to indicate a maximum number of collision points that the target prop can pass through in the virtual scene; a collision point is a point of contact between the target prop and a plane of an obstacle in the virtual scene; N obstacles precede the first obstacle, the N obstacles being obstacles through which the movement track of the target prop passes, and the value of N being determined based on the collision point threshold value; N is a positive integer.
10. The method of claim 9, wherein the action position of the target prop is contained in the visual field range of the second virtual object when the vertical distance between the action position and the first position is less than a first distance threshold and the horizontal distance between the mapping point of the action position on the plane where the first position is located and the first position is less than a second distance threshold; the first position is the position point of the second virtual object in the virtual scene; the mapping point is the intersection point obtained by drawing a perpendicular from the action position to the plane where the first position is located.
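The containment test of claim 10 reduces to one vertical and one horizontal distance check. The sketch below assumes a y-up coordinate system and a horizontal observer plane; `effect_visible` and its parameters are illustrative names.

```python
def effect_visible(action_position, observer_position,
                   vertical_threshold, horizontal_threshold):
    """Claim 10 sketch: the effect picture is shown to the observer when
    the action position is close enough both vertically and horizontally.

    The mapping point is the foot of the perpendicular from the action
    position onto the (horizontal) plane containing the observer.
    """
    ax, ay, az = action_position
    ox, oy, oz = observer_position
    vertical = abs(ay - oy)                 # distance to the observer's plane
    mapping_point = (ax, oy, az)            # foot of the perpendicular
    horizontal = ((mapping_point[0] - ox) ** 2 +
                  (mapping_point[2] - oz) ** 2) ** 0.5
    return vertical < vertical_threshold and horizontal < horizontal_threshold
```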
11. A prop control device in a virtual scene, the device comprising:
the first picture display module is used for displaying a virtual scene picture, wherein the virtual scene picture is a picture for observing a virtual scene from the view angle of a first virtual object, and the first virtual object carries a target prop;
the release module is used for controlling the first virtual object to release the target prop in response to receiving the release operation of the target prop;
an effect generation module, used for generating an effect of the target prop on the other side of the collision surface relative to the first obstacle in response to the target prop colliding with the first obstacle in the virtual scene; the collision surface is the surface of the first obstacle with which the target prop collides; the effect of the target prop is to cause visual occlusion, for a target duration, of any virtual object whose visual field range contains the target prop;
wherein the first obstacle is the first obstacle with which the target prop collides in the virtual scene;
alternatively, the target prop has a collision point threshold value used to indicate a maximum number of collision points that the target prop can pass through in the virtual scene; a collision point is a point of contact between the target prop and a plane of an obstacle in the virtual scene; N obstacles precede the first obstacle, the N obstacles being obstacles through which the movement track of the target prop passes, and the value of N being determined based on the collision point threshold value; N is a positive integer.
12. A prop control device in a virtual scene, the device comprising:
the interface display module is used for displaying a virtual scene interface;
the second picture display module is used for displaying a virtual scene picture in the virtual scene interface, wherein the virtual scene picture is a picture for observing a virtual scene from the view angle of a second virtual object;
a third picture display module, used for displaying, in response to the action position of a target prop being contained in the visual field range of the second virtual object, a picture corresponding to the effect of the target prop, wherein the action position of the target prop is located on the other side of the collision surface of a first obstacle with which the target prop collides, and the collision surface is the surface of the first obstacle with which the target prop collides; the action position of the target prop is the position at which the effect of the target prop is generated; the effect of the target prop is to cause visual occlusion, for a target duration, of any virtual object whose visual field range contains the target prop;
wherein the first obstacle is the first obstacle with which the target prop collides in the virtual scene;
alternatively, the target prop has a collision point threshold value used to indicate a maximum number of collision points that the target prop can pass through in the virtual scene; a collision point is a point of contact between the target prop and a plane of an obstacle in the virtual scene; N obstacles precede the first obstacle, the N obstacles being obstacles through which the movement track of the target prop passes, and the value of N being determined based on the collision point threshold value; N is a positive integer.
13. A computer device, comprising a processor and a memory, the memory storing at least one computer program, the at least one computer program being loaded and executed by the processor to implement the prop control method in a virtual scene according to any one of claims 1 to 10.
14. A computer-readable storage medium having stored therein at least one computer program, the at least one computer program being loaded and executed by a processor to implement the prop control method in a virtual scene according to any one of claims 1 to 10.
CN202111100882.5A 2021-09-18 2021-09-18 Prop control method, device, equipment and storage medium in virtual scene Active CN113750530B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111100882.5A CN113750530B (en) 2021-09-18 2021-09-18 Prop control method, device, equipment and storage medium in virtual scene

Publications (2)

Publication Number Publication Date
CN113750530A CN113750530A (en) 2021-12-07
CN113750530B true CN113750530B (en) 2023-07-21

Family

ID=78796439


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006297121A (en) * 1996-05-02 2006-11-02 Sega Corp Game device, method of processing and recording medium for the same
CN109675308A (en) * 2019-01-10 2019-04-26 网易(杭州)网络有限公司 Display control method, device, storage medium, processor and terminal in game
CN110694279A (en) * 2019-10-30 2020-01-17 腾讯科技(深圳)有限公司 Game special effect display method, device, equipment and medium
CN111035924A (en) * 2019-12-24 2020-04-21 腾讯科技(深圳)有限公司 Prop control method, device, equipment and storage medium in virtual scene
CN111589150A (en) * 2020-04-22 2020-08-28 腾讯科技(深圳)有限公司 Control method and device of virtual prop, electronic equipment and storage medium
CN112121414A (en) * 2020-09-29 2020-12-25 腾讯科技(深圳)有限公司 Tracking method and device in virtual scene, electronic equipment and storage medium
CN112121434A (en) * 2020-09-30 2020-12-25 腾讯科技(深圳)有限公司 Interaction method and device of special effect prop, electronic equipment and storage medium
CN112619164A (en) * 2020-12-22 2021-04-09 上海米哈游天命科技有限公司 Method, device and equipment for determining flight height of transmitting target and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant