CN112827169A - Game image processing method and device, storage medium and electronic equipment - Google Patents

Game image processing method and device, storage medium and electronic equipment

Info

Publication number
CN112827169A
CN112827169A
Authority
CN
China
Prior art keywords
vertex
position information
virtual
current
coordinate
Prior art date
Legal status
Granted
Application number
CN202110229619.XA
Other languages
Chinese (zh)
Other versions
CN112827169B (en)
Inventor
张积强
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110229619.XA priority Critical patent/CN112827169B/en
Publication of CN112827169A publication Critical patent/CN112827169A/en
Application granted granted Critical
Publication of CN112827169B publication Critical patent/CN112827169B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/428 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a game image processing method and device, a storage medium, and electronic equipment. The method comprises the following steps: in the process that a virtual plant in a virtual scene displayed by a client interacts with a controlled virtual object, acquiring vertex position information of each vertex of the virtual plant in the coordinate system corresponding to the virtual scene, together with the current object position information of the virtual object in that coordinate system; comparing each piece of vertex position information with the object position information to determine motion state information of each vertex on the virtual plant during the interaction; determining a position offset of each vertex on the virtual plant based on the motion state information, and sequentially performing coordinate offset processing on each piece of vertex position information according to the position offset; and displaying the image of the virtual plant after coordinate offset processing in the client. The invention solves the technical problem of a poor user game experience caused by the stiff interaction effect between virtual characters and virtual plants in existing game scenes.

Description

Game image processing method and device, storage medium and electronic equipment
Technical Field
The invention relates to the field of computers, in particular to a game image processing method and device, a storage medium and electronic equipment.
Background
In virtual scenes provided by 3D game applications in the related art, in order to realistically reproduce the interaction between a virtual character and a virtual plant, existing character-bush interaction schemes generally pass the character position into the bush model and compute a vertex position offset that grows stronger from bottom to top, so that the bushes appear to be pushed apart by the character. The resulting interaction effect is stiff: there is only a rough displacement and no other motion, and both the shaking during interaction and the gradual settling of that shaking over time after movement stops are missing, which leads to a poor game experience for the user.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a game image processing method and device, a storage medium, and electronic equipment, to at least solve the technical problem of a poor user game experience caused by the stiff interaction effect between virtual characters and virtual plants in existing game scenes.
According to an aspect of an embodiment of the present invention, there is provided a game image processing method including: in the process that a virtual plant in a virtual scene displayed by a client interacts with a controlled virtual object, acquiring vertex position information corresponding to each vertex of the virtual plant under a coordinate system corresponding to the virtual scene at present and object position information corresponding to the virtual object under the coordinate system at present; comparing the vertex position information with the object position information in sequence to determine motion state information of each vertex on the virtual plant during interaction; determining the position offset of each vertex on the virtual plant based on the motion state information, and sequentially carrying out coordinate offset processing on the position information of each vertex according to the position offset; and displaying the image of the virtual plant after coordinate offset processing in the client.
According to another aspect of the embodiments of the present invention, there is also provided a game image processing apparatus including: an obtaining unit, configured to obtain vertex position information corresponding to each vertex on a virtual plant currently in a coordinate system corresponding to a virtual scene and object position information corresponding to the virtual object currently in the coordinate system, in a process that a virtual plant in the virtual scene displayed by a client interacts with a controlled virtual object; a comparison unit, configured to compare the vertex position information and the object position information in sequence, so as to determine motion state information of each vertex on the virtual plant during interaction; a shifting unit configured to determine a position shift amount of each vertex on the virtual plant based on the motion state information, and sequentially perform coordinate shift processing on each vertex position information according to the position shift amount; and a display unit for displaying the image of the virtual plant after the coordinate offset processing in the client.
According to still another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to execute the above-mentioned game image processing method when executed.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device including a memory in which a computer program is stored and a processor configured to execute the game image processing method described above by the computer program.
In the embodiment of the invention, in the process that a virtual plant in a virtual scene displayed by a client interacts with a controlled virtual object, vertex position information of each vertex on the virtual plant in the coordinate system corresponding to the virtual scene and current object position information of the virtual object in that coordinate system are acquired. The vertex position information is compared in turn with the object position information to determine motion state information of each vertex on the virtual plant during the interaction. A position offset of each vertex on the virtual plant is determined based on the motion state information, and coordinate offset processing is sequentially performed on each piece of vertex position information according to the position offset. The image of the virtual plant after coordinate offset processing is then displayed in the client. By determining the motion state information of each vertex during the interaction, deriving each vertex's position offset from that information, and offsetting each vertex's coordinates accordingly, the method adds shaking while the virtual character interacts with the virtual plant and lets the shaking die down slowly over time after movement stops. This improves the user experience and enhances the realism of the picture, thereby solving the technical problem of a poor user game experience caused by the stiff interaction effect between virtual characters and virtual plants in existing game scenes.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of an application environment of an alternative game image processing method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of an application environment of an alternative game image processing method according to an embodiment of the invention;
FIG. 3 is a flow diagram of an alternative game image processing method according to an embodiment of the present invention;
FIG. 4 is a schematic view of image display according to an alternative game image processing method in the related art;
FIG. 5 is a schematic view of image display according to another alternative game image processing method in the related art;
FIG. 6 is a schematic diagram of an interface display of an alternative game image processing method according to an embodiment of the present invention;
FIG. 7 is a flow diagram of another alternative game image processing method according to an embodiment of the present invention;
FIG. 8 is a graphical representation of an image processing curve for an alternative game image processing method according to an embodiment of the present invention;
FIG. 9 is a graphical representation of an image processing curve of an alternative game image processing method according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of an image display of an alternative game image processing method according to an embodiment of the present invention;
FIG. 11 is a schematic view of an image display of an alternative game image processing method according to the related art;
FIG. 12 is a schematic diagram of an image display of an alternative game image processing method according to an embodiment of the present invention;
FIG. 13 is a pictorial representation of an alternative game image processing method in accordance with an embodiment of the present invention;
FIG. 14 is a pictorial representation of an alternative game image processing method in accordance with an embodiment of the present invention;
FIG. 15 is a pictorial representation of an alternative game image processing method in accordance with an embodiment of the present invention;
FIG. 16 is a schematic diagram of an image display of an alternative game image processing method according to an embodiment of the present invention;
FIG. 17 is a schematic diagram of an alternative game image processing apparatus according to an embodiment of the present invention;
fig. 18 is a schematic structural diagram of an alternative electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In embodiments of the present invention, the following technical terms may be used, but are not limited to:
render Texture: is a texture that can be rendered, and the camera rendering target can be rendered to a temporary storage space for trial use. It can be used to implement image-based rendering effects, dynamic shading, projectors, reflective or surveillance cameras.
Trail (also called streaking or trailing): a system that generates a trail behind a moving object; a common example is the tail flame of an aircraft.
According to an aspect of the embodiments of the present invention, a game image processing method is provided. Optionally, as an optional implementation, the game image processing method may be applied to, but not limited to, the application environment shown in fig. 1. The application environment comprises terminal equipment 102 used for human-computer interaction with the user, a network 104, and a server 106. The user 108 can perform human-computer interaction with the terminal device 102, in which a game image processing application client runs. The terminal device 102 includes a human-computer interaction screen 1022, a processor 1024, and a memory 1026. The human-computer interaction screen 1022 is configured to present motion state information of each vertex on the virtual plant during interaction, and is further configured to present the image of the virtual plant after coordinate offset processing. The processor 1024 is configured to, during interaction between a virtual plant in a virtual scene displayed by the client and a controlled virtual object, obtain vertex position information of each vertex on the virtual plant in the coordinate system corresponding to the virtual scene and current object position information of the virtual object in that coordinate system. The memory 1026 is configured to store the vertex position information of each vertex on the virtual plant in the coordinate system corresponding to the virtual scene, the object position information of the virtual object in that coordinate system, and the image of the virtual plant after coordinate offset processing.
In addition, the server 106 includes a database 1062 and a processing engine 1064, where the database 1062 is used to store vertex position information corresponding to each vertex on the virtual plant in the coordinate system corresponding to the virtual scene at present, object position information corresponding to the virtual object in the coordinate system at present, and an image of the virtual plant after coordinate offset processing. The processing engine 1064 is configured to sequentially compare the position information of each vertex with the position information of the object, so as to determine motion state information of each vertex on the virtual plant during interaction; determining the position offset of each vertex on the virtual plant based on the motion state information, and sequentially carrying out coordinate offset processing on the position information of each vertex according to the position offset; and displaying the image of the virtual plant after coordinate offset processing in the client.
The specific process comprises the following steps: assuming that a game image processing application client is running in the terminal device 102 shown in fig. 1, the user 108 operates the human-computer interaction screen 1022 to manage and control the virtual character. In step S102, in the process that a virtual plant in the virtual scene displayed by the client interacts with a controlled virtual object, vertex position information of each vertex of the virtual plant in the coordinate system corresponding to the virtual scene and current object position information of the virtual object in that coordinate system are obtained. Then, step S104 is executed to send the vertex position information and the object position information to the server 106 through the network 104. After receiving the request, the server 106 executes steps S106 to S110: sequentially comparing the vertex position information with the object position information to determine the motion state information of each vertex on the virtual plant during the interaction; determining the position offset of each vertex on the virtual plant based on the motion state information, and sequentially performing coordinate offset processing on the position information of each vertex according to the position offset; and causing the image of the virtual plant after coordinate offset processing to be displayed in the client. Finally, in step S112, the server notifies the terminal device 102 through the network 104 and returns the image of the virtual plant after the coordinate offset processing.
As another alternative, the game image processing method described above in this application may be applied to the application environment shown in fig. 2. As shown in fig. 2, human-computer interaction may be performed between a user 202 and a user device 204. The user device 204 includes a memory 206 and a processor 208. In this embodiment, the user device 204 may perform, but is not limited to, the operations performed by the terminal device 102 above to obtain the image of the virtual plant after the coordinate offset processing.
Optionally, in this embodiment, the terminal device 102 and the user device 204 may include, but are not limited to, at least one of the following: mobile phones (such as Android phones, iOS phones, etc.), notebook computers, tablet computers, palm computers, MID (Mobile Internet Devices), PAD, desktop computers, smart televisions, etc. The target client may be a video client, an instant messaging client, a browser client, an educational client, etc. The network 104 may include, but is not limited to: a wired network, a wireless network, wherein the wired network comprises: a local area network, a metropolitan area network, and a wide area network, the wireless network comprising: bluetooth, WIFI, and other networks that enable wireless communication. The server may be a single server, a server cluster composed of a plurality of servers, or a cloud server. The above is merely an example, and this is not limited in this embodiment.
Optionally, as an optional implementation manner, as shown in fig. 3, the game image processing method includes:
s302, in the process that a virtual plant in a virtual scene displayed by a client interacts with a controlled virtual object, vertex position information corresponding to each vertex of the virtual plant under a coordinate system corresponding to the virtual scene at present and object position information corresponding to the virtual object under the coordinate system at present are obtained;
s304, sequentially comparing the vertex position information with the object position information to determine motion state information of each vertex on the virtual plant during interaction;
s306, determining the position offset of each vertex on the virtual plant based on the motion state information, and sequentially carrying out coordinate offset processing on the position information of each vertex according to the position offset;
and S308, displaying the image of the virtual plant after coordinate offset processing in the client.
In step S302, in actual application, the virtual scene may include, but is not limited to, a scene in which virtual characters and virtual plants of various online games or stand-alone games interact with each other, which is not limited here. Taking the virtual plant as a grass clump as an example, obtaining the vertex position information of each vertex on the virtual plant in the coordinate system corresponding to the virtual scene may mean obtaining, in that coordinate system, the position of each vertex of each blade of grass in the clump that the virtual character touches, where the coordinate system may include, but is not limited to, a world coordinate system.
In step S304, in actual application, the vertex position information and the object position information are compared in sequence to determine the motion state information of each vertex on the virtual plant during the interaction; that is, the vertex position information of the plant and the object position information are updated and compared in real time, so that whether the virtual plant is moving or static during the interaction with the virtual object can be determined in real time.
In step S306, in actual application, the position offset of each vertex on the virtual plant is determined based on the motion state information: for example, when the virtual plant is in a motion state, a motion offset is determined, and when it is in a static state, no offset occurs. After the offsets are determined, coordinate offset processing is sequentially performed on the position information of each offset vertex according to the position offset. Performing the coordinate offset here may include a small-range adjustment in the three-dimensional spatial coordinate system (a small-amplitude swaying of the virtual plant).
In step S308, in actual application, the client may be the game client used by the current user. After the coordinate offset is applied to the virtual plant, the image of the coordinate-offset virtual plant may be displayed in the current user's client, showing the process in which the virtual object interacts with the grass and the shaking after movement stops dies down slowly over time.
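Steps S302 to S308 above can be sketched end to end as a single per-frame pass over the plant's vertices. The function below is an illustrative reconstruction rather than the patent's actual implementation: the falloff radius, the use of the horizontal plane for the position comparison, and the height-weighted push are all assumptions.

```python
import math

def process_grass_frame(vertices, obj_pos, moving, radius=1.0):
    """One frame of the S302-S308 pipeline (illustrative, not the patent's code).

    vertices: list of (x, y, z) world-space grass vertices (S302).
    obj_pos:  (x, y, z) world-space position of the controlled object (S302).
    moving:   whether the object moved since the previous frame.
    Returns the displaced vertex list (S306); rendering it is S308.
    """
    out = []
    for vx, vy, vz in vertices:
        # S304: compare vertex and object positions on the ground plane.
        dx, dz = vx - obj_pos[0], vz - obj_pos[2]
        dist = math.hypot(dx, dz)
        # S306: offset falls off with distance and grows with vertex height,
        # so blade tips move more than roots (assumed falloff shape).
        if moving and 0.0 < dist < radius:
            push = (1.0 - dist / radius) * vy
            out.append((vx + push * dx / dist, vy, vz + push * dz / dist))
        else:
            out.append((vx, vy, vz))
    return out
```

A vertex well outside the interaction radius is returned unchanged, while a nearby vertex is pushed directly away from the object.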
In the embodiment of the invention, in the process that a virtual plant in a virtual scene displayed by a client interacts with a controlled virtual object, vertex position information of each vertex on the virtual plant in the coordinate system corresponding to the virtual scene and current object position information of the virtual object in that coordinate system are acquired. The vertex position information is compared in turn with the object position information to determine motion state information of each vertex on the virtual plant during the interaction. A position offset of each vertex on the virtual plant is determined based on the motion state information, and coordinate offset processing is sequentially performed on each piece of vertex position information according to the position offset. The image of the virtual plant after coordinate offset processing is then displayed in the client. By determining the motion state information of each vertex during the interaction, deriving each vertex's position offset from that information, and offsetting each vertex's coordinates accordingly, the method adds shaking while the virtual character interacts with the virtual plant and lets the shaking die down slowly over time after movement stops. This improves the user experience and enhances the realism of the picture, thereby solving the technical problem of a poor user game experience caused by the stiff interaction effect between virtual characters and virtual plants in existing game scenes.
In one embodiment, step S304 includes: comparing the vertex position information and the object position information in sequence to determine the interaction distance and the interaction direction vector between each vertex and the virtual object. In this embodiment, when the virtual object interacts with the virtual plant, for example after the virtual object enters a grass clump, the distance between each blade of grass and the virtual object is determined together with the moving direction of the virtual object, and the interaction direction vector is determined according to that moving direction.
Determining a motion vector matched with each vertex based on vertex random vectors generated for the vertex position information respectively;
acquiring script change indicating variables matched with each vertex according to the logic scripts corresponding to the virtual scene; in this embodiment, the logic script may include, but is not limited to, configuring a coordinate variation amount of each vertex of the virtual plant according to a moving state of the virtual object, and is not limited herein.
And determining the motion state information of each vertex on the virtual plant from: a distance data set built from the interaction distances of the N vertices, a direction vector set built from the interaction direction vectors of the N vertices, the motion vectors of the N vertices, and the script change indicating variables of the N vertices, where N is the number of vertices on the virtual plant.
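The motion state information described above bundles four per-vertex collections, each of length N. A minimal container for it might look as follows; the class and field names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]
Vec2 = Tuple[float, float]

@dataclass
class MotionState:
    """Per-plant motion state: one entry per vertex, N = vertex count."""
    distances: List[float]      # interaction distance per vertex
    directions: List[Vec2]      # interaction direction vector per vertex
    motion_vectors: List[Vec2]  # random/time-based sway vector per vertex
    script_vars: List[float]    # script change indicating variable per vertex

    def __post_init__(self):
        # All four collections must describe the same N vertices.
        n = len(self.distances)
        assert len(self.directions) == n
        assert len(self.motion_vectors) == n
        assert len(self.script_vars) == n
```

Constructing the state with mismatched lengths fails fast, which guards the sequential per-vertex processing in the later steps.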
In an embodiment, the sequentially comparing the vertex position information and the object position information to determine the interaction distance and the interaction direction vector between each vertex and the virtual object includes:
in each vertex position information, sequentially taking the vertex corresponding to each vertex position information as a current vertex, and executing the following operations:
extracting a first vertex coordinate of the current vertex in a first direction and a second vertex coordinate of the current vertex in a second direction in the coordinate system from the vertex position information of the current vertex;
extracting a first object coordinate of the virtual object in the first direction and a second object coordinate of the virtual object in the second direction in the coordinate system from the object position information;
and comparing the first vertex coordinates of the current vertex with the first object coordinates of the virtual object, and the second vertex coordinates of the current vertex with the second object coordinates of the virtual object, respectively, to obtain the interaction distance and the interaction direction vector between the current vertex and the virtual object.
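A sketch of this per-vertex comparison, assuming the "first" and "second" directions are the horizontal x and z axes of a world coordinate system (the patent does not name the axes):

```python
import math

def interaction_distance_and_direction(vertex_pos, object_pos):
    """Compare the current vertex with the virtual object in two coordinate
    directions and return (distance, unit direction from object to vertex).
    vertex_pos and object_pos are (x, y, z) tuples; only the x and z
    components (the assumed first and second directions) are compared."""
    dx = vertex_pos[0] - object_pos[0]   # first-direction difference
    dz = vertex_pos[2] - object_pos[2]   # second-direction difference
    dist = math.hypot(dx, dz)
    if dist == 0.0:
        return 0.0, (0.0, 0.0)           # vertex coincides with the object
    return dist, (dx / dist, dz / dist)  # normalized interaction direction
```

Note that the vertical component is ignored by design here, so a blade tip directly above the character still reports distance zero.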
In an embodiment, the determining the motion vector of each vertex based on the vertex random vectors respectively generated for the vertex position information includes:
in each vertex position information, sequentially taking the vertex corresponding to each vertex position information as a current vertex, and executing the following operations: acquiring the vertex random vector generated for the vertex position information of the current vertex; and determining the motion vector matched with the current vertex according to the vertex random vector and the current time vector of the current vertex.
In an embodiment, the determining the motion vector matched with the current vertex according to the vertex random vector and the current time vector of the current vertex includes: obtaining a first weighted sum result between the vertex random vector of the current vertex and the current time vector; performing a function processing on the first weighted sum result to obtain the motion vector matched with the current vertex, wherein the function processing includes: decimal taking processing, absolute value taking processing and trigonometric function processing.
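One plausible reading of this step combines a per-vertex random component with the current time and then applies the fractional-part, absolute-value, and trigonometric processing the text lists. The weights and the order of the function applications are assumptions; the patent only names the three operations.

```python
import math

def motion_vector(vertex_random, t, w_rand=1.0, w_time=1.0):
    """Build the motion vector for one vertex.

    vertex_random: per-vertex random components (e.g. a 2-tuple).
    t: current time in seconds.
    Each component goes through: weighted sum -> fractional part ->
    absolute value -> sine, yielding bounded periodic sway in [-1, 1]."""
    out = []
    for r in vertex_random:
        s = w_rand * r + w_time * t           # first weighted sum result
        frac = s - math.floor(s)              # decimal-taking processing
        out.append(math.sin(2 * math.pi * abs(frac)))  # abs + trigonometric
    return tuple(out)
```

Because each vertex carries its own random component, neighbouring blades sway out of phase instead of moving in lockstep.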
In an embodiment, the obtaining, according to the logic script corresponding to the virtual scene, the script change indicating variable matched with each vertex includes: in each piece of vertex position information, sequentially taking the vertex corresponding to that vertex position information as the current vertex, and executing the following operation: when the logic script indicates that the current image frame where the current vertex is located has a position movement change compared with the previous image frame, configuring the script change indicating variable matched with the current vertex to be 1, wherein the script change indicating variable automatically decays from 1 to 0 within a target time period. In this embodiment, when the virtual object moves, this realizes a process in which the disturbance effect of the grass (virtual plant) gradually fades from normal to vanishing after the virtual character stops moving.
And when the logic script indicates that the current image frame where the current vertex is located has no position movement change compared with the previous image frame, the script change indicating variable matched with the current vertex is configured to be 0. In this embodiment, when the position of the virtual object does not change, the script indication variable matched with each blade of the current grass (virtual plant) is zero, indicating that the positions of the grass tops do not change and the grass is in a static state.
In an embodiment, the determining the position offset of each vertex on the virtual plant based on the motion state information includes:
in each vertex position information, sequentially taking the vertex corresponding to each vertex position information as a current vertex, and executing the following operations:
obtaining a first product result between the motion vector of the current vertex and the script change indicating variable of the current vertex;
obtaining a second weighted sum result between the first product result and the interaction direction vector;
and obtaining a third product result of the second weighted sum result, the distance data set and a coordinate direction component of the current vertex in a third direction, as the position offset of the current vertex.
In one embodiment, the sequentially performing the coordinate shift processing on each of the vertex position information according to the position shift amount includes:
and adjusting vertex coordinates in each direction included in the vertex position information of the current vertex, respectively, in accordance with the position shift amount of the current vertex, to complete the coordinate shift processing of the current vertex.
In the embodiment of the invention, while a virtual plant in the virtual scene displayed by a client interacts with a controlled virtual object, the vertex position information of each vertex on the virtual plant in the coordinate system corresponding to the virtual scene and the object position information of the virtual object in the same coordinate system are acquired. The vertex position information is compared with the object position information in sequence to determine the motion state information of each vertex on the virtual plant during the interaction. The position offset of each vertex is then determined based on the motion state information, coordinate offset processing is performed in sequence on each piece of vertex position information according to the position offset, and the image of the virtual plant after the coordinate offset processing is displayed in the client. In this way, the virtual plant jitters while the virtual character interacts with it, and the jitter dies out slowly over time after the movement stops. This improves user experience and enhances the realism of the picture, and solves the technical problem that the stiff interaction effect between virtual characters and virtual plants in existing game scenes degrades the user's game experience.
In the related art, a common interaction scheme between grass and a virtual character simply passes the character's position into the grass model and computes a vertex position offset that grows stronger from bottom to top, producing the effect of the grass being pushed aside by the character. As shown in fig. 4, the grass is pushed stiffly around the virtual character. In fig. 5, diagram (a) shows the original grass model, diagram (b) shows the model after the grass and the character have interacted and the vertices have shifted, and diagram (c) shows the three-dimensional coordinate system. Fig. 10 shows a display image in the related art before the virtual object enters the grass clump, and fig. 11 shows the picture when the virtual object interacts with the grass. The interaction effect is obviously stiff: the grass only shifts roughly, with no other motion; there is no jitter during the interaction and no jitter that dies out slowly over time after the movement stops.
In order to solve the above technical problem, in an application embodiment based on the above embodiments, the game image processing method computes the interaction relationship between the character's position in the scene and the vertex positions of the grass in the scene. As shown in fig. 6, a scene such as a bush 602 is set up first, and a virtual character 604 is then loaded.
Based on the foregoing embodiments, in an application embodiment, as shown in fig. 7, the method for processing the game image includes: S702, loading the scene and the character; S704, obtaining character-related parameters; S706, performing the disturbance calculation; and S708, completing the process and displaying the interactive scene of the virtual character and the virtual plant.
In an embodiment, the method for processing a game image includes:
S1, the scene and the character are loaded normally.
S2, the character's world coordinates are acquired every frame and passed into the plant (for example, the grass); the variable is named cposition.
S3, the model vertex position of the grass is named gposition, and the position calculation is performed for each vertex in the shader.
a) First, the vertex positions of the grass model are transformed from model space to world space to obtain the world position of each vertex, so that they can be computed correctly against the world coordinates of the character.
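Step a) is a standard object-space-to-world-space transform. A minimal sketch, assuming a conventional row-major 4×4 model matrix and homogeneous coordinates (the engine-specific API is not named in the text):

```python
def model_to_world(model_matrix, vertex):
    """Transform a model-space vertex (x, y, z) to world space with a
    4x4 row-major model matrix, using homogeneous coordinates."""
    x, y, z = vertex
    v = (x, y, z, 1.0)
    return tuple(sum(model_matrix[r][c] * v[c] for c in range(4))
                 for r in range(3))

# example: a model matrix that places the grass clump at world (10, 0, -4)
M = [[1.0, 0.0, 0.0, 10.0],
     [0.0, 1.0, 0.0,  0.0],
     [0.0, 0.0, 1.0, -4.0],
     [0.0, 0.0, 0.0,  1.0]]

gposition = model_to_world(M, (0.2, 1.0, 0.3))  # world position of one vertex
```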
b) Then the character's range of influence on the grass is computed; the variable is named range. It is obtained by taking the distance between cposition and gposition and computing 1 minus smoothstep of that distance, so that vertices at distances from 0.2 to 0.5 fall within the transition. The calculated curve is shown in fig. 8: range is a data set from 0 to 1, decreasing from 1 to 0 as the distance grows.
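The range falloff in step b) can be sketched as follows, assuming the conventional GLSL/HLSL definition of smoothstep and the 0.2-to-0.5 distance window quoted above (function names are illustrative):

```python
def smoothstep(edge0, edge1, x):
    # conventional GLSL/HLSL smoothstep: clamp, then Hermite interpolation
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def influence_range(cposition_xz, gposition_xz, near=0.2, far=0.5):
    """range = 1 - smoothstep(near, far, distance): 1 for vertices within
    `near` of the character, smoothly falling to 0 beyond `far`."""
    dx = gposition_xz[0] - cposition_xz[0]
    dz = gposition_xz[1] - cposition_xz[1]
    dist = (dx * dx + dz * dz) ** 0.5
    return 1.0 - smoothstep(near, far, dist)
```

Vertices closer than 0.2 get the full effect and vertices beyond 0.5 are untouched, matching the 1-to-0 curve of fig. 8.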
c) Next, the movement calculation for the grass interaction is performed. The y-axis is excluded from this calculation; only the x-axis and z-axis are computed, because the swaying motion should not change the height of the grass.
i. First, a regular random number random is obtained for the x-axis and z-axis of each vertex; random is computed per vertex from its vertex position information.
Then the x-axis and z-axis motion function is calculated using the time variable time, with the result stored in the variable animatsin: animatsin.xz = sin((abs(frac(random.xz + time)) − 0.5) × 2), which yields the motion curve animatsin shown in fig. 9. The random variable is added so that the value at each virtual plant vertex changes differently from that at adjacent vertices while still following the same distance-dependent change curve, which is why it is called a regular random number.
d) Then the direction in which the grass (virtual plant) is pushed when the character collides with it is calculated. Subtracting the character's x-axis and z-axis positions from those of the grass vertex gives, for each vertex, the direction vector on the 2-dimensional plane relative to the character. This vector is stored in the variable dir: dir = gposition.xz − cposition.xz.
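As a sketch of the subtraction in step d), assuming gposition and cposition are (x, y, z) tuples (the tuple layout is illustrative):

```python
def push_direction_xz(gposition, cposition):
    # dir = gposition.xz - cposition.xz: points from the character toward
    # the vertex, so the grass is pushed away from the character
    return (gposition[0] - cposition[0], gposition[2] - cposition[2])
```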
e) Meanwhile, a moved variable is maintained by a logic script. The script judges whether the character's position has changed: if the character is moving, moved is set to 1; if not, moved changes from 1 to 0 within 2 seconds. This judgement is implemented in the script because logic judgements cannot be carried out in the shader, and it is what produces the gradual transition of the grass disturbance from normal to vanished after the character stops moving.
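The script-side moved variable can be sketched as a small state machine, assuming a linear decay over the 2-second window described above (the actual easing used by the logic script is not specified):

```python
class MovedFlag:
    """Per-frame tracker for the `moved` variable: 1 while the character's
    position changes between frames, then decaying from 1 to 0 over
    `decay_seconds` once the character stands still."""
    def __init__(self, decay_seconds=2.0):
        self.decay_seconds = decay_seconds
        self.last_position = None
        self.stop_time = None
        self.value = 0.0

    def update(self, position, now):
        if position != self.last_position:      # character moved this frame
            self.last_position = position
            self.stop_time = None
            self.value = 1.0
        else:                                   # standing still: decay 1 -> 0
            if self.stop_time is None:
                self.stop_time = now
            elapsed = now - self.stop_time
            self.value = max(0.0, 1.0 - elapsed / self.decay_seconds)
        return self.value
```

Each frame the script would call update() with the character's position and the current time, and pass the result to the grass material.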
f) Finally, the previous results are combined and multiplied by the height of the vertex on the grass model, giving the final change result in the variable finiOffset. The formula is finiOffset = (animatsin.xz × moved + dir.xz) × range × gposition.y;
and adding the finiOffset to the vertex position of the original grass model, and obtaining the final output result of each vertex world space coordinate (g position.x + finiOffset.x, g position.y +0, g position.z + finiOffset.z).
S4, the dynamic effect of the virtual character interacting with the grass is obtained. As shown in fig. 12, the virtual character enters the grass clump and pushes it aside. Figs. 13, 14 and 15 show the virtual character entering and interacting with the grass clump, which is disturbed by the character and keeps swaying with a small amplitude. As shown in fig. 13, after the virtual character 1302 enters the grass, the top of grass blade 1304 and the top of grass blade 1306 swing slightly from right to left; as shown in fig. 14, when the virtual character 1402 stands still inside the grass, the tops of grass blades 1404 and 1406 return to their positions before the swing and no longer sway; as shown in fig. 15, when the virtual character 1502 moves again after entering the grass, the tops of grass blades 1504 and 1506 swing slightly from left to right. As shown in fig. 16, when the character stops, the swaying slows to a halt and the grass returns to its original state. The above process is merely an example and is not limiting.
In terms of performance, the embodiment of the invention does not sample any new texture map and adds only a small amount of computation, so it performs well. Secondly, it resolves the originally stiff and abrupt interaction effect, making the whole interaction process natural and comfortable. It also removes the defect that the grass freezes immediately when the character stops.
In an application embodiment, a trail may additionally be rendered into a texture and passed into the grass model, replacing the range calculation in the above embodiment, with the moved-variable calculation removed. This adds one Render Texture sample to the whole process; although some effects become more refined, the cost increases considerably.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
According to another aspect of the embodiments of the present invention, there is also provided a game image processing apparatus for implementing the above-described game image processing method. As shown in fig. 17, the apparatus includes:
an obtaining unit 1702, configured to obtain vertex position information corresponding to each vertex on a virtual plant currently in a coordinate system corresponding to a virtual scene and object position information corresponding to the virtual object currently in the coordinate system, in a process that a virtual plant in a virtual scene displayed by a client interacts with a controlled virtual object;
a comparison unit 1704, configured to sequentially compare the vertex position information and the object position information to determine motion state information of each vertex on the virtual plant during interaction;
a shifting unit 1706, configured to determine a position shift amount of each vertex on the virtual plant based on the motion state information, and sequentially perform coordinate shifting processing on each vertex position information according to the position shift amount;
a display unit 1708, configured to display the image of the virtual plant after the coordinate offset processing in the client.
In the embodiment of the present invention, the virtual scene may include, but is not limited to, scenes in which virtual characters interact with virtual plants in various online or stand-alone games, and is not limited herein. Taking a grass clump as an example of the virtual plant, the vertex position information, in the coordinate system corresponding to the virtual scene, of each vertex of each blade of grass in the clump that the virtual character touches can be acquired; the coordinate system here may include a time coordinate system.
In the embodiment of the present invention, the vertex position information and the object position information are sequentially compared to determine the motion state information of each vertex on the virtual plant during the interaction; that is, the vertex position information of the plant and the position information of the object are updated and compared in real time, so that whether the virtual plant is moving or static during its interaction with the object is obtained in real time.
In the embodiment of the present invention, the position offset of each vertex on the virtual plant is determined based on the motion state information: when a vertex is in a motion state, a motion offset is determined, and when it is in a static state, no offset occurs. After the offsets are determined, coordinate offset processing is performed in sequence on the vertex position information of each offset vertex according to the position offset amount. The coordinate offset here may include a small-amplitude sway in the three-dimensional spatial coordinate system.
In the embodiment of the present invention, the client may be a game client used by the current user, and after performing coordinate offset on the virtual plant, an image of the virtual plant in which the coordinate offset occurs is displayed in the client of the current user.
In an embodiment, the comparing unit 1704 further includes:
a comparison module, configured to compare the vertex position information and the object position information in sequence, so as to determine an interaction distance and an interaction direction vector between each vertex and the virtual object;
a first determining module, configured to determine a motion vector that matches each vertex based on vertex random vectors that are generated for the vertex position information respectively;
the first acquisition module is used for acquiring script change indicating variables matched with each vertex according to the logic scripts corresponding to the virtual scenes;
a second determining module, configured to determine, as the motion state information of each vertex on the virtual plant, a distance data set based on the interaction distances corresponding to each of N vertices and a direction vector set based on the interaction direction vectors corresponding to each of the N vertices, the motion vectors corresponding to each of the N vertices, and the script change indicating variables corresponding to each of the N vertices, where N is the number of vertices on the virtual plant.
In one embodiment, the alignment module comprises: a first extraction subunit, configured to, in each piece of vertex position information, sequentially take a vertex corresponding to each piece of vertex position information as a current vertex, perform the following operations: extracting a first vertex coordinate of the current vertex in a first direction and a second vertex coordinate of the current vertex in a second direction in the coordinate system from the vertex position information of the current vertex;
a second extraction subunit configured to extract, from the object position information, a first object coordinate of the virtual object in the first direction and a second object coordinate of the virtual object in the second direction in the coordinate system;
and a comparison subunit, configured to compare the first vertex coordinates of the current vertex with the first object coordinates of the virtual object, and compare the second vertex coordinates of the current vertex with the second object coordinates of the virtual object, respectively, to obtain the interaction distance and the interaction direction vector between the current vertex and the virtual object.
In one embodiment, the first determining module includes: a third obtaining subunit, configured to, in each of the vertex position information, sequentially take a vertex corresponding to each vertex position information as a current vertex, and perform the following operations: acquiring the vertex random vector generated for the vertex position information of the current vertex;
a first determining subunit, configured to determine the motion vector matched with the current vertex according to the vertex random vector and the current time vector of the current vertex.
In an embodiment, the first determining subunit includes: an obtaining submodule, configured to obtain a first weighted sum result between the vertex random vector of the current vertex and the current time vector;
a processing submodule, configured to perform function processing on the first weighted sum result to obtain the motion vector matched with the current vertex, where the function processing includes: fractional-part (frac) processing, absolute-value processing, and trigonometric-function processing.
In one embodiment, the obtaining module includes: a first configuration subunit, configured to, in each of the vertex position information, sequentially take a vertex corresponding to each vertex position information as a current vertex, perform the following operations: under the condition that the logic script indicates that the current image frame where the current vertex is located has a position movement change compared with the previous image frame, configuring a script change indicating variable matched with the current vertex to be 1, wherein the script change indicating variable automatically changes from 1 to 0 in a target time period;
and a second configuration subunit, configured to configure the script change indication variable matched with the current vertex to be 0 when the logic script indicates that the current image frame in which the current vertex is located does not have a position movement change compared with the previous image frame.
In one embodiment, the offset unit 1706 includes: a second obtaining module, configured to, in each piece of vertex position information, sequentially take a vertex corresponding to each piece of vertex position information as a current vertex, and perform the following operations: obtaining a first product result between the motion vector of the current vertex and the script change indicating variable of the current vertex;
a third obtaining module, configured to obtain a second weighted sum result between the first product result and the interaction direction vector;
a fourth obtaining module, configured to obtain a third product result of the second weighted sum result, the distance data set, and a coordinate direction component of the current vertex in a third direction, as the position offset of the current vertex.
In an embodiment, the shifting unit 1706 further includes an adjusting module, configured to respectively adjust vertex coordinates in each direction included in the vertex position information of the current vertex according to the position shifting amount of the current vertex, so as to complete coordinate shifting processing of the current vertex.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device for implementing the game image processing method, where the electronic device may be the terminal device or the server shown in fig. 1. The present embodiment takes the electronic device as a terminal device as an example for explanation. As shown in fig. 18, the electronic device comprises a memory 1802 having stored therein a computer program, and a processor 1804 arranged to execute the steps of any of the above-described method embodiments by means of the computer program.
Optionally, in this embodiment, the electronic device may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, acquiring vertex position information corresponding to each vertex on the virtual plant in the coordinate system corresponding to the virtual scene and object position information corresponding to the virtual object in the coordinate system when the virtual plant in the virtual scene displayed by the client interacts with the controlled virtual object;
s2, sequentially comparing the vertex position information and the object position information to determine the motion state information of each vertex on the virtual plant during interaction;
s3, determining the position offset of each vertex on the virtual plant based on the motion state information, and sequentially performing coordinate offset processing on each vertex position information according to the position offset;
and S4, displaying the image of the virtual plant after the coordinate offset processing in the client.
Alternatively, it can be understood by those skilled in the art that the structure shown in fig. 18 is only an illustration, and the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 18 does not limit the structure of the electronic device; for example, the electronic device may include more or fewer components (e.g., network interfaces) than shown in fig. 18, or have a different configuration from that shown in fig. 18.
The memory 1802 can be used for storing software programs and modules, such as program instructions/modules corresponding to the game image processing method and apparatus in the embodiments of the present invention, and the processor 1804 executes various functional applications and data processing by running the software programs and modules stored in the memory 1802, that is, implementing the game image processing method described above. The memory 1802 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1802 can further include memory located remotely from the processor 1804, which can be connected to the terminals over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 1802 may be specifically, but not limited to, configured to store information, such as vertex position information corresponding to each vertex on the virtual plant currently in a coordinate system corresponding to the virtual scene, and an image of the virtual plant after coordinate offset processing. As an example, as shown in fig. 18, the memory 1802 may include, but is not limited to, an obtaining unit 1702, a comparing unit 1704, a shifting unit 1706 and a display unit 1708 in the game image processing apparatus. In addition, the game image processing device may further include, but is not limited to, other module units in the game image processing device, which is not described in detail in this example.
Optionally, the transmitting device 1806 is configured to receive or transmit data via a network. Examples of the network may include a wired network and a wireless network. In one example, the transmission device 1806 includes a Network adapter (NIC) that can be connected to a router via a Network cable and other Network devices to communicate with the internet or a local area Network. In one example, the transmission device 1806 is a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In addition, the electronic device further includes: a display 1808, configured to display an image of the virtual plant after coordinate offset processing; and a connection bus 1810 for connecting the respective module components in the above-described electronic apparatus.
In other embodiments, the terminal device or the server may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by connecting a plurality of nodes through a network communication. Nodes can form a Peer-To-Peer (P2P, Peer To Peer) network, and any type of computing device, such as a server, a terminal, and other electronic devices, can become a node in the blockchain system by joining the Peer-To-Peer network.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the game image processing method. Wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the above-mentioned computer-readable storage medium may be configured to store a computer program for executing the steps of:
s1, acquiring vertex position information corresponding to each vertex on the virtual plant in the coordinate system corresponding to the virtual scene and object position information corresponding to the virtual object in the coordinate system when the virtual plant in the virtual scene displayed by the client interacts with the controlled virtual object;
s2, sequentially comparing the vertex position information and the object position information to determine the motion state information of each vertex on the virtual plant during interaction;
s3, determining the position offset of each vertex on the virtual plant based on the motion state information, and sequentially performing coordinate offset processing on each vertex position information according to the position offset;
and S4, displaying the image of the virtual plant after the coordinate offset processing in the client.
Alternatively, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (11)

1. A game image processing method, comprising:
in the process that a virtual plant in a virtual scene displayed by a client interacts with a controlled virtual object, acquiring vertex position information currently corresponding to each vertex of the virtual plant in a coordinate system corresponding to the virtual scene, and object position information currently corresponding to the virtual object in the coordinate system;
sequentially comparing the vertex position information with the object position information to determine motion state information of each vertex on the virtual plant during interaction;
determining the position offset of each vertex on the virtual plant based on the motion state information, and sequentially carrying out coordinate offset processing on the position information of each vertex according to the position offset;
and displaying the image of the virtual plant after coordinate offset processing in the client.
2. The method according to claim 1, wherein the sequentially comparing the vertex position information and the object position information to determine the motion state information of each vertex on the virtual plant during interaction comprises:
sequentially comparing the vertex position information with the object position information to determine the interaction distance and the interaction direction vector between each vertex and the virtual object;
determining, for each vertex, a matched motion vector based on a vertex random vector generated for the vertex position information of that vertex;
acquiring script change indicating variables matched with each vertex according to the logic scripts corresponding to the virtual scene;
and determining, as the motion state information of each vertex on the virtual plant: a distance data set formed by the interaction distances corresponding to the N vertices, a direction vector set formed by the interaction direction vectors corresponding to the N vertices, the motion vectors corresponding to the N vertices, and the script change indicating variables corresponding to the N vertices, wherein N is the number of vertices on the virtual plant.
3. The method of claim 2, wherein the sequentially comparing the vertex position information and the object position information to determine the interaction distance and the interaction direction vector between each vertex and the virtual object comprises:
in each piece of vertex position information, sequentially taking the vertex corresponding to each piece of vertex position information as a current vertex, and executing the following operations:
extracting a first vertex coordinate of the current vertex in a first direction and a second vertex coordinate of the current vertex in a second direction under the coordinate system from the vertex position information of the current vertex;
extracting first object coordinates of the virtual object in the first direction and second object coordinates of the virtual object in the second direction in the coordinate system from the object position information;
and respectively comparing the first vertex coordinate of the current vertex with the first object coordinate of the virtual object, and comparing the second vertex coordinate of the current vertex with the second object coordinate of the virtual object to obtain the interaction distance and the interaction direction vector between the current vertex and the virtual object.
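Claim 3's coordinate comparison can be read as computing a planar distance and a unit direction on two horizontal axes. A minimal sketch, assuming the first and second directions are the x and z axes of the scene's coordinate system (an assumption; the claim does not name the axes):

```python
import math

def interaction(vertex_xz, object_xz):
    """Compare a vertex with the virtual object coordinate-by-coordinate and
    return (interaction distance, interaction direction unit vector)."""
    dx = vertex_xz[0] - object_xz[0]    # first-direction comparison
    dz = vertex_xz[1] - object_xz[1]    # second-direction comparison
    dist = math.hypot(dx, dz)
    if dist == 0.0:
        return 0.0, (0.0, 0.0)          # vertex coincides with the object
    return dist, (dx / dist, dz / dist)
```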
4. The method of claim 2, wherein determining the motion vector for each vertex based on the vertex random vectors generated for the respective vertex position information comprises:
in each piece of vertex position information, sequentially taking the vertex corresponding to each piece of vertex position information as a current vertex, and executing the following operations:
obtaining the vertex random vector generated for the vertex position information of the current vertex;
and determining the motion vector matched with the current vertex according to the vertex random vector and the current time vector of the current vertex.
5. The method of claim 4, wherein the determining the motion vector matched with the current vertex according to the vertex random vector and the current time vector of the current vertex comprises:
obtaining a first weighted sum result between the vertex random vector of the current vertex and the current time vector;
performing function processing on the first weighted sum result to obtain the motion vector matched with the current vertex, wherein the function processing includes: taking a fractional part, taking an absolute value, and applying a trigonometric function.
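Claims 4–5 combine a per-vertex random vector with a time vector and then apply fractional-part, absolute-value, and trigonometric processing. A one-dimensional sketch; the weights, the processing order, and the 2π scaling are assumptions made here for illustration:

```python
import math

def motion_value(rand_v, time_v, w_rand=1.0, w_time=0.5):
    # first weighted sum between the vertex random vector and the time vector
    s = w_rand * rand_v + w_time * time_v
    # function processing: fractional part, absolute value, trigonometric function
    frac = abs(s - math.floor(s))          # fractional part, kept in [0, 1)
    return math.sin(2.0 * math.pi * frac)  # bounded, periodic sway value
```

Because each vertex carries a different random vector, vertices reach different phases of the sine wave at the same time, which keeps the plant's sway from looking uniform.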
6. The method according to claim 2, wherein the obtaining the script change indicating variable matched with each vertex according to the logic script corresponding to the virtual scene comprises:
in each piece of vertex position information, sequentially taking the vertex corresponding to each piece of vertex position information as a current vertex, and executing the following operations:
under the condition that the logic script indicates that the current image frame where the current vertex is located has moved in position compared with the previous image frame, configuring the script change indicating variable matched with the current vertex to be 1, wherein the script change indicating variable automatically changes from 1 to 0 within a target time period;
and under the condition that the logic script indicates that the current image frame where the current vertex is located has not moved in position compared with the previous image frame, configuring the script change indicating variable matched with the current vertex to be 0.
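Claim 6's script change indicating variable acts as a trigger: set to 1 on movement, falling back to 0 within a target time period. The linear decay below is an assumption made for illustration; the claim only requires that the variable returns to 0 within the period:

```python
def script_change_variable(moved_this_frame, time_since_move, target_period=0.5):
    """1 when the current frame has moved relative to the previous frame,
    then decays back to 0 within the target time period (linearly, here)."""
    if moved_this_frame:
        return 1.0
    if time_since_move >= target_period:
        return 0.0
    return 1.0 - time_since_move / target_period
```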
7. The method of claim 2, wherein determining the position offset of each vertex on the virtual plant based on the motion state information comprises:
in each piece of vertex position information, sequentially taking the vertex corresponding to each piece of vertex position information as a current vertex, and executing the following operations:
obtaining a first product result between the motion vector of the current vertex and the script change indicating variable of the current vertex;
obtaining a second weighted sum result between the first product result and the interaction direction vector;
and taking, as the position offset of the current vertex, a third product result among the second weighted sum result, the distance data set, and the coordinate direction component of the current vertex in a third direction.
8. The method according to claim 7, wherein the sequentially performing coordinate shift processing on the vertex position information according to the position shift amount comprises:
and respectively adjusting vertex coordinates in each direction contained in the vertex position information of the current vertex according to the position offset of the current vertex so as to finish the coordinate offset processing of the current vertex.
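Claims 7–8 chain three operations per vertex: a product with the script change indicating variable, a weighted sum with the interaction direction vector, and a final product with a distance-based term and the vertex's third-direction (here, vertical) component, after which the offset is added to the vertex coordinates. The weights and the linear distance falloff below are assumptions for illustration:

```python
def position_offset(motion, script_var, direction, distance, height,
                    w_motion=1.0, w_dir=1.0, radius=1.5):
    # first product result: motion value * script change indicating variable
    p1 = motion * script_var
    # second weighted sum result, taken per horizontal component
    sx = w_motion * p1 + w_dir * direction[0]
    sz = w_motion * p1 + w_dir * direction[1]
    # third product result: scale by a distance falloff and the vertical component
    falloff = max(0.0, 1.0 - distance / radius)
    return (sx * falloff * height, sz * falloff * height)

def apply_offset(vertex, offset):
    """Claim 8: adjust the vertex coordinates by the position offset."""
    x, y, z = vertex
    return (x + offset[0], y, z + offset[1])
```

Scaling by the vertical component means the plant's root vertices (height 0) never move, which is why the coordinate offset processing can be applied uniformly to every vertex.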
9. A game image processing apparatus, comprising:
the system comprises an acquisition unit, a processing unit and a control unit, wherein the acquisition unit is used for acquiring vertex position information which corresponds to each vertex on a virtual plant under a coordinate system corresponding to a virtual scene at present and object position information which corresponds to the virtual object under the coordinate system at present in the process that the virtual plant in the virtual scene displayed by a client interacts with a controlled virtual object;
the comparison unit is used for sequentially comparing the vertex position information with the object position information to determine the motion state information of each vertex on the virtual plant during interaction;
an offset unit, configured to determine the position offset of each vertex on the virtual plant based on the motion state information, and sequentially perform coordinate offset processing on the position information of each vertex according to the position offset;
and the display unit is used for displaying the image of the virtual plant after coordinate offset processing in the client.
10. A computer-readable storage medium, comprising a stored program, wherein the program when executed performs the method of any one of claims 1 to 8.
11. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 8 by means of the computer program.
CN202110229619.XA 2021-03-02 2021-03-02 Game image processing method and device, storage medium and electronic equipment Active CN112827169B (en)


Publications (2)

Publication Number Publication Date
CN112827169A 2021-05-25
CN112827169B 2022-11-08


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113426106A (en) * 2021-06-24 2021-09-24 网易(杭州)网络有限公司 Display control method and device in game, electronic equipment and storage medium
CN113838170A (en) * 2021-08-18 2021-12-24 网易(杭州)网络有限公司 Target virtual object processing method and device, storage medium and electronic device
WO2022111003A1 (en) * 2020-11-30 2022-06-02 成都完美时空网络技术有限公司 Game image processing method, apparatus, program, and readable medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110930486A * 2019-11-28 2020-03-27 网易(杭州)网络有限公司 Rendering method and device of virtual grass in game and electronic equipment




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (country: HK; legal event code: DE; document number: 40043867)
GR01 Patent grant