CN111282269A - Game running method and device, electronic equipment and storage medium - Google Patents

Game running method and device, electronic equipment and storage medium

Info

Publication number
CN111282269A
CN111282269A
Authority
CN
China
Prior art keywords
dimensional
target object
game
point
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010097096.3A
Other languages
Chinese (zh)
Inventor
刘小龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huya Technology Co Ltd
Original Assignee
Guangzhou Huya Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huya Technology Co Ltd filed Critical Guangzhou Huya Technology Co Ltd
Priority to CN202010097096.3A priority Critical patent/CN111282269A/en
Publication of CN111282269A publication Critical patent/CN111282269A/en
Pending legal-status Critical Current

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55: Controlling game characters or game objects based on the game progress

Abstract

The present application provides a game running method and apparatus, an electronic device, and a storage medium, relating to the technical field of game running. First, position update information is acquired for each three-dimensional point of a target object in a first game video frame, where the plurality of pieces of position update information include at least two different pieces. Second, the three-dimensional position coordinates of each three-dimensional point are updated based on that point's position update information, and a new target object is generated from the updated three-dimensional position coordinates. A second game video frame is then generated based on the new target object. This method addresses the problem that existing game running techniques place high performance requirements on the device running the game.

Description

Game running method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of game running technologies, and in particular, to a game running method and apparatus, an electronic device, and a storage medium.
Background
In current games, a corresponding game engine needs to be configured in order to present a three-dimensional stereoscopic effect. The inventor has found through research that when a game runs on a game engine, the engine consumes substantial resources (such as processor computing resources and memory storage resources), so the device running the game engine must meet high performance requirements.
Disclosure of Invention
In view of the above, an object of the present application is to provide a game running method and apparatus, an electronic device, and a storage medium, so as to solve the problem that existing game running technology places high performance requirements on the device running the game.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
a game running method, comprising:
acquiring position update information of each three-dimensional point of a target object in a first game video frame, wherein the plurality of pieces of position update information include at least two different pieces of position update information;
updating the three-dimensional position coordinates of each three-dimensional point based on the position updating information of the three-dimensional point, and generating a new target object based on the updated three-dimensional position coordinates;
a second game video frame is generated based on the new target object.
In a preferred option of the embodiment of the present application, in the game running method, the step of obtaining the position update information of each three-dimensional point of the target object in the first game video frame includes:
acquiring a trigger instruction of a target object, wherein the target object is a planar game object in a first game video frame;
generating location update information for each three-dimensional point based on the triggering instruction, wherein the target object includes a plurality of the three-dimensional points.
In a preferred option of the embodiment of the present application, in the game running method, the step of generating the position update information of each three-dimensional point based on the trigger instruction includes:
determining a three-dimensional target point on the target object based on the trigger instruction;
and generating position updating information of the three-dimensional point based on the position relation between the three-dimensional point and the three-dimensional target point aiming at each three-dimensional point.
In a preferred option of the embodiment of the present application, in the game running method, the step of generating, for each of the three-dimensional points, position update information of the three-dimensional point based on a positional relationship between the three-dimensional point and the three-dimensional target point includes:
acquiring the position relation between each three-dimensional point and the three-dimensional target point;
and generating position updating information of the three-dimensional point based on a preset fluctuation function and a position relation corresponding to the three-dimensional point aiming at each three-dimensional point.
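The patent does not specify the form of the preset fluctuation function. As a non-limiting illustration, the following Python sketch uses a damped sine ripple, so that three-dimensional points at different distances from the three-dimensional target point receive different position update information; the function name and the amplitude, wavelength, and decay parameters are all assumptions made for illustration.

```python
import math

def position_update_info(point, target, amplitude=1.0, wavelength=4.0, decay=0.3):
    """Generate a depth offset for one 3D point from its positional
    relation (here: planar distance) to the three-dimensional target point."""
    dx = point[0] - target[0]
    dy = point[1] - target[1]
    d = math.hypot(dx, dy)
    # Damped sine ripple: points at different distances get different offsets,
    # so the plurality of update information includes at least two different pieces.
    return amplitude * math.exp(-decay * d) * math.sin(2 * math.pi * d / wavelength)
```

Any other fluctuation function with the same signature (distance in, offset out) would fit the step equally well.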
In a preferred option of the embodiment of the present application, in the game running method, the step of determining a three-dimensional target point on the target object based on the trigger instruction includes:
acquiring motion information of a motion object based on the trigger instruction, wherein the motion object and the target object are different game objects in the first game video frame, and the motion object is in contact with the target object in the first game video frame;
determining a plurality of three-dimensional target points on the target object based on the motion information, wherein the first three-dimensional target point determined is a point where the target object is in contact with the motion object;
and a plurality of pieces of position update information are determined for each three-dimensional point, generated respectively from the plurality of three-dimensional target points and used to perform the update processing a plurality of times, where each update processing pass operates on the three-dimensional position coordinates produced by the previous pass, so as to generate a plurality of second game video frames.
In a preferred option of the embodiment of the present application, in the game running method, the step of obtaining the trigger instruction of the target object includes:
acquiring the relative position relation between a target object and a moving object, wherein the moving object is a game object which moves along the direction close to the target object in the first game video frame;
and generating a trigger instruction of the target object when the relative position relation meets a preset condition.
In a preferred option of the embodiment of the present application, in the game running method, the step of updating the three-dimensional position coordinates of each three-dimensional point based on the position update information of the three-dimensional point includes:
acquiring a depth position coordinate included in the three-dimensional position coordinate of each three-dimensional point;
and updating the depth position coordinates of the three-dimensional points based on the position updating information of each three-dimensional point to obtain new depth position coordinates of each three-dimensional point.
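Updating only the depth coordinate while leaving the lateral and longitudinal coordinates unchanged can be sketched as follows; `update_depth` is an illustrative name, and points are assumed to be (x, y, z) tuples with z as the depth position coordinate.

```python
def update_depth(points, updates):
    """Apply per-point position update information to the depth (z)
    coordinate only; lateral (x) and longitudinal (y) stay unchanged."""
    return [(x, y, z + dz) for (x, y, z), dz in zip(points, updates)]
```

Because only z changes, the mesh keeps its planar layout while gaining per-point relief, which is what produces the stereoscopic effect between the two video frames.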
In a preferred option of the embodiment of the present application, in the game running method, the target object is a three-dimensional mesh structure, and the step of generating a new target object based on the updated three-dimensional position coordinates includes:
connecting the three-dimensional points corresponding to the updated three-dimensional position coordinates;
and obtaining a new three-dimensional mesh structure based on the plurality of three-dimensional points and the connecting line obtained by performing the connecting processing on the three-dimensional points, and taking the new three-dimensional mesh structure as a new target object.
In a preferred option of the embodiment of the present application, in the game running method, the step of performing connection processing on the three-dimensional points corresponding to the plurality of updated three-dimensional position coordinates includes:
determining a first direction and a second direction, wherein the first direction is different from the second direction;
and connecting the three-dimensional points corresponding to the updated three-dimensional position coordinates along the first direction and the second direction respectively.
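Connecting the updated points along two different directions can be sketched for a regular grid as follows, where the first direction runs along a row and the second along a column; the row-major grid layout and the function name are assumptions, since the patent does not fix a concrete mesh representation.

```python
def connect_grid(points, cols):
    """Connect grid points along two directions (rows and columns) to form
    a quad mesh; returns index pairs, one per connecting line. Only the
    point count is used here, so any (x, y, z) point list works."""
    rows = len(points) // cols
    edges = []
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            if c + 1 < cols:        # first direction: along a row
                edges.append((i, i + 1))
            if r + 1 < rows:        # second direction: along a column
                edges.append((i, i + cols))
    return edges
```

The points plus these connecting lines together form the new three-dimensional mesh structure, i.e. the new target object.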
In a preferred option of the embodiment of the present application, in the game running method, before the step of obtaining the position update information of each three-dimensional point of the target object in the first game video frame, the game running method further includes:
generating a two-dimensional mesh structure, and determining a transverse position coordinate and a longitudinal position coordinate of each two-dimensional point included in the two-dimensional mesh structure, wherein the two-dimensional mesh structure comprises a plurality of two-dimensional points;
generating a depth position coordinate for each two-dimensional point to obtain a plurality of three-dimensional points comprising a transverse position coordinate, a longitudinal position coordinate and a depth position coordinate, wherein the plurality of depth position coordinates at least comprises two different depth position coordinates.
In a preferred option of the embodiment of the present application, in the game running method, the step of generating the depth position coordinates for each of the two-dimensional points includes:
acquiring the transverse direction of the transverse position coordinate and/or the longitudinal direction of the longitudinal position coordinate;
determining different depth position coordinates for the plurality of two-dimensional points along the lateral direction and/or the longitudinal direction, respectively.
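Generating the two-dimensional mesh and assigning each point a depth coordinate that varies along the lateral direction might look like the following sketch; the uniform grid spacing and the linear depth profile are illustrative choices, as the patent only requires that the plurality of depth coordinates include at least two different values.

```python
def build_mesh(cols, rows, depth_step=0.1):
    """Generate a grid of 2D points (lateral x, longitudinal y), then
    assign each a depth coordinate varying along the lateral direction,
    yielding 3D points (x, y, z)."""
    points = []
    for r in range(rows):
        for c in range(cols):
            # Depth differs per column, so at least two depths differ.
            points.append((float(c), float(r), c * depth_step))
    return points
```

A profile varying along the longitudinal direction, or along both, would satisfy the step just as well.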
An embodiment of the present application further provides a game running apparatus, including:
the information acquisition module is used for acquiring position update information of each three-dimensional point of a target object in a first game video frame, wherein the plurality of pieces of position update information include at least two different pieces of position update information;
the updating processing module is used for updating the three-dimensional position coordinates of the three-dimensional points based on the position updating information of each three-dimensional point and generating a new target object based on the updated three-dimensional position coordinates;
and the video frame generation module is used for generating a second game video frame based on the new target object.
On the basis, an embodiment of the present application further provides an electronic device, including:
a memory for storing a computer program;
and the processor is connected with the memory and is used for executing the computer program so as to realize the game running method.
On the basis of the above, the embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed, implements the game running method described above.
According to the game running method and apparatus, the electronic device, and the storage medium provided above, position update information is acquired for each three-dimensional point of the target object, and the target object is updated based on that information (that is, a new target object is generated), so that the first game video frame of the target game is updated to the second game video frame. Because the plurality of pieces of position update information include at least two different pieces, at least two three-dimensional points undergo different three-dimensional changes between the first and second game video frames, presenting a three-dimensional stereoscopic effect. Therefore, while ensuring that the running target game presents a three-dimensional effect, its running no longer depends on a game engine and thus no longer requires a high-performance device. This solves the problem that existing game running technology places high performance requirements on the device running the game, reduces the restrictions on where the target game can be applied, broadens its range of application, and facilitates its popularization, giving it high practical value.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 2 is a schematic flowchart of steps included in a game running method according to an embodiment of the present application.
Fig. 3 is a schematic flowchart of step S110 in fig. 2.
Fig. 4 is a schematic flowchart of step S111 in fig. 3.
Fig. 5 is a schematic flowchart of step S113 in fig. 3.
Fig. 6 is a schematic flowchart of step S120 in fig. 2.
Fig. 7 is a schematic diagram illustrating an effect of performing an update process according to an embodiment of the present application.
Fig. 8 is a schematic flowchart of step S130 in fig. 2.
Fig. 9 is a schematic diagram illustrating an effect of performing a connection process according to an embodiment of the present application.
Fig. 10 is a schematic diagram illustrating another effect of performing a connection process according to an embodiment of the present application.
Fig. 11 is a flowchart illustrating step S131 in fig. 8.
Fig. 12 is a schematic flowchart of other steps included in a game running method according to an embodiment of the present application.
Fig. 13 is a flowchart illustrating step S160 in fig. 12.
Fig. 14 is a block diagram schematically illustrating each functional module included in the game execution apparatus according to the embodiment of the present application.
Icon: 10-an electronic device; 12-a memory; 14-a processor; 100-a game play device; 110-an information acquisition module; 120-update processing module; 130-video frame generation module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
As shown in fig. 1, an embodiment of the present application provides an electronic device 10, which may include a memory 12, a processor 14, and a game execution apparatus 100.
Wherein the memory 12 and the processor 14 are electrically connected directly or indirectly to realize data transmission or interaction. For example, they may be electrically connected to each other via one or more communication buses or signal lines. The game execution apparatus 100 includes at least one software function module, which may be a game client, and may be stored in the memory 12 in the form of software or firmware (firmware). The processor 14 is configured to execute an executable computer program stored in the memory 12, for example, a software functional module and a computer program included in the game running apparatus 100, so as to implement the game running method provided by the embodiment of the present application.
Alternatively, the Memory 12 may be, but is not limited to, a Random Access Memory (RAM), a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
Also, the Processor 14 may be a general-purpose Processor including a Central Processing Unit (CPU), a Network Processor (NP), a System on chip (SoC), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components.
It will be appreciated that the configuration shown in fig. 1 is merely illustrative and that the electronic device 10 may also include more or fewer components than shown in fig. 1, or may have a different configuration than shown in fig. 1, for example, and may also include a communication unit for interacting information with other devices, such as a game server.
The electronic device 10 may include, but is not limited to, a mobile phone, a tablet computer, a computer, and other terminal devices with data processing capability.
With reference to fig. 2, an embodiment of the present application further provides a game running method applicable to the electronic device 10, where the method steps defined by the flow of the game running method can be implemented by the electronic device 10. The specific flow shown in fig. 2 is described in detail below.
Step S110, position update information of each three-dimensional point of the target object in the first game video frame is acquired.
In this embodiment, during the running process of the target game, the position update information of each three-dimensional point included in the target object in the first game video frame (e.g., the current game video frame) of the target game may be acquired first.
The target object includes a plurality of three-dimensional points, so a plurality of pieces of position update information are obtained, and the plurality of pieces include at least two different pieces of position update information.
And step S120, updating the three-dimensional position coordinates of the three-dimensional points based on the position updating information of each three-dimensional point, and generating a new target object based on the updated three-dimensional position coordinates.
In this embodiment, after the plurality of pieces of position update information are acquired based on step S110, for each of the plurality of three-dimensional points, the three-dimensional position coordinates of the three-dimensional point may be updated based on the position update information corresponding to the three-dimensional point. In this way, a plurality of updated three-dimensional position coordinates can be obtained.
Then, a new target object is generated based on the obtained plurality of updated three-dimensional position coordinates. Since the plurality of pieces of position update information include at least two different pieces, after the update processing, at least two of the three-dimensional position coordinates are updated (changed) differently.
Step S130, generating a second game video frame based on the new target object.
In this embodiment, after the new target object is generated based on step S120, the target object in the first game video frame may be subjected to update processing based on the new target object to generate the second game video frame.
Because the plurality of pieces of position update information include at least two different pieces, at least two three-dimensional points undergo different three-dimensional changes between the first game video frame and the second game video frame, presenting a three-dimensional stereoscopic effect.
Based on the method, on the basis of ensuring that the three-dimensional effect can be presented in the running process of the target game, the running of the target game can be enabled to be independent of a game engine, so that the running of the target game is enabled to be independent of running equipment with higher performance, the problem that the performance requirement of the existing game running technology on the running equipment of the game is higher is solved, the application limit of the target game is reduced, and the application range of the target game can be improved.
It should be noted that, in step S110, a specific manner of obtaining the location update information is not limited, and may be selected according to actual application requirements.
For example, in an alternative example, the position update information may be generated directly from the game user's operations in the target game, according to the user's requirements (e.g., when the game user directly acts on the target object).
For another example, in another alternative example, based on other requirements of the game (e.g. the target object is not directly changed based on the game operation of the game user), in conjunction with fig. 3, step S110 may include step S111 and step S113 to generate the location update information, which is described below.
Step S111, a trigger instruction of the target object is acquired.
In this embodiment, the trigger instruction of the target object may be obtained first.
The target object here is a planar game object in the first game video frame. In other words, before being updated, the target object may have a planar structure: its plurality of three-dimensional points all lie in one plane.
And step S113, generating position updating information of each three-dimensional point based on the triggering instruction.
In this embodiment, after the trigger instruction is acquired based on step S111, the position update information of each of the plurality of three-dimensional points included in the target object may be generated based on the trigger instruction. In this manner, a plurality of location update information may be obtained.
Optionally, the specific manner of obtaining the trigger instruction based on step S111 is not limited, and may be selected according to the actual application requirement.
For example, in an alternative example, the trigger instruction may be configured in advance, for example, a trigger instruction is generated every time a preset time period elapses, and then the location update information of each three-dimensional point is generated based on the trigger instruction.
For another example, in another alternative example, in order to improve the richness of the target game, a moving object may be further included in the target game. Based on this, the trigger instruction may be generated based on the moving object and the target object, and therefore, in conjunction with fig. 4, step S111 may include step S111a and step S111b to generate the trigger instruction, which is described in detail below.
In step S111a, the relative positional relationship between the target object and the moving object is acquired.
In this embodiment, during the running process of the target game (including before the first game video frame is generated and at the time of generating the first game video frame), the relative positional relationship between the target object and the moving object may be acquired in real time.
Wherein, the moving object is a game object moving in a direction close to the target object in the first game video frame. That is, during the above-described operation, the moving object moves in a direction close to the target object.
Step S111b, when the relative position relationship satisfies a preset condition, generating a trigger instruction of the target object.
In this embodiment, after the relative positional relationship is acquired based on step S111a, it may be determined whether the relative positional relationship satisfies the preset condition, and when the relative positional relationship satisfies the preset condition, a trigger instruction of the target object may be generated.
It is understood that the specific manner of executing step S111a to obtain the relative position relationship is not limited, and may be selected according to the actual application requirement, for example, different selections may be made based on the specific content of the relative position relationship.
For example, in an alternative example, the relative positional relationship may be the distance between the target object and the moving object. In this way, the current distance between the two objects may be calculated from their initial distance and the distance the moving object has moved, thereby obtaining the relative positional relationship between the target object and the moving object.
Likewise, the specific manner of executing step S111b to determine whether the relative positional relationship satisfies the preset condition is not limited; for example, different choices may be made depending on the specific content of the relative positional relationship.
For example, in an alternative example, in combination with the foregoing example, the relative position relationship may be a distance between the target object and the moving object, and correspondingly, the preset condition may be that the distance is 0.
In this way, when the distance between the target object and the moving object is 0, it can be determined that the relative positional relationship satisfies the preset condition. That is, when the moving object moves to a position in contact with the target object, a trigger instruction of the target object may be generated.
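The contact test described above can be sketched as a per-frame check; the frame advance, the float tolerance standing in for "distance is 0", and all names are illustrative assumptions rather than part of the patent.

```python
import math

def check_trigger(target_pos, mover_pos, mover_velocity, dt=1.0 / 60):
    """Advance the moving object one frame toward the target object and
    emit a trigger instruction when the distance to the target reaches 0."""
    new_pos = tuple(p + v * dt for p, v in zip(mover_pos, mover_velocity))
    dist = math.dist(new_pos, target_pos)
    triggered = dist <= 1e-9  # "distance is 0", up to float tolerance
    return new_pos, triggered
```

When `triggered` is true, the position update information for the target object's points would then be generated, as in step S113.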
Therefore, in a specific use example, the first game video frame is the game video frame at the moment the moving object moves into contact with the target object. That is, when the moving object contacts the target object, the first game video frame needs to be updated to a second game video frame to present the visual effect of the target object being updated to the new target object, thereby realizing dynamic display of the target object.
Optionally, the specific manner of generating the location update information based on the trigger instruction based on step S113 is not limited, and may be selected according to actual application requirements.
For example, in an alternative example, in order to reduce the amount of calculation required for the device to perform step S113 and improve its efficiency, a plurality of pieces of position update information configured in advance may be acquired directly upon receiving the trigger instruction.
For another example, in another alternative example, in order to ensure that the generated position update information can update the three-dimensional position coordinates with higher precision, in conjunction with fig. 5, step S113 may include step S113a and step S113b to generate the position update information, as described in detail below.
Step S113a, determining a three-dimensional target point on the target object based on the trigger instruction.
In this embodiment, after the trigger instruction is acquired based on step S111, a three-dimensional target point may be determined on the target object based on the trigger instruction.
That is, the trigger instruction is used to trigger execution of the operation of determining the three-dimensional target point.
Step S113b is to generate, for each of the three-dimensional points, position update information of the three-dimensional point based on the positional relationship between the three-dimensional point and the three-dimensional target point.
In the present embodiment, after the three-dimensional target point is determined based on step S113a, the position update information of each of the three-dimensional points may be generated based on the positional relationship between each of the three-dimensional points and the three-dimensional target point, respectively.
That is, three-dimensional points at different positions relative to the three-dimensional target point may have different corresponding position update information.
It is understood that the specific manner of executing the step S113a to determine the three-dimensional target point is not limited, and may be selected according to the actual application requirements.
For example, in an alternative example, in order to reduce the amount of calculation for performing step S113a and improve its efficiency, only one three-dimensional target point may be determined, for example taking the contact point between the moving object and the target object as the three-dimensional target point.
For another example, in another alternative example, in order to enrich the update (three-dimensional change) of the target object caused by the moving object and thereby increase the interest of the target game, step S113a may include the following sub-steps:
firstly, motion information of a moving object can be acquired based on the trigger instruction, wherein the moving object and the target object are different game objects in the first game video frame, and the moving object is in contact with the target object in the first game video frame;
second, a plurality of three-dimensional target points may be determined on the target object based on the motion information.
A plurality of pieces of position update information are determined for each three-dimensional point. The plurality of pieces of position update information of one three-dimensional point are generated based on the plurality of three-dimensional target points respectively and are used to perform the update processing a plurality of times, where each update processing takes the three-dimensional position coordinates obtained by the previous update processing as its input, so as to generate a plurality of second game video frames.
That is, for each of the plurality of three-dimensional target points, one piece of position update information may be generated for each three-dimensional point based on that three-dimensional target point. For example, if there are 3 three-dimensional target points, each three-dimensional point has 3 pieces of position update information; if there are 5 three-dimensional target points, each three-dimensional point has 5 pieces of position update information.
Thus, if there are N three-dimensional target points and M three-dimensional points, the generated position update relationships can be shown in the following table, where P(m, n) denotes the position update information generated for the m-th three-dimensional point based on the n-th three-dimensional target point:

                 Target point 1   Target point 2   ...   Target point N
3D point 1       P(1, 1)          P(1, 2)          ...   P(1, N)
3D point 2       P(2, 1)          P(2, 2)          ...   P(2, N)
...              ...              ...              ...   ...
3D point M       P(M, 1)          P(M, 2)          ...   P(M, N)
In the above sub-steps, the specific content of the motion information is not limited, and may include, but is not limited to, speed information, direction information, and the like.
In addition, in the sub-step, the specific manner of determining the plurality of three-dimensional target points is not limited, and may be selected according to the actual application requirements.
For example, in an alternative example, several points may be selected from the plurality of three-dimensional points, and the selected points may be used as the three-dimensional target points.
For another example, in another alternative example, the point at which the target object is in contact with the moving object may be determined first, and that point may be used as the first three-dimensional target point. Then, at least one further three-dimensional target point may be determined based on the motion information.
In detail, in a specific application example, suppose the motion information includes direction information and speed information. When the direction information indicates leftward movement, the second three-dimensional target point determined may be located to the left of the first three-dimensional target point; when the direction information indicates rightward movement, the second three-dimensional target point determined may be located to the right of the first three-dimensional target point.
When the speed information indicates a higher speed, more three-dimensional target points may be determined; when the speed information indicates a lower speed, fewer three-dimensional target points may be determined.
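The direction- and speed-dependent selection of three-dimensional target points described above can be sketched as follows. This is a minimal illustration rather than the patent's concrete implementation: the function name `determine_target_points` and the way the point count scales with speed (the `points_per_speed` and `spacing` parameters) are assumptions made for the example.

```python
import math

def determine_target_points(contact_point, direction, speed,
                            spacing=0.1, points_per_speed=2.0):
    """Sketch: take the contact point as the first three-dimensional
    target point, then add further target points offset along the
    movement direction; a higher speed yields more target points.
    All parameter names and scalings here are illustrative."""
    norm = math.sqrt(sum(c * c for c in direction))
    unit = tuple(c / norm for c in direction)

    # The number of additional target points grows with the speed.
    n_extra = max(1, round(speed * points_per_speed))

    targets = [tuple(contact_point)]
    for i in range(1, n_extra + 1):
        # Offset each further target point along the motion direction
        # (e.g. to the left when the direction information is "left").
        targets.append(tuple(p + i * spacing * u
                             for p, u in zip(contact_point, unit)))
    return targets
```

With this sketch, a faster impact simply yields a longer list of target points along the direction of motion than a slower one.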
In this way, the impact of the moving object on the target object can be presented, making the target game more realistic and thereby more interesting.
It is understood that the specific manner of executing step S113b to generate the location updating information based on the location relationship is not limited, and may be selected according to the actual application requirement.
For example, in an alternative example, when the position relationship is the distance between the three-dimensional point and the three-dimensional target point, and the position update information is a change value of the three-dimensional coordinate, the change value may be positively correlated with the distance; that is, a three-dimensional point farther from the three-dimensional target point has a larger change value, and a three-dimensional point closer to the three-dimensional target point has a smaller change value.
For another example, in another alternative example, step S113b may include the following sub-steps:
firstly, the position relation between each three-dimensional point and the three-dimensional target point can be obtained;
secondly, for each three-dimensional point, position update information of the three-dimensional point can be generated based on a preset fluctuation function and a position relationship corresponding to the three-dimensional point.
Based on this, since the position update information is generated using a fluctuation function, the plurality of generated pieces of position update information exhibit a wave-like pattern, so that the target object appears to fluctuate (oscillate) under the impact of the moving object. This gives game users a better visual experience and increases their stickiness to the target game.
The specific type of the fluctuation function is not limited, and may be selected according to the actual application requirements, such as a sine function or a logarithmic function.
In detail, in a specific application example, when the position relationship is a distance between the three-dimensional point and the three-dimensional target point, and the position update information is a variation value of a three-dimensional coordinate, the distance may be used as an independent variable of the fluctuation function, and the variation value may be used as a dependent variable of the fluctuation function.
In this way, after the change value (i.e., the position update information) is obtained, the change value and the three-dimensional position coordinates of the three-dimensional point may be superimposed, so as to obtain the three-dimensional position coordinates after the update processing of the three-dimensional point.
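The distance-as-independent-variable scheme above could look as follows, assuming a sine fluctuation function; the `amplitude` and `frequency` parameters are illustrative assumptions, not values given by the patent.

```python
import math

def position_update_value(point, target_point, amplitude=0.5, frequency=4.0):
    """Sketch of the fluctuation-function scheme: the distance between a
    three-dimensional point and the three-dimensional target point is the
    independent variable, and the change value Dz of the depth coordinate
    is the dependent variable. `amplitude` and `frequency` are
    illustrative parameters."""
    d = math.dist(point, target_point)
    return amplitude * math.sin(frequency * d)

def update_depth(point, target_point):
    """Superimpose the change value on the depth coordinate only,
    i.e. (x, y, z) -> (x, y, z + Dz)."""
    x, y, z = point
    dz = position_update_value(point, target_point)
    return (x, y, z + dz)
```

Because equidistant points receive equal change values, rings of points around the target point rise and fall together, which is what produces the visible wave.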
Also, considering that in the foregoing example there may be a plurality of three-dimensional target points, different fluctuation functions may be configured when calculating the position update information based on different three-dimensional target points, in order to create a more realistic oscillation effect after the target object is hit by the moving object.
For example, in an alternative example, different amplitude values may be set to form different fluctuation functions. For instance, following the order in which the three-dimensional target points are determined, for any two adjacent three-dimensional target points, the amplitude value of the fluctuation function corresponding to the earlier three-dimensional target point may be greater than that of the fluctuation function corresponding to the later one. This creates an oscillation whose amplitude decreases over time until the oscillation finally stops.
It should be noted that, in step S120, a specific manner of performing the update processing on the three-dimensional position coordinate is not limited, and may be selected according to actual application requirements.
For example, in an alternative example, the three-dimensional position coordinates may include lateral position coordinates, longitudinal position coordinates, and depth position coordinates, which are all updated.
Wherein a plane formed by the lateral position coordinates and the longitudinal position coordinates may be parallel to a display screen of the first game video frame (such as the display screen of the electronic device 10 described above). The depth position coordinate may be perpendicular to the display screen.
For another example, in another alternative example, in order to reduce the amount of calculation performed in step S120 and improve the efficiency of performing step S120 while still achieving a three-dimensional stereoscopic change based on the update processing, only the depth position coordinates included in the three-dimensional position coordinates may be updated. Thus, in conjunction with fig. 6, step S120 may include step S121 and step S123 to perform the update processing on the three-dimensional position coordinates, which is described in detail below.
Step S121, obtaining a depth position coordinate included in the three-dimensional position coordinate of each three-dimensional point.
In this embodiment, after the position update information of each three-dimensional point is acquired based on step S110, in order to facilitate the update processing of the three-dimensional position coordinates, the depth position coordinate of each three-dimensional point may be acquired.
Step S123, updating the depth position coordinates of each three-dimensional point based on the position update information of each three-dimensional point, to obtain new depth position coordinates of each three-dimensional point.
In this embodiment, after the depth position coordinates of each of the three-dimensional points are acquired based on step S121, the depth position coordinates of each of the three-dimensional points may be updated based on the position update information of the three-dimensional point. In this way, a new depth position coordinate of each of the three-dimensional points can be obtained, thereby obtaining a new three-dimensional position coordinate of each of the three-dimensional points.
Here, the position update information is a change value of the depth position coordinate of the corresponding three-dimensional point.
For example, in a specific application example, if the three-dimensional position coordinate of a three-dimensional point before the update process is (x, y, z) and the change value of the depth position coordinate of the three-dimensional point is Dz, the three-dimensional position coordinate of the three-dimensional point after the update process (i.e., the new three-dimensional position coordinate) may be (x, y, z + Dz).
It should be noted that, in step S130, a specific manner of generating the new target object is not limited, and may be selected according to actual application requirements, for example, different selections may be made based on different specific shapes of the target object.
For example, in an alternative example, the target object may be a planar plate-shaped structure (as shown in fig. 7), and thus, after the updated three-dimensional position coordinates are obtained, a filling process may be performed based on three-dimensional points corresponding to the updated three-dimensional position coordinates, so as to obtain a new plate-shaped structure (as shown in fig. 7).
For another example, in another alternative example, the target object may be a planar mesh structure or a three-dimensional mesh structure. Wherein, when the target object is required to have a stereoscopic effect, the target object can be configured into a three-dimensional mesh structure.
Based on this, in conjunction with fig. 8, step S130 may include step S131 and step S133 to generate the new target object, which is described in detail below.
Step S131, performing connection processing on the three-dimensional points corresponding to the plurality of updated three-dimensional position coordinates.
In this embodiment, after obtaining a plurality of three-dimensional position coordinates based on step S120, a connection process may be performed on three-dimensional points corresponding to the plurality of three-dimensional position coordinates, that is, the plurality of three-dimensional points may be connected through a connection line.
Step S133, a new three-dimensional mesh structure is obtained based on the plurality of three-dimensional points and the connection lines obtained by performing the connection processing on the three-dimensional points, and the new three-dimensional mesh structure is used as a new target object.
In this embodiment, after the plurality of three-dimensional points are subjected to the connection processing based on step S131, the new three-dimensional mesh structure may be obtained based on the plurality of three-dimensional points and the connection lines obtained by performing the connection processing for each three-dimensional point, thereby obtaining the new target object.
That is, the new target object may also be a three-dimensional mesh structure.
Optionally, the specific manner of performing the connection processing on the three-dimensional points in step S131 is not limited, and may be selected according to the actual application requirements.
For example, in an alternative example, every two adjacent three-dimensional points may be connected (as shown in fig. 9).
For another example, in another alternative example, the target object and the new target object may be nets (such as football nets, table tennis nets, badminton nets, etc.). In order to give the new target object a more realistic net effect, in conjunction with fig. 10 and fig. 11, step S131 may include step S131a and step S131b to perform the connection processing on the three-dimensional points, which is described below.
Step S131a, a first direction and a second direction are determined.
In this embodiment, after obtaining the plurality of three-dimensional position coordinates after the update processing based on step S120, two directions, such as the first direction and the second direction, may be determined first.
Wherein the first direction (X direction as shown in fig. 10) is different from the second direction (Y direction as shown in fig. 10). The first direction and the second direction may be the same as or different from the direction in which the lateral position coordinate is located and the direction in which the longitudinal position coordinate is located, and are not particularly limited herein.
Step S131b is to perform connection processing on the three-dimensional points corresponding to the plurality of three-dimensional position coordinates after the update processing, along the first direction and the second direction, respectively.
In this embodiment, after the first direction and the second direction are determined based on step S131a, the three-dimensional points corresponding to the plurality of three-dimensional position coordinates after the update processing may be connected along the first direction and the second direction, respectively.
It is understood that, among the three-dimensional points corresponding to the plurality of updated three-dimensional position coordinates, some connecting lines between three-dimensional points may not be parallel to the plane formed by the first direction and the second direction. Therefore, when connecting these three-dimensional points, the connection processing can be performed along the first direction and the second direction based on the projections of the three-dimensional points onto that plane.
In this way, the projection of each obtained connecting line onto the plane in which the first direction and the second direction lie is parallel to either the first direction or the second direction.
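Assuming the three-dimensional points are stored row by row on a grid, the projection-based connection along the first and second directions could be sketched as follows; the row-major storage convention and the index-pair edge representation are assumptions made for illustration.

```python
def connect_grid_points(points, n_cols):
    """Sketch of steps S131a/S131b: the three-dimensional points are
    assumed to be stored row by row (n_cols points per row). Neighbours
    are connected along the first direction (within a row) and along the
    second direction (within a column), so the projection of every
    connecting line onto the plane of the two directions is parallel to
    one of them, regardless of the points' depth coordinates."""
    n_rows = len(points) // n_cols
    edges = []
    for r in range(n_rows):
        for c in range(n_cols):
            i = r * n_cols + c
            if c + 1 < n_cols:          # first direction (X)
                edges.append((i, i + 1))
            if r + 1 < n_rows:          # second direction (Y)
                edges.append((i, i + n_cols))
    return edges
```

The returned index pairs form the net-like wireframe: row edges and column edges only, with no diagonals.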
Further, it should be noted that, in order to ensure effective execution of step S110, a three-dimensional point having the three-dimensional position coordinates needs to be generated in advance. The specific mode for generating the three-dimensional point is not limited, and can be selected according to the actual application requirements.
For example, in an alternative example, a plurality of three-dimensional position coordinates may be generated directly, resulting in a plurality of said three-dimensional points.
For another example, in another alternative example, in combination with fig. 12, the game running method may further include step S150 and step S160 to generate the three-dimensional point, which is described below.
Step S150, generating a two-dimensional mesh structure, and determining a transverse position coordinate and a longitudinal position coordinate of each two-dimensional point included in the two-dimensional mesh structure.
In this embodiment, a two-dimensional mesh structure including a plurality of two-dimensional points may be generated, and a lateral position coordinate and a longitudinal position coordinate may be determined for each of the two-dimensional points.
Step S160, generating a depth position coordinate for each of the two-dimensional points to obtain a plurality of three-dimensional points including a horizontal position coordinate, a vertical position coordinate, and a depth position coordinate.
In this embodiment, after determining the lateral position coordinates and the longitudinal position coordinates of each of the two-dimensional points based on step S150, depth position coordinates may also be generated for each of the two-dimensional points. In this manner, a plurality of two-dimensional points having only lateral position coordinates and longitudinal position coordinates may be converted into a corresponding plurality of three-dimensional points having lateral position coordinates, longitudinal position coordinates, and depth position coordinates.
Wherein the plurality of depth position coordinates includes at least two different depth position coordinates. In this way, the target object generated based on the plurality of three-dimensional points corresponding to the plurality of three-dimensional position coordinates can be made to exhibit a three-dimensional stereoscopic effect.
Optionally, the specific manner of executing step S160 to generate the depth position coordinate is not limited, and may be selected according to the actual application requirement, for example, different selections may be made based on different requirements of the presented stereoscopic effect.
For example, in an alternative example, at least one depth position coordinate may be selected from the plurality of depth position coordinates and assigned a value different from the other depth position coordinates, so that the plurality of depth position coordinates include at least two different values.
For another example, in another alternative example, in order to enable the target object to present a three-dimensional stereoscopic effect and belong to a planar three-dimensional structure, in conjunction with fig. 13, step S160 may include step S161 and step S163 to generate the depth position coordinates, as described below.
Step S161, acquiring a transverse direction of the transverse position coordinate and/or a longitudinal direction of the longitudinal position coordinate.
In this embodiment, after the transverse position coordinates and the longitudinal position coordinates are determined based on step S150, the transverse direction of the transverse position coordinates, the longitudinal direction of the longitudinal position coordinates, or both may be acquired.
Step S163 of determining different depth position coordinates for the plurality of two-dimensional points in the lateral direction and/or the longitudinal direction, respectively.
In this embodiment, after the transverse direction and/or the longitudinal direction is obtained based on step S161, depth position coordinates may be determined for the plurality of two-dimensional points in different ways according to the requirements: different depth position coordinates may be determined along the transverse direction (for example, the depth position coordinates corresponding to different transverse position coordinates differ and may vary linearly, while the depth position coordinates corresponding to the same transverse position coordinate are the same); or different depth position coordinates may be determined along the longitudinal direction (for example, the depth position coordinates corresponding to different longitudinal position coordinates differ and may vary linearly, while the depth position coordinates corresponding to the same longitudinal position coordinate are the same); or different depth position coordinates may be determined along both the transverse direction and the longitudinal direction (for example, the depth position coordinates corresponding to different transverse position coordinates differ and may vary linearly, and the depth position coordinates corresponding to different longitudinal position coordinates differ and may vary linearly).
Based on the above, the different depth position coordinates can vary linearly, so that the target object formed by the three-dimensional points presents both a planar structure and a three-dimensional stereoscopic effect.
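Steps S150 through S163 could be sketched as follows, assuming the depth coordinate varies linearly along the transverse direction; the mesh spacing and the `depth_slope` parameter are illustrative assumptions.

```python
def generate_mesh_points(n_cols, n_rows, spacing=1.0, depth_slope=0.2):
    """Sketch of steps S150/S160: first generate a two-dimensional mesh
    of (x, y) points, then assign each point a depth coordinate z that
    varies linearly along the transverse (x) direction, so the resulting
    three-dimensional points contain at least two different depth
    coordinates and form a tilted planar structure. `spacing` and
    `depth_slope` are illustrative parameters."""
    points = []
    for r in range(n_rows):
        for c in range(n_cols):
            x = c * spacing               # transverse position coordinate
            y = r * spacing               # longitudinal position coordinate
            z = depth_slope * x           # depth varies linearly with x
            points.append((x, y, z))
    return points
```

Points sharing a transverse coordinate share a depth, matching the "same transverse position coordinate, same depth" option described above.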
With reference to fig. 14, the embodiment of the present application further provides a game running apparatus 100 applicable to the electronic device 10. The game execution apparatus 100 may include an information acquisition module 110, an update processing module 120, and a video frame generation module 130.
The information obtaining module 110 is configured to obtain position update information of each three-dimensional point of a target object in a first game video frame, where the plurality of pieces of position update information include at least two different pieces. In this embodiment, the information obtaining module 110 may be configured to perform step S110 shown in fig. 2, and reference may be made to the foregoing description of step S110 for the relevant content of the information obtaining module 110.
The update processing module 120 is configured to update the three-dimensional position coordinates of each three-dimensional point based on the position update information of the three-dimensional point, and generate a new target object based on the updated three-dimensional position coordinates. In this embodiment, the update processing module 120 may be configured to execute step S120 shown in fig. 2, and reference may be made to the foregoing description of step S120 for relevant contents of the update processing module 120.
The video frame generating module 130 is configured to generate a second game video frame based on the new target object. In this embodiment, the video frame generation module 130 may be configured to execute step S130 shown in fig. 2, and reference may be made to the foregoing description of step S130 for relevant contents of the video frame generation module 130.
Further, in some examples, the game execution apparatus 100 may further include other modules, and the specific functions of the other modules are not limited and may be selected according to actual application requirements. For example, it can be used to:
generating a two-dimensional mesh structure, and determining a transverse position coordinate and a longitudinal position coordinate of each two-dimensional point included in the two-dimensional mesh structure, wherein the two-dimensional mesh structure comprises a plurality of two-dimensional points; generating a depth position coordinate for each two-dimensional point to obtain a plurality of three-dimensional points comprising a transverse position coordinate, a longitudinal position coordinate and a depth position coordinate, wherein the plurality of depth position coordinates at least comprises two different depth position coordinates.
For a specific way of obtaining a plurality of three-dimensional points including the horizontal position coordinate, the vertical position coordinate, and the depth position coordinate, the explanation of the game running method (for example, the explanation of step S150 and step S160) may be referred to, and details are not repeated herein.
In an embodiment of the present application, there is also provided a computer-readable storage medium, in which a computer program is stored, and the computer program executes the steps of the game running method when running.
The steps executed when the computer program runs are not described in detail herein, and reference may be made to the above explanation of the game running method.
In summary, according to the game running method and apparatus, the electronic device, and the storage medium provided by the present application, the position update information of each three-dimensional point of the target object is obtained, and the target object is updated (i.e., a new target object is generated) based on the position update information, so that the first game video frame of the target game is updated to the second game video frame. Because the plurality of pieces of position update information include at least two different pieces, at least two three-dimensional points undergo different three-dimensional changes between the first game video frame and the second game video frame, thereby presenting a three-dimensional stereoscopic effect. Therefore, while ensuring that the running of the target game presents a three-dimensional effect, the target game does not need to depend on a game engine, and hence does not need to depend on running equipment with high performance. This solves the problem that existing game running technology places high performance requirements on the running equipment of the game, reduces the application restrictions on the target game, broadens its application range, and facilitates popularization and application, so it has high practical value.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus and method embodiments described above are illustrative only, as the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, an electronic device, or a network device) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (14)

1. A game execution method, comprising:
acquiring position update information of each three-dimensional point of a target object in a first game video frame, wherein the position update information comprises at least two different pieces of position update information;
updating the three-dimensional position coordinates of each three-dimensional point based on the position updating information of the three-dimensional point, and generating a new target object based on the updated three-dimensional position coordinates;
a second game video frame is generated based on the new target object.
2. A game execution method according to claim 1, wherein the step of acquiring the position update information of each three-dimensional point of the target object in the first game video frame includes:
acquiring a trigger instruction of a target object, wherein the target object is a planar game object in a first game video frame;
generating location update information for each three-dimensional point based on the triggering instruction, wherein the target object includes a plurality of the three-dimensional points.
3. A game execution method according to claim 2, wherein the step of generating the position update information of each three-dimensional point based on the trigger instruction includes:
determining a three-dimensional target point on the target object based on the trigger instruction;
and generating position updating information of the three-dimensional point based on the position relation between the three-dimensional point and the three-dimensional target point aiming at each three-dimensional point.
4. The game execution method according to claim 3, wherein the step of generating, for each of the three-dimensional points, the position update information of the three-dimensional point based on the positional relationship between the three-dimensional point and the three-dimensional target point comprises:
acquiring the positional relationship between each three-dimensional point and the three-dimensional target point; and
for each of the three-dimensional points, generating the position update information of the three-dimensional point based on a preset fluctuation function and the positional relationship corresponding to the three-dimensional point.
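The per-point update of claims 3 and 4 can be illustrated with a minimal Python sketch. This is not the patented implementation: the damped travelling sine wave standing in for the "preset fluctuation function", and all names and constants (`wave_offset`, `amplitude`, `damping`, and so on), are assumptions for illustration only.

```python
import math

def wave_offset(point, target, t, amplitude=1.0, wavelength=2.0,
                speed=3.0, damping=0.5):
    """Position update (a depth offset) for one three-dimensional point,
    derived from its positional relationship to the three-dimensional
    target point and a preset fluctuation function -- here, a damped
    travelling sine wave (an assumed choice)."""
    dx = point[0] - target[0]
    dy = point[1] - target[1]
    r = math.hypot(dx, dy)            # positional relationship: planar distance
    k = 2 * math.pi / wavelength      # spatial frequency of the wave
    return amplitude * math.exp(-damping * r) * math.sin(k * r - speed * t)

# Points at different distances from the target point receive different
# update information, so the surface ripples outward from the target point.
target = (0.0, 0.0, 0.0)
offsets = [wave_offset((x, 0.0, 0.0), target, t=0.5) for x in (0.0, 1.0, 2.0)]
```

Because the offset depends on each point's distance to the target point, the claim's requirement that the pieces of position update information include at least two different values is satisfied automatically.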
5. The game execution method according to claim 3, wherein the step of determining a three-dimensional target point on the target object based on the trigger instruction comprises:
acquiring motion information of a moving object based on the trigger instruction, wherein the moving object and the target object are different game objects in the first game video frame, and the moving object is in contact with the target object in the first game video frame; and
determining a plurality of three-dimensional target points on the target object based on the motion information, wherein the first three-dimensional target point determined is the point at which the target object is in contact with the moving object;
wherein a plurality of pieces of position update information determined for each of the three-dimensional points are generated based on the plurality of three-dimensional target points, respectively, and are used for performing the update processing a plurality of times, the three-dimensional position coordinates subjected to a subsequent update processing being the three-dimensional position coordinates obtained by the preceding update processing, so as to generate a plurality of second game video frames.
6. The game execution method according to claim 2, wherein the step of acquiring the trigger instruction of the target object comprises:
acquiring a relative positional relationship between the target object and a moving object, wherein the moving object is a game object moving in a direction approaching the target object in the first game video frame; and
generating the trigger instruction of the target object when the relative positional relationship meets a preset condition.
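One way the trigger condition of claim 6 could be realized is a simple proximity test, sketched below in Python. The distance threshold as the "preset condition", and the names `should_trigger` and `threshold`, are assumptions, not details from the patent.

```python
def should_trigger(moving_pos, target_pos, threshold=0.1):
    """Issue a trigger instruction once the moving object (e.g. a ball
    flying toward a planar net) comes within `threshold` of the target
    object -- one possible 'preset condition' on their relative
    positional relationship. `threshold` is an assumed tuning value."""
    dist = sum((a - b) ** 2 for a, b in zip(moving_pos, target_pos)) ** 0.5
    return dist <= threshold

near = should_trigger((0.0, 0.0, 0.05), (0.0, 0.0, 0.0))  # in contact: trigger
far = should_trigger((0.0, 0.0, 5.0), (0.0, 0.0, 0.0))    # still approaching: no trigger
```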
7. The game execution method according to any one of claims 1 to 6, wherein the step of updating the three-dimensional position coordinates of each of the three-dimensional points based on the position update information of the three-dimensional point comprises:
acquiring a depth position coordinate included in the three-dimensional position coordinates of each three-dimensional point; and
updating the depth position coordinate of each three-dimensional point based on the position update information of that three-dimensional point, so as to obtain a new depth position coordinate of each three-dimensional point.
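Claim 7 confines the update to the depth component of each coordinate, which keeps the planar layout of the object intact while it deforms. A minimal sketch, assuming an (x, y, z) tuple representation with z as the depth coordinate:

```python
def update_depth(points, offsets):
    """Apply one round of update processing: only the depth (z) component
    of each three-dimensional position coordinate changes; the transverse
    (x) and longitudinal (y) components stay fixed."""
    return [(x, y, z + dz) for (x, y, z), dz in zip(points, offsets)]

points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
new_points = update_depth(points, [0.3, -0.1])
# new_points == [(0.0, 0.0, 0.3), (1.0, 0.0, -0.1)]
```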
8. The game execution method according to any one of claims 1 to 6, wherein the target object is a three-dimensional mesh structure, and the step of generating a new target object based on the updated three-dimensional position coordinates comprises:
performing connection processing on the three-dimensional points corresponding to the updated three-dimensional position coordinates; and
obtaining a new three-dimensional mesh structure based on the plurality of three-dimensional points and the connecting lines obtained by the connection processing, and taking the new three-dimensional mesh structure as the new target object.
9. The game execution method according to claim 8, wherein the step of performing the connection processing on the three-dimensional points corresponding to the plurality of updated three-dimensional position coordinates comprises:
determining a first direction and a second direction, wherein the first direction is different from the second direction; and
connecting the three-dimensional points corresponding to the updated three-dimensional position coordinates along the first direction and the second direction, respectively.
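The two-direction connection processing of claims 8 and 9 amounts to rebuilding a grid wireframe over the updated points. The sketch below assumes a row-major rectangular grid and horizontal/vertical as the two directions; the function name and indexing scheme are illustrative, not from the patent.

```python
def grid_edges(rows, cols):
    """Connect mesh vertices along a first (horizontal) and a second
    (vertical) direction, yielding the connecting lines of the new
    three-dimensional mesh structure. Vertices are indexed row-major:
    index = r * cols + c."""
    edges = []
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            if c + 1 < cols:
                edges.append((i, i + 1))     # first direction: next column
            if r + 1 < rows:
                edges.append((i, i + cols))  # second direction: next row
    return edges

# A 2x3 grid has (cols-1)*rows = 4 horizontal and cols*(rows-1) = 3
# vertical connecting lines.
edges = grid_edges(2, 3)
```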
10. The game execution method according to any one of claims 1 to 6, wherein, before the step of acquiring the position update information of each three-dimensional point of the target object in the first game video frame, the game execution method further comprises:
generating a two-dimensional mesh structure, and determining a transverse position coordinate and a longitudinal position coordinate of each two-dimensional point included in the two-dimensional mesh structure, wherein the two-dimensional mesh structure comprises a plurality of two-dimensional points; and
generating a depth position coordinate for each two-dimensional point, so as to obtain a plurality of three-dimensional points each comprising a transverse position coordinate, a longitudinal position coordinate and a depth position coordinate, wherein the plurality of depth position coordinates comprise at least two different depth position coordinates.
11. The game execution method according to claim 10, wherein the step of generating a depth position coordinate for each of the two-dimensional points comprises:
acquiring the transverse direction of the transverse position coordinates and/or the longitudinal direction of the longitudinal position coordinates; and
determining different depth position coordinates for the plurality of two-dimensional points along the transverse direction and/or the longitudinal direction, respectively.
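The initialization of claims 10 and 11 can be sketched as laying out a 2D grid and then assigning depths that vary along one direction. All parameters (`spacing`, `depth_step`) and the linear depth profile are assumptions made for illustration:

```python
def build_initial_mesh(rows, cols, spacing=1.0, depth_step=0.1):
    """Lay out a two-dimensional grid of (x, y) points, then assign each
    point a depth coordinate that varies along the transverse direction,
    so the resulting three-dimensional points hold at least two distinct
    depth values before any update processing."""
    mesh = []
    for r in range(rows):
        for c in range(cols):
            x = c * spacing      # transverse position coordinate
            y = r * spacing      # longitudinal position coordinate
            z = c * depth_step   # depth varies along the transverse direction
            mesh.append((x, y, z))
    return mesh

mesh = build_initial_mesh(2, 3)
depths = {z for (_, _, z) in mesh}  # three distinct depth values across columns
```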
12. A game execution apparatus, comprising:
an information acquisition module, configured to acquire position update information of each three-dimensional point of a target object in a first game video frame, wherein the pieces of position update information comprise at least two different pieces of position update information;
an update processing module, configured to update the three-dimensional position coordinates of each three-dimensional point based on the position update information of that three-dimensional point, and to generate a new target object based on the updated three-dimensional position coordinates; and
a video frame generation module, configured to generate a second game video frame based on the new target object.
13. An electronic device, comprising:
a memory for storing a computer program;
a processor coupled to the memory and configured to execute the computer program to implement the game execution method of any one of claims 1 to 11.
14. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed, implements the game execution method of any one of claims 1 to 11.
CN202010097096.3A 2020-02-17 2020-02-17 Game running method and device, electronic equipment and storage medium Pending CN111282269A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010097096.3A CN111282269A (en) 2020-02-17 2020-02-17 Game running method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111282269A (en) 2020-06-16

Family ID: 71017965

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010097096.3A Pending CN111282269A (en) 2020-02-17 2020-02-17 Game running method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111282269A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010019333A1 (en) * 2000-01-27 2001-09-06 Square Co. Ltd. Methods and apparatus for transforming three-dimensional objects in video games
JP2008257746A (en) * 2008-06-26 2008-10-23 Nintendo Co Ltd Input processing program and input processor

Similar Documents

Publication Publication Date Title
CN111815755B (en) Method and device for determining blocked area of virtual object and terminal equipment
US11301954B2 (en) Method for detecting collision between cylindrical collider and convex body in real-time virtual scenario, terminal, and storage medium
CN108714303B (en) Collision detection method in game, apparatus and computer-readable storage medium
EP3965431A1 (en) Video data processing method and related device
CN103157281B (en) Display method and display equipment of two-dimension game scene
CN107875630B (en) Rendering area determination method and device
CN111298443B (en) Game object control method and device, electronic equipment and storage medium
CN108159691A (en) Game state synchronous method and system
CN112807681B (en) Game control method, game control device, electronic equipment and storage medium
CN110120087B (en) Label marking method and device for three-dimensional virtual sand table and terminal equipment
CN112882576B (en) AR interaction method and device, electronic equipment and storage medium
CN108646917B (en) Intelligent device control method and device, electronic device and medium
CN103530243A (en) System, method, and computer program product for determining settings for a device
CN106611443B (en) Three-dimensional topographic point picking method and device
CN108159694B (en) Flexible body flutter simulation method, flexible body flutter simulation device and terminal equipment
KR20210141971A (en) Method, apparatus, terminal, and storage medium for selecting virtual objects
CN111282269A (en) Game running method and device, electronic equipment and storage medium
CN111353007B (en) Geographic element pickup method, coding method and device based on vector slicing and electronic equipment
CN111744183A (en) Illumination sampling method and device in game and computer equipment
CN109718543B (en) Same-screen processing method and device
KR20180088876A (en) Image processing method and apparatus
CN114288662A (en) NPC behavior control method and device and electronic equipment
CN114470786A (en) Frame synchronization data processing method and device, readable medium and electronic equipment
CN114119846A (en) Hierarchical level detail model generation method and device
CN116506680B (en) Comment data processing method and device for virtual space and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination